Kubernetes Persistent Volumes
Persistent Volumes in Kubernetes let operators store data outside of Pods, so that the data persists across updates and reboots. A Persistent Volume can also be mounted on multiple Pods.
The lifecycle of a volume and claim and the communication between PVs and PVCs consists of four stages:
- Provisioning
- The process of creating a storage volume. Provisioning can be done in two ways: statically or dynamically. Leaseweb Managed Kubernetes uses dynamic provisioning, which is described in more detail in the Dynamic Provisioning section.
- Binding
- When a PersistentVolumeClaim requests a specific amount of storage with certain access modes, a control loop in the control plane finds a matching PV and binds the two together.
- Using
- Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for the Pod.
- Reclaiming
- When users are done with a volume, they can delete the PVC object from the API, which allows reclamation of the resource. The Reclaim Policy tells the cluster what to do with a released PersistentVolume. There are three reclaim policies:
- Retain: Manual deletion by the cluster administrator. The volume will not be automatically deleted.
- Delete: The volume will be automatically deleted when the corresponding PersistentVolumeClaim is deleted.
- Recycle: Deprecated, and not supported in our environment. The recommended approach is to use dynamic provisioning instead, which is available in our clusters.
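As an illustration of where the reclaim policy lives, the fragment below sketches a statically provisioned PV; the field names and values follow the Kubernetes API, while the volume name and capacity are hypothetical (in Leaseweb Managed Kubernetes, PVs are created dynamically and inherit their reclaim policy from the StorageClass):

```yaml
# Illustrative fragment only: a statically provisioned PV would also need
# a volume source (e.g. a csi block), omitted here for brevity.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                        # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # Retain, Delete, or Recycle (deprecated)
```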
See the official Kubernetes documentation on Persistent Volumes for more information.
Dynamic Provisioning
With dynamic provisioning, PVs are created automatically using a StorageClass, instead of PersistentVolumes being provisioned manually. Kubernetes dynamically provisions a volume specifically for each storage request, with the help of a StorageClass. Leaseweb Managed Kubernetes uses this approach, and “cloudstack-custom” is currently the default StorageClass.
StorageClass
A StorageClass is a Kubernetes resource that enables dynamic storage provisioning. StorageClass provides a way for administrators to describe the “classes” of storage they offer. Each StorageClass contains the fields “provisioner”, “parameters”, and “reclaimPolicy”, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
In Leaseweb Managed Kubernetes, the StorageClass “cloudstack-custom” is the one you can choose. It is defined as:
Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
parameters:
  csi.cloudstack.apache.org/disk-offering-id: <your domain offering id>
provisioner: csi.cloudstack.apache.org
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
Field | Value | Explanation |
---|---|---|
provisioner | csi.cloudstack.apache.org | The name of our driver. We are using the CloudStack CSI Driver. |
parameters | csi.cloudstack.apache.org/disk-offering-id | The value is set automatically to your CloudStack domain disk offering ID. |
reclaimPolicy | Delete | PersistentVolumes that are dynamically created by a StorageClass have the reclaim policy specified in the class's reclaimPolicy field. |
volumeBindingMode | WaitForFirstConsumer | This must be WaitForFirstConsumer in order to delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. It enables the provisioning of volumes with respect to topology constraints. |
Using Kubernetes Persistent Volumes
- To use Persistent Volumes, you must declare a PersistentVolumeClaim to request storage space.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: cloudstack-custom
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
- The only special part here, as mentioned in the previous section, is “cloudstack-custom”, a pre-defined StorageClass declared by Leaseweb when your cluster was provisioned.
- The other interesting part is spec.resources.requests.storage, which requests exactly 1Gi for the volume; this lets you choose the specific size you desire for that volume.
- Then, to use this PersistentVolumeClaim in a container, reference it by name in your Pod spec, such as:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: example
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: example-volume
      stdin: true
      stdinOnce: true
      tty: true
  volumes:
    - name: example-volume
      persistentVolumeClaim:
        claimName: example-pvc
- Once you have applied both of these configurations using kubectl apply -f, a new Pod will be created and the volume will be mounted under /data.
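To sanity-check the result, you can inspect the claim and the mount. The commands below are a sketch using the resource names from the examples above; the exact output will differ per cluster:

```shell
# Watch the PVC go from Pending (WaitForFirstConsumer) to Bound
# once the Pod is scheduled
kubectl get pvc example-pvc

# Confirm the volume is mounted inside the Pod
kubectl exec example-pod -- df -h /data
```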
Persistent Volume Deletion
During Cluster Destruction
When a cluster is being destroyed, all attached PVCs (Persistent Volume Claims) are intended to be deleted along with the corresponding volumes. However, this deletion does not occur in the following two scenarios:
- When static Pods with volumes are deleted without also deleting the related PVCs.
- When scaling a Deployment or StatefulSet to zero.
In both cases mentioned above, it is crucial to remove all dangling PVCs, and those associated with a Deployment or StatefulSet that has been scaled down, before initiating cluster destruction. Neglecting to do so can leave leaked volumes behind in the infrastructure, consequently incurring additional charges.
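One way to spot dangling claims before destroying a cluster is to list PVCs across all namespaces and delete those that are no longer needed. This is a sketch; the claim name and namespace below are placeholders:

```shell
# List every PVC in the cluster, across all namespaces
kubectl get pvc --all-namespaces

# Delete any claim that is no longer needed (placeholder name and namespace)
kubectl delete pvc example-pvc --namespace default
```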
Manual PersistentVolume Deletion
To delete a volume in a Kubernetes cluster, you typically delete the PersistentVolumeClaim (PVC) that is using the volume. When you delete a PVC, the associated Persistent Volume (PV) and the actual storage resource are deleted as well, because reclaimPolicy is set to Delete in the StorageClass of the PV.
# Listing PV and PVC
❯ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
example-pvc Bound pvc-fad0126a-811b-4815-823c-7758fe25fd33 1Gi RWO cloudstack-custom 8m
❯ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-fad0126a-811b-4815-823c-7758fe25fd33 1Gi RWO Delete Bound monitoring/example-pvc cloudstack-custom 8m
# Deleting PVC
❯ kubectl delete pvc example-pvc
persistentvolumeclaim "example-pvc" deleted
# As you can see it is completely removed from cluster
❯ kubectl get pv pvc-fad0126a-811b-4815-823c-7758fe25fd33
Error from server (NotFound): persistentvolumes "pvc-fad0126a-811b-4815-823c-7758fe25fd33" not found
Warning
It is recommended to avoid creating an additional StorageClass using the csi.cloudstack.apache.org provisioner with a reclaimPolicy set to anything other than Delete. Doing so can leave the associated volumes intact after cluster destruction, incurring charges, unless the retention of those volumes is intentional.
Information
In the Alpha Release of Managed Kubernetes:
The reclaim policy associated with the Persistent Volume (PV) may be ignored under certain circumstances. For a Bound PV-PVC (Persistent Volume Claim) pair, the sequence of PV and PVC deletion is crucial in determining whether the PV delete reclaim policy is observed.
If the PVC is deleted prior to the PV, then the reclaim policy on the PV is honored. However, if the PV is deleted prior to deleting the PVC, the reclaim policy is not exercised. This behavior results in the associated storage asset in the external infrastructure not being removed.
To ensure the proper application of the reclaim policy, please avoid deleting the PV directly. Instead, always attempt to delete it by deleting the PVC.