(updated: 2-Oct-2022)
In this tutorial, we will set up dynamic NFS provisioning so that whenever a user needs storage, it is provisioned automatically without any intervention from the cluster administrators. Without dynamic NFS provisioning, the cluster admin has to pre-provision the storage manually for the users.
There are several ways of setting up dynamic NFS provisioning. However, by using a Helm chart, we can easily set this up in a minute 😉
Setting up the NFS server on the host machine
Before proceeding, we need to set up an NFS server. I will set up the NFS server on my Ubuntu machine (22.04). The installation procedure will be different for other Linux distributions; however, the setup procedure is the same.
1. Installing the NFS server from the apt repository
sudo apt update
sudo apt install nfs-kernel-server
sudo apt install nfs-common
2. Making a directory on the host machine where the volumes for PersistentVolumeClaims (PVCs) will be created
sudo mkdir -p /srv/nfs/kubedata
sudo chown nobody: /srv/nfs/kubedata/
3. Now we will edit the exports file and add the directory we created in the previous step in order to export it to remote machines
sudo vi /etc/exports
4. Add the line below, then save and exit
/srv/nfs/kubedata *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
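For reference, here is roughly what each of those export options means (a commented sketch, using /etc/exports comment syntax; tighten the options to match your own security requirements):
# *                - allow any client to mount this export (can be restricted to a subnet, e.g. 192.168.1.0/24)
# rw               - clients can read and write
# sync             - commit changes to disk before replying to requests
# no_subtree_check - disable subtree checking
# no_root_squash   - do not map requests from root to the anonymous user
# no_all_squash    - do not map non-root users to the anonymous user
# insecure         - allow requests coming from ports above 1024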
5. Starting the NFS server and enabling it so that it starts automatically on system reboot
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo systemctl status nfs-server
6. Run the following command, which will export the directory to remote machines
sudo exportfs -rav
7. Now let’s test. Replace 192.168.1.152 with your host machine/NFS server’s IP address
sudo mount -t nfs 192.168.1.152:/srv/nfs/kubedata /mnt
mount | grep kubedata
You should see an output like this
192.168.1.152:/srv/nfs/kubedata on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.152,local_lock=none,addr=192.168.1.152)
8. We are good to go! Let’s unmount it
sudo umount /mnt
Installing NFS client packages on the nodes
Make sure to install the NFS client packages (nfs-common or nfs-utils, depending on the distribution) on your nodes as well, as shown below.
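For example (a quick sketch, assuming Debian/Ubuntu-based or RHEL/CentOS-based nodes; use the package manager your distribution ships with):
# On Debian/Ubuntu nodes
sudo apt install nfs-common
# On RHEL/CentOS/Fedora nodes
sudo yum install nfs-utils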
Setting up the NFS client provisioner with a Helm chart
We will use a Helm chart to automate the setup procedure. A single command will set up the storage class, the NFS provisioner pod, the deployment, and more!
1. Before running the Helm command, make sure your cluster is up and running.
$ kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:44929
KubeDNS is running at https://127.0.0.1:44929/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
2. Getting the nodes info
$ kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   6m38s   v1.19.1
kind-worker          Ready    <none>   6m3s    v1.19.1
kind-worker2         Ready    <none>   6m3s    v1.19.1
As you can see, I am using one master node and two worker nodes. I set up the multi-node Kubernetes cluster with kind, which is a tool for running Kubernetes clusters within Docker!
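If you want to reproduce the same multi-node setup, a minimal kind configuration sketch looks like this (the file name kind-config.yaml is just an example; check the apiVersion against your kind version):
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
Then create the cluster with:
$ kind create cluster --config kind-config.yaml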
3. Install Helm if you haven’t already. Also make sure that you are using Helm version 3
$ helm version --short
v3.2.4+g0ad800e
4. Add the nfs-subdir-external-provisioner repository to Helm and update it
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm repo update
5. Now run the following command. Make sure to replace 192.168.1.152 with your NFS server IP address
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.1.152 \
    --set nfs.path=/srv/nfs/kubedata
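If you prefer a values file over --set flags, the same two values (nfs.server and nfs.path, as used above) can go into a file and be passed with -f (a sketch; the file name nfs-values.yaml is arbitrary):
# nfs-values.yaml
nfs:
  server: 192.168.1.152
  path: /srv/nfs/kubedata
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f nfs-values.yaml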
6. Verifying the deployed storageclass
$ kubectl get storageclass
NAME                 PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION
nfs-client           cluster.local/nfs-client-provisioner   Delete          Immediate              true
standard (default)   rancher.io/local-path                  Delete          WaitForFirstConsumer   false
In the output, we can see that the “nfs-client” storage class has been created!
$ kubectl get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-7f85bccf5-v6jgz   1/1     Running   0          3m48s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   30m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           3m48s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-7f85bccf5   1         1         1       3m48s
7. Disabling the default flag on the “standard” storage class and making “nfs-client” the default
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
8. Check the default storage class
$ kubectl get storageclass
NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-client-provisioner   Delete          Immediate              true                   83m
standard               rancher.io/local-path                  Delete          WaitForFirstConsumer   false                  110m
The default storage class is now set to nfs-client. Cool.
The other resources are also created automatically! The setup part is done. It’s really that simple 😎, thanks to Helm 3
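If you want to double-check the default-class annotation directly, an optional verification looks like this (after the patch above, it should print true for nfs-client):
$ kubectl get storageclass nfs-client -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
true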
9. Now we will test our setup. Currently, there are no PersistentVolumes (PV) or PersistentVolumeClaims (PVC) in our cluster
$ kubectl get pv,pvc
No resources found in default namespace.
Also check the /srv/nfs/kubedata/ directory on the NFS server, which will still be empty
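For example, on the NFS server the export directory should have no entries yet:
$ ls -l /srv/nfs/kubedata/
total 0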
10. We will create a PVC to request 100Mi of storage from our nfs-client storage class. Create a file pvc-nfs.yaml and paste the following code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvctest
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
11. Create the PVC and test
$ kubectl create -f pvc-nfs.yaml
persistentvolumeclaim/pvctest created
$ kubectl get pvc,pv
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvctest   Bound    pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e   100Mi      RWX            nfs-client     2m57s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e   100Mi      RWX            Delete           Bound    default/pvctest   nfs-client              2m57s
$ ls /srv/nfs/kubedata/
default-pvctest-pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e
In the outputs above, we can see that the PVC and PV have been created automatically and are bound.
12. Now we will create a busybox pod attached to that PVC. Paste the following code into a busybox-pv-nfs.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
    - name: myvol
      persistentVolumeClaim:
        claimName: pvctest
  containers:
    - image: busybox
      name: busybox
      command: ["/bin/sh"]
      args: ["-c", "sleep 600"]
      volumeMounts:
        - name: myvol
          mountPath: /data
13. Execute the below command to create the busybox pod
kubectl create -f busybox-pv-nfs.yaml
$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
busybox                                  1/1     Running   0          53s
nfs-client-provisioner-7f85bccf5-v6jgz   1/1     Running   0          21m

$ kubectl describe pod busybox
------------
Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvctest
    ReadOnly:   false
------------
From the output above, we can confirm that the busybox pod we created is indeed using the PVC.
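To verify the setup end to end, you can also write a file from inside the pod and check that it appears in the provisioned directory on the NFS server (a quick sketch; the file name test.txt is just an example):
$ kubectl exec busybox -- sh -c 'echo "hello from busybox" > /data/test.txt'
Then, on the NFS server:
$ ls /srv/nfs/kubedata/default-pvctest-pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e/
test.txt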
So by using the Helm chart, we can easily deploy the NFS provisioner, which enables dynamic provisioning of PVs and PVCs. No more extra work for the cluster admin. Phew 😎
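When you are done testing, the test resources can be cleaned up with the same manifests (note that with the Delete reclaim policy, the provisioner will remove or archive the backing directory, depending on the chart’s archiveOnDelete setting):
$ kubectl delete -f busybox-pv-nfs.yaml
$ kubectl delete -f pvc-nfs.yaml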