(updated: 2-Oct-2022)

In this tutorial, we will set up dynamic NFS provisioning so that whenever a user needs storage, it is provisioned automatically, without any intervention from the cluster administrators. Without dynamic NFS provisioning, the cluster admin has to pre-provision storage manually for the users.

There are several ways of setting up dynamic NFS provisioning. However, by using a Helm chart, we can easily set it up in a minute 😉

Setting up the NFS server on the host machine

Before proceeding, we need to set up an NFS server. I will set up the NFS server on my Ubuntu (22.04) machine. The installation procedure will differ for other Linux distributions; however, the setup procedure is the same.

  1. Install the NFS server packages from the apt repository
sudo apt update
sudo apt install nfs-kernel-server
sudo apt install nfs-common
  1. Make a directory on the host machine where the provisioner will store the volumes' data
sudo mkdir /srv/nfs/kubedata -p
sudo chown nobody: /srv/nfs/kubedata/
  1. Now we will edit the exports file and add the directory we created in the earlier step in order to export it to remote machines
sudo vi /etc/exports
  1. Add the line below, then save and exit. The rw, sync and no_root_squash options make the export writable, flush writes to disk before replying, and let root on the client (e.g. pods running as root) keep root privileges on the share
/srv/nfs/kubedata    *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
  1. Start the NFS server and enable it so that it starts automatically on system reboot
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo systemctl status nfs-server
  1. Run the following command, which re-exports all the directories listed in /etc/exports
sudo exportfs -rav
  1. Now let’s test it. Replace <NFS_SERVER_IP> with your host machine/NFS server’s IP address
sudo mount -t nfs <NFS_SERVER_IP>:/srv/nfs/kubedata /mnt
mount | grep kubedata

You should see an output like this:

<NFS_SERVER_IP>:/srv/nfs/kubedata on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=<CLIENT_IP>,local_lock=none,addr=<NFS_SERVER_IP>)
  1. We are good to go! Let’s unmount it
sudo umount /mnt


Install NFS client packages in nodes

Make sure to install the NFS client packages (nfs-common on Debian/Ubuntu, nfs-utils on RHEL/CentOS-family) on your nodes as well; the kubelet on each node needs them to mount NFS volumes.
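Which package you need depends on the node's distribution. A minimal sketch that picks the right name based on the package manager present (the install commands themselves are commented out so you can adapt them to your nodes):

```shell
# Pick the NFS client package for this node's distro:
# nfs-common on Debian/Ubuntu, nfs-utils on RHEL/CentOS/Fedora.
if command -v apt-get >/dev/null 2>&1; then
    pkg=nfs-common
else
    pkg=nfs-utils
fi
echo "NFS client package for this node: $pkg"

# Then install it, e.g.:
#   sudo apt-get install -y "$pkg"   # Debian/Ubuntu
#   sudo yum install -y "$pkg"       # RHEL/CentOS
```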


Setting up the NFS client provisioner with a Helm chart

We will use a Helm chart to automate the setup procedure. Just one single command will set up the storage class, the NFS provisioner pod, the deployment & more!

  1. Before running the Helm command, make sure your cluster is up and running.
$ kubectl cluster-info                    

Kubernetes master is running at https://<control-plane-host>:<port>
KubeDNS is running at https://<control-plane-host>:<port>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  1. Get the node info
$ kubectl get nodes        

NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   6m38s   v1.19.1
kind-worker          Ready    <none>   6m3s    v1.19.1
kind-worker2         Ready    <none>   6m3s    v1.19.1

As you can see, I am using one master node and two worker nodes. I set up this multi-node Kubernetes cluster with kind, a tool for running Kubernetes clusters inside Docker containers!
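For reference, a kind cluster like the one above can be created from a small config file. This is a sketch; the file name kind-config.yaml is my own choice:

```yaml
# kind-config.yaml: one control-plane node and two workers,
# matching the node list shown above.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Then create the cluster with: kind create cluster --config kind-config.yaml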

  1. Install Helm if you haven’t already. Also make sure that you are using Helm version 3
$ helm version --short
  1. Add the nfs-subdir-external-provisioner repository to Helm and update
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm repo update
  1. Now run the following command. Make sure to replace <NFS_SERVER_IP> with your NFS server’s IP address
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
            --set nfs.server=<NFS_SERVER_IP> \
            --set nfs.path=/srv/nfs/kubedata
  1. Verify the deployed storage class
$ kubectl get storageclass     

NAME                 PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client           cluster.local/nfs-client-provisioner   Delete          Immediate              true                   
standard (default)   rancher.io/local-path                  Delete          WaitForFirstConsumer   false                  

In the output, we can see that the “nfs-client” storage class has been created!

$ kubectl get all   
NAME                                         READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-7f85bccf5-v6jgz   1/1     Running   0          3m48s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP    <none>        443/TCP   30m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           3m48s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-7f85bccf5   1         1         1       3m48s
  1. Remove the default annotation from the “standard” storage class and make “nfs-client” the default
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  1. Check the default storage class
$ kubectl get storageclass                                                                                                      

NAME                   PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-client-provisioner   Delete          Immediate              true                   83m
standard               rancher.io/local-path                  Delete          WaitForFirstConsumer   false                  110m

Default storage class is now set to nfs-client. Cool.

The other resources are created automatically as well! The setup part is done. It’s really that simple 😎, thanks to Helm 3

  1. Now we will test our setup. Currently, there are no PersistentVolumes (PV) or PersistentVolumeClaims (PVC) in our cluster
$ kubectl get pv,pvc
No resources found in default namespace.

Also check the /srv/nfs/kubedata/ directory, which will be empty

  1. We will create a PVC to request 100Mi from our nfs-client storage class. Create a file named pvc-nfs.yaml and paste the following code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvctest
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  1. Create the PVC and test
$ kubectl create -f pvc-nfs.yaml
persistentvolumeclaim/pvctest created
$ kubectl get pvc,pv
NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvctest   Bound    pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e   100Mi      RWX            nfs-client     2m57s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e   100Mi      RWX            Delete           Bound    default/pvctest   nfs-client              2m57s
$ ls /srv/nfs/kubedata/
default-pvctest-pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e

In the above outputs, we can see that the PVC and PV have been created automatically and are bound.

  1. Now we will create a busybox pod that uses that PVC. Paste the following code into a busybox-pv-nfs.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
  - name: myvol
    persistentVolumeClaim:
      claimName: pvctest
  containers:
  - image: busybox
    name: busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 600"]
    volumeMounts:
    - name: myvol
      mountPath: /data
  1. Execute the below command to create the busybox pod
kubectl create -f busybox-pv-nfs.yaml
$ kubectl get pods

NAME                                     READY   STATUS    RESTARTS   AGE
busybox                                  1/1     Running   0          53s
nfs-client-provisioner-7f85bccf5-v6jgz   1/1     Running   0          21m

$ kubectl describe pod busybox

Volumes:
  myvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvctest
    ReadOnly:   false

From the output above, we can confirm that the busybox pod we created is indeed using the PVC.
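As a final check, you can trace the pod’s data back to the NFS export. The provisioner stores each volume in a sub-directory of the exported path named <namespace>-<pvcName>-<pvName>. A small sketch using the names from this tutorial (the kubectl and cat lines are commented because they need the live cluster and NFS server):

```shell
# The nfs-subdir-external-provisioner stores each volume under the export
# in a directory named <namespace>-<pvcName>-<pvName>.
ns=default
pvc=pvctest
pv=pvc-2e5dab38-6ec0-4236-8c2b-54717e63108e
dir="$ns-$pvc-$pv"
echo "volume directory: /srv/nfs/kubedata/$dir"

# End-to-end check: write through the pod's mount, read from the export.
#   kubectl exec busybox -- sh -c 'echo hello > /data/hello.txt'
#   cat "/srv/nfs/kubedata/$dir/hello.txt"
```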

So by using the Helm chart, we can easily deploy the NFS provisioner, which enables dynamic provisioning of PVs and PVCs. No more extra work for the cluster admin. Phew 😎