Kubernetes PV/PVC Hands-On Lab with NFS

This lab demonstrates how to replace an emptyDir volume with a PersistentVolume (PV) and PersistentVolumeClaim (PVC) backed by an NFS export.

We will use the cluster’s NFS server (192.168.1.1) with exports under /opt/{home,software,scratch}. In this lab, we use /opt/scratch.


0. Pre-Check

On the head node:

showmount -e 192.168.1.1

On each Kubernetes node:

mount | egrep '/opt/(home|software|scratch)'
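
For reference, showmount should list the three exports mentioned above; the client column below is only illustrative and depends on how /etc/exports is configured on the head node:

Export list for 192.168.1.1:
/opt/scratch  *
/opt/software *
/opt/home     *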

1. Create a Namespace

kubectl create ns pv-lab || true
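
If you prefer a form that is idempotent on re-runs without relying on || true, this equivalent pattern also works:

kubectl create ns pv-lab --dry-run=client -o yaml | kubectl apply -f -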

2. PersistentVolume (PV)

Save the following manifest as pv-nfs-scratch.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-scratch
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.2
    - nolock
  nfs:
    server: 192.168.1.1
    path: /opt/scratch

Apply and check:

kubectl apply -f pv-nfs-scratch.yaml
kubectl get pv pv-nfs-scratch -o wide
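
Before creating the claim, the PV should report the Available phase. A quick way to print just the phase (no assumptions beyond the names used above):

kubectl get pv pv-nfs-scratch -o jsonpath='{.status.phase}{"\n"}'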

3. PersistentVolumeClaim (PVC)

Save the following manifest as pvc-scratch.yaml. The volumeName field pre-binds the claim to the pv-nfs-scratch PV created above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
  namespace: pv-lab
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 512Mi
  volumeName: pv-nfs-scratch

Apply and check:

kubectl apply -f pvc-scratch.yaml
kubectl -n pv-lab get pvc scratch-pvc -o wide
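
If you are scripting this, you can block until the claim is actually Bound; this uses the --for=jsonpath form of kubectl wait, which requires kubectl v1.23 or newer:

kubectl -n pv-lab wait --for=jsonpath='{.status.phase}'=Bound pvc/scratch-pvc --timeout=60s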

4. Pod with PVC

This pod runs two containers that share the same PVC-backed volume, just like the emptyDir example.

Save as pod-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
  namespace: pv-lab
spec:
  securityContext:
    fsGroup: 2000
  containers:
  - name: my-app-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /var/data
  - name: my-sidecar-container
    image: busybox
    command: ["sh", "-c", "echo 'hello from sidecar' > /shared/file.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: scratch-pvc

Apply:

kubectl apply -f pod-pvc.yaml
kubectl -n pv-lab get pod pvc-pod -o wide
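
Optionally, wait until both containers are up before running the validation commands:

kubectl -n pv-lab wait --for=condition=Ready pod/pvc-pod --timeout=120s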

Validate:

kubectl -n pv-lab exec pvc-pod -c my-sidecar-container -- cat /shared/file.txt
kubectl -n pv-lab exec pvc-pod -c my-app-container -- ls -l /var/data
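
The sidecar's file should also be readable through the app container's mount path, since both mounts point at the same PVC:

kubectl -n pv-lab exec pvc-pod -c my-app-container -- cat /var/data/file.txt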

On the head node, the file should also be visible directly on the NFS export:

ls /opt/scratch

5. Multi-Pod RWX Proof

Deploy a second pod that appends timestamps to the same file, showing that multiple pods can write through the ReadWriteMany volume.

Save as pod-pvc-reader.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod-reader
  namespace: pv-lab
spec:
  containers:
  - name: tailer
    image: busybox
    command: ["sh","-c","while true; do date >> /mnt/file.txt; sleep 2; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /mnt
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: scratch-pvc

Apply:

kubectl apply -f pod-pvc-reader.yaml

Check from the first pod:

kubectl -n pv-lab exec pvc-pod -c my-sidecar-container -- tail -f /shared/file.txt

You should see timestamps being appended by the second pod; press Ctrl+C to stop following.
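
You can also confirm the writes directly on the NFS export from the head node:

tail -n 3 /opt/scratch/file.txt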


6. Reclaim Policy Demo

Delete the PVC:

kubectl -n pv-lab delete pvc scratch-pvc
kubectl get pv pv-nfs-scratch -o yaml | egrep 'phase:|claimRef'

The PV enters the Released phase, but the data remains on the NFS export (/opt/scratch).
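
To confirm the data survived, check the export from the head node:

ls -l /opt/scratch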

Clear the claim reference if you want to reuse the PV:

kubectl patch pv pv-nfs-scratch --type=json -p='[{"op":"remove","path":"/spec/claimRef"}]'
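
After the claimRef is removed, the PV should report Available again and can be bound by a new claim:

kubectl get pv pv-nfs-scratch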

7. Cleanup

kubectl -n pv-lab delete pod pvc-pod pvc-pod-reader --ignore-not-found
kubectl -n pv-lab delete pvc scratch-pvc --ignore-not-found
kubectl delete pv pv-nfs-scratch --ignore-not-found
kubectl delete ns pv-lab --ignore-not-found
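
Because the reclaim policy is Retain, the file written during the lab stays on the NFS export even after the PV is deleted. If you want a completely clean slate, remove it manually on the head node (double-check the path before deleting):

rm -f /opt/scratch/file.txt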