Kubernetes PV/PVC Hands-On Lab with NFS
This lab demonstrates how to replace an emptyDir volume with a PersistentVolume (PV) and PersistentVolumeClaim (PVC) backed by an NFS export. We will use the cluster’s NFS server (192.168.1.1) with exports under /opt/{home,software,scratch}. In this lab, we use /opt/scratch.
0. Pre-Check
On the head node:
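Assuming the head node is also the NFS server exporting /opt (the exact commands are a suggestion, not part of the lab text), a quick sanity check is:

exportfs -v                                   # list the directories currently being exported
ls -ld /opt/home /opt/software /opt/scratch   # confirm the exported directories exist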
On each Kubernetes node:
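Each node needs a working NFS client so the kubelet can mount the export. A minimal check (package names differ between distributions) is:

showmount -e 192.168.1.1                                        # the /opt exports should be listed from every node
rpm -q nfs-utils 2>/dev/null || dpkg -s nfs-common 2>/dev/null  # NFS client package (RHEL- or Debian-family)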
1. Create a Namespace
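All of the namespaced objects below live in a namespace called pv-lab, so create it first:

kubectl create namespace pv-lab
kubectl get namespace pv-lab    # verify it exists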
2. PersistentVolume (PV)
Save the following manifest as pv-nfs-scratch.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-scratch
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
  - nfsvers=4.2
  - nolock
  nfs:
    server: 192.168.1.1
    path: /opt/scratch
Apply and check:
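kubectl apply -f pv-nfs-scratch.yaml
kubectl get pv pv-nfs-scratch    # should show STATUS Available until a claim binds it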
3. PersistentVolumeClaim (PVC)
Save the following manifest as pvc-scratch.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-pvc
  namespace: pv-lab
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 512Mi
  volumeName: pv-nfs-scratch
Apply and test:
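kubectl apply -f pvc-scratch.yaml
kubectl -n pv-lab get pvc scratch-pvc    # should report STATUS Bound with VOLUME pv-nfs-scratch
kubectl get pv pv-nfs-scratch            # the PV status should also change to Bound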
4. Pod with PVC
This pod has two containers sharing the same PVC, just like the emptyDir example.
Save as pod-pvc.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
  namespace: pv-lab
spec:
  securityContext:
    fsGroup: 2000
  containers:
  - name: my-app-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /var/data
  - name: my-sidecar-container
    image: busybox
    command: ["sh", "-c", "echo 'hello from sidecar' > /shared/file.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: scratch-pvc
Apply:
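kubectl apply -f pod-pvc.yaml
kubectl -n pv-lab get pod pvc-pod    # should eventually report READY 2/2 and STATUS Running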
Validate:
kubectl -n pv-lab exec pvc-pod -c my-sidecar-container -- cat /shared/file.txt
kubectl -n pv-lab exec pvc-pod -c my-app-container -- ls -l /var/data
ls /opt/scratch    # on the NFS server
5. Multi-Pod RWX Proof
Deploy a second pod that appends to the same file.
Save as pod-pvc-reader.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod-reader
  namespace: pv-lab
spec:
  containers:
  - name: tailer
    image: busybox
    command: ["sh","-c","while true; do date >> /mnt/file.txt; sleep 2; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /mnt
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: scratch-pvc
Apply:
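kubectl apply -f pod-pvc-reader.yaml
kubectl -n pv-lab get pod pvc-pod-reader    # wait until it is Running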
Check from the first pod:
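The second pod writes through its /mnt mount, which is the same PVC the first pod mounts at /var/data, so one way to check is:

kubectl -n pv-lab exec pvc-pod -c my-app-container -- tail /var/data/file.txt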
You should see timestamps being appended by the second pod.
6. Reclaim Policy Demo
Delete the PVC:
kubectl -n pv-lab delete pvc scratch-pvc
kubectl get pv pv-nfs-scratch -o yaml | egrep 'phase:|claimRef'
The PV enters the Released phase, but the data remains on NFS (/opt/scratch).
Clear the claim reference if you want to reuse the PV:
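One way to do this is to patch claimRef out of the PV spec, after which the PV returns to Available:

kubectl patch pv pv-nfs-scratch --type merge -p '{"spec":{"claimRef": null}}'
kubectl get pv pv-nfs-scratch    # STATUS should now read Available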