# PVC Stuck in Pending
In Kubernetes, PersistentVolumeClaims (PVCs) are the backbone of stateful workloads. When a PVC remains in the Pending state, pods depending on it will not start.
This is expected Kubernetes behavior, not a bug.
No volume = No pod
In production environments, a Pending PVC is one of the most common root causes of applications that never start, even when:
- Pod YAML is correct
- Deployment looks healthy
- Nodes are available
## What Does PVC Pending Actually Mean?
When a PVC is in Pending, Kubernetes is saying:
> "I understand the storage request, but I cannot find or create a volume that satisfies it."
There is no mystery. Kubernetes is waiting, not failing.
## Kubernetes Storage Flow (Very Important)
Before troubleshooting, understand this flow clearly:
Pod → PVC → PV → Storage Backend (Ceph / EBS / NFS)
| Component | Meaning |
|---|---|
| Pod | The app that needs storage |
| PVC | Request for storage |
| PV | Actual disk |
| StorageClass | Rules on how to create the disk |
- Pod requests storage
- PVC claims storage
- PV represents actual storage
- Backend creates/provides the disk
If any layer breaks, the pod will wait indefinitely.
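To make the Pod → PVC link concrete, here is a minimal, illustrative Pod spec that consumes a claim by name. All names and the image are placeholders, not part of any real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db                      # placeholder name
spec:
  containers:
    - name: db
      image: postgres:16        # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data      # the PVC this Pod waits on
```

If `db-data` never binds, this Pod never starts: Kubernetes holds it back until the claim is satisfied.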
## Real-World Scenario (Ceph RBD)
You deploy a database pod that requires persistent storage.
PVC request:
```yaml
storageClassName: csi-rbd-sc
resources:
  requests:
    storage: 20Gi
```
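For context, that fragment sits inside a full claim object. A complete, illustrative version might look like this (the claim name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                 # placeholder name
spec:
  accessModes:
    - ReadWriteOnce             # typical for RBD block volumes
  storageClassName: csi-rbd-sc
  resources:
    requests:
      storage: 20Gi
```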
Pod status: `Pending`

- Never starts
- No error in pod logs
Why? Because Kubernetes is waiting for storage.
## What Kubernetes Checks Internally (In Order)
### 1. Does a Matching PersistentVolume Already Exist?
Kubernetes looks for a PV that:
- Is `Available`
- Has ≥ 20Gi capacity
- Uses StorageClass `csi-rbd-sc`
In Ceph RBD, PVs are usually not pre-created. Dynamic provisioning is expected.
If Ceph CSI cannot create a PV, the PVC stays `Pending`.
Common causes:
- Ceph CSI is not running
- Ceph cluster unreachable
- Pool name is wrong
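A quick way to check the first cause is to look at the CSI pods themselves. The exact namespace depends on how Ceph CSI was installed; `ceph-csi-rbd` is an assumption here, so the grep form works regardless:

```shell
# Find the Ceph RBD CSI pods wherever they were deployed
kubectl get pods -A | grep -i csi-rbd

# Confirm the provisioner pods are Running (namespace is an assumption)
kubectl get pods -n ceph-csi-rbd
```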
### 2. Can Kubernetes Create a Volume Automatically?
This depends entirely on the StorageClass.
Example:

```yaml
provisioner: rbd.csi.ceph.com
```
PVC will remain Pending if:
- Ceph CSI pods are not running
- Provisioner name is incorrect
- Required secrets are missing or invalid
- Ceph MONs are unreachable
**Important:** a StorageClass existing ≠ storage working.
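As a sketch, a Ceph RBD StorageClass typically looks like the following. The parameter keys follow ceph-csi conventions; the cluster ID, pool, and secret names are placeholders you must replace with your own values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com            # must match the deployed CSI driver
parameters:
  clusterID: <ceph-fsid>                 # placeholder: your Ceph cluster fsid
  pool: kubernetes                       # placeholder: must exist in Ceph
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
reclaimPolicy: Delete
allowVolumeExpansion: true
```

If any of these values are wrong (misspelled provisioner, missing secret, nonexistent pool), provisioning silently stalls and the PVC stays Pending.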
### 3. Kubernetes Tried, but Ceph Rejected the Request
This is the most confusing case.
Kubernetes:
- Sent the request correctly
- CSI responded
- Ceph said NO
Common reasons:
- Ceph pool does not exist
- Client key has insufficient permissions
- Ceph cluster health is `HEALTH_ERR`
- RBD quota exceeded
- Network blocked to Ceph MONs
The PVC does not fail loudly; it simply waits in `Pending`.
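These Ceph-side conditions can be checked directly from a host with admin access to the Ceph cluster. The client name below is an example; use whichever client your CSI secrets reference:

```shell
ceph -s                          # overall health (look for HEALTH_ERR)
ceph osd pool ls                 # does the pool from the StorageClass exist?
ceph auth get client.kubernetes  # example client: check its caps on the pool
ceph df                          # capacity and quota pressure
```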
## Where to See the Real Error
Always check:
```shell
kubectl describe pvc <pvc-name>
```

Look under `Events:` for messages such as:

```
failed to provision volume
```
Events always tell the truth.
## Step-by-Step Troubleshooting
### Step 1: Check PVC Status
```shell
kubectl get pvc
kubectl describe pvc <pvc-name>
```
- Confirm the status is `Pending`
- Read the Events carefully
### Step 2: Verify StorageClass
```shell
kubectl get storageclass csi-rbd-sc -o yaml
```
Confirm:
- StorageClass name is correct
- Provisioner is `rbd.csi.ceph.com`
- Pool name exists in Ceph
- Secret references are valid
### Step 3: Check PersistentVolumes
```shell
kubectl get pv
```
- No PV created → Ceph CSI provisioning failed
- The PVC will wait until a matching PV exists
### Step 4: Check Ceph CSI Logs (Advanced)
```shell
kubectl logs -n kube-system <csi-rbdplugin-provisioner-pod>
```

(The namespace may differ, e.g. `ceph-csi-rbd`, depending on how Ceph CSI was deployed.)
This is where backend Ceph errors live:
- Permission issues
- Pool not found
- Authentication failures
## Key Rule
Pods do not start until storage is ready.
Kubernetes blocks them intentionally to prevent:
- Data corruption
- Partial startup
- Inconsistent state
This is a safety feature, not a limitation.
## Final Recommendation
If your application pod is:
- Stuck in `Pending` or `ContainerCreating`
- Stuck without logs
Always check the PVC first.