Starting a new forum thread here. I had this question:
Hello, I have 3 nodes running MicroK8s. I have installed MicroCeph and its health is OK ( https://canonical-microceph.readthedocs-hosted.com/en/latest/tutorial/multi-node/ ). I enabled rook-ceph and then ran microk8s connect-external-ceph, and everything looked OK. But when I create a pod with a PVC, the PVC stays Pending. I don't know what else I should check.
One of the first things to check:
kubectl get sc
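If ceph-rbd is set as the default class, kubectl shows a (default) marker next to its name in that output. To inspect the annotations directly (assuming the class name ceph-rbd that the rook-ceph addon creates):
microk8s kubectl describe sc ceph-rbd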
kubectl get sc
NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   6h14m
@mscrocdile would you be able to post a screenshot of that output as well? You can copy and paste images directly into a reply here.
@mscrocdile OK, it may not be recognized as the default storage class. Let's run this:
kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
@mscrocdile Actually, I just remembered we need to prepend microk8s to that command:
microk8s kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
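Once the patch is applied, the class should list as ceph-rbd (default):
microk8s kubectl get sc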
@brandon-lee I did it and this is the result.
(The PVC is still Pending.)
@mscrocdile OK good, it is now showing as the default, which is what we want. Let's delete the existing PVCs and then recreate them. I don't think it will retroactively bind claims that are already Pending; I have seen that issue before. You can list and delete your existing PVCs with:
microk8s kubectl get pvc
microk8s kubectl delete pvc <PVC name>
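One thing to watch for: if a pod still references the claim, the pvc-protection finalizer will hold the PVC in Terminating until the pod is gone, so delete the pod first:
microk8s kubectl delete pod <pod name>
microk8s kubectl delete pvc <PVC name>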
@brandon-lee Unfortunately it is still Pending. I deleted both the PVC and the pod and created them again.
This is what I use:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: ceph-rbd
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-storage
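(For reference, assuming the manifest is saved as a file such as my-pod.yaml, the filename being just an example, it can be applied and the claim watched with:)
microk8s kubectl apply -f my-pod.yaml
microk8s kubectl get pvc my-pvc --watch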
@mscrocdile I think you mentioned that everything was healthy. Can you post the output of:
ceph status
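(On a MicroCeph node, the CLI may be exposed under the snap name, e.g. sudo microceph.ceph status, if a plain ceph binary is not on the PATH.)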
Then also run:
microk8s kubectl describe pvc
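The Events section at the bottom of the describe output usually says why the claim is stuck, e.g. a provisioning or authentication error from the CSI driver. It's also worth confirming the RBD provisioner pods are running and checking their logs; a sketch, assuming the addon deployed Rook's usual csi-rbdplugin-provisioner deployment into the rook-ceph namespace:
microk8s kubectl -n rook-ceph get pods
microk8s kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-provisioner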
Also, these are the steps I used to get ceph-rbd working; I'm not sure if any of them differ from what you did:
Kubernetes Persistent Volume Setup with Microk8s Rook and Ceph - Virtualization Howto
@mscrocdile Did you run the following command by chance?
microk8s connect-external-ceph
Yes, I saved the output from when I ran it:
Looking for MicroCeph on the host
Detected existing MicroCeph installation
Attempting to connect to Ceph cluster
Successfully connected to be7fbaa3-32f4-41db-8691-fdbc00b75044 (192.168.30.51:0/1602486096)
Creating pool microk8s-rbd0 in Ceph cluster
Configuring pool microk8s-rbd0 for RBD
Successfully configured pool microk8s-rbd0 for RBD
Creating namespace rook-ceph-external
namespace/rook-ceph-external created
Configuring Ceph CSI secrets
Successfully configured Ceph CSI secrets
Importing Ceph CSI secrets into MicroK8s
secret/rook-ceph-mon created
configmap/rook-ceph-mon-endpoints created
secret/rook-csi-rbd-node created
secret/rook-csi-rbd-provisioner created
storageclass.storage.k8s.io/ceph-rbd created
Importing external Ceph cluster
NAME: rook-ceph-external
LAST DEPLOYED: Tue Dec 19 07:26:04 2023
NAMESPACE: rook-ceph-external
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph-external get cephcluster
Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.
Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
=================================================
Successfully imported external Ceph cluster. You can now use the following storageclass
to provision PersistentVolumes using Ceph CSI:
NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   5s
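(As a sanity check, the command from the notes above should show the imported cluster; for an external cluster Rook typically reports the phase as Connected:)
microk8s kubectl --namespace rook-ceph-external get cephcluster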
@mscrocdile OK, that looks good at least. I don't see any major differences between your YAML and the pod you are provisioning. Can you delete the PVCs that are Pending and use this one:
# pod-with-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
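Assuming it is saved as pod-with-pvc.yaml per the comment, apply it and watch the claim:
microk8s kubectl apply -f pod-with-pvc.yaml
microk8s kubectl get pvc nginx-pvc --watch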