[Solved] MicroK8s storage cluster with MicroCeph, rook-ceph, PVC pending, waiting for a volume to be created by external provisioner

40 Posts
2 Users
3 Reactions
1,170 Views
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

Starting a new forum thread here. I had this question: 

Hello, I have 3 nodes with MicroK8s. I have installed MicroCeph and its health is OK ( https://canonical-microceph.readthedocs-hosted.com/en/latest/tutorial/multi-node/ ). I have enabled rook-ceph, then ran microk8s connect-external-ceph, and everything looked OK. But when I create a pod with a PVC, the PVC stays Pending. I don't know what else I should check.

One of the first things to check:

kubectl get sc
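
A default StorageClass shows up with (default) after its name in that output. If no class is marked as the default and the PVC doesn't set storageClassName explicitly, the claim will sit Pending forever. On a healthy setup you'd expect something roughly like this (names and ages will differ on your cluster):

NAME                 PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   1h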
 
Posted : 19/12/2023 8:26 am
(@mscrocdile)
Posts: 33
Eminent Member
 

kubectl get sc

NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   6h14m

 
Posted : 19/12/2023 8:42 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile would you be able to post a screenshot of that output as well? You can copy and paste images directly into a reply here.

 
Posted : 19/12/2023 8:43 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

[screenshot of the kubectl get sc output]
 
Posted : 19/12/2023 8:45 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile ok, it may not be recognized as the default StorageClass. Let's run this:

 

kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
 
Posted : 19/12/2023 8:52 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile, actually, I just remembered: we need to prepend microk8s to that command:

microk8s kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
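
To double-check that the patch took effect, you can re-read the annotation (the second command should print true; the jsonpath just pulls the annotation we patched):

microk8s kubectl get sc
microk8s kubectl get sc ceph-rbd -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'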
 
Posted : 19/12/2023 8:59 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee I did it and this is the result. 

[screenshot of the patch command result]

(The PVC is still pending.)

 
Posted : 19/12/2023 9:01 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile ok good, it is now showing as the default, which is what we want. Let's delete the existing PVCs and then recreate them; I don't think the provisioner will retroactively bind claims that were already pending, or at least I have seen that issue before. You can list your existing PVCs with:

microk8s kubectl get pvc

then 

microk8s kubectl delete pvc <PVC name>
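
One caveat: if the pod that mounts the claim still exists, the PVC delete will hang on the kubernetes.io/pvc-protection finalizer, so delete the pod first:

microk8s kubectl delete pod <pod name>
microk8s kubectl delete pvc <PVC name>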
 
Posted : 19/12/2023 9:05 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee unfortunately it is still pending.

I deleted both the PVC and the pod and created them again.

This is the manifest I use:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: ceph-rbd
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: my-storage
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-storage
 
Posted : 19/12/2023 9:09 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile I think you mentioned that everything was healthy. Can you post the output of:

ceph status

Then also run a:

microk8s kubectl describe pvc 

Also, these are the steps I used to get ceph-rbd working; I'm not sure if any of them differ from what you did:

Kubernetes Persistent Volume Setup with Microk8s Rook and Ceph - Virtualization Howto
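
Since the claim is waiting on the external provisioner, it's also worth confirming the CSI provisioner pods are actually running. With the rook-ceph addon I'd expect them in the rook-ceph namespace, but the namespace can differ depending on how the addon was enabled, so check both:

microk8s kubectl get pods -n rook-ceph
microk8s kubectl get pods -n rook-ceph-external

The csi-rbdplugin-provisioner pods should be Running; if they're missing or crash-looping, no volume will ever get created no matter what the StorageClass says.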

 
Posted : 19/12/2023 9:18 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

ceph status:

[screenshot of the ceph status output]

pvc description:

[screenshot of the kubectl describe pvc output]
 
Posted : 19/12/2023 9:22 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile Did you run the following command by chance?

microk8s connect-external-ceph
 
Posted : 19/12/2023 9:28 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

yes, I saved the output from when I ran it:

Looking for MicroCeph on the host
Detected existing MicroCeph installation
Attempting to connect to Ceph cluster
Successfully connected to be7fbaa3-32f4-41db-8691-fdbc00b75044 (192.168.30.51:0/1602486096)
Creating pool microk8s-rbd0 in Ceph cluster
Configuring pool microk8s-rbd0 for RBD
Successfully configured pool microk8s-rbd0 for RBD
Creating namespace rook-ceph-external
namespace/rook-ceph-external created
Configuring Ceph CSI secrets
Successfully configured Ceph CSI secrets
Importing Ceph CSI secrets into MicroK8s
secret/rook-ceph-mon created
configmap/rook-ceph-mon-endpoints created
secret/rook-csi-rbd-node created
secret/rook-csi-rbd-provisioner created
storageclass.storage.k8s.io/ceph-rbd created
Importing external Ceph cluster
NAME: rook-ceph-external
LAST DEPLOYED: Tue Dec 19 07:26:04 2023
NAMESPACE: rook-ceph-external
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
kubectl --namespace rook-ceph-external get cephcluster

Visit https://rook.io/docs/rook/latest/CRDs/ceph-cluster-crd/ for more information about the Ceph CRD.

Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`

=================================================

Successfully imported external Ceph cluster. You can now use the following storageclass
to provision PersistentVolumes using Ceph CSI:

NAME       PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-rbd   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   5s

 
Posted : 19/12/2023 9:30 am
Brandon Lee
(@brandon-lee)
Posts: 340
Member Admin
Topic starter
 

@mscrocdile ok, that looks good at least. I don't see any major differences between your YAML and the pod you are provisioning. Can you delete your pending PVCs and try this one:

# pod-with-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  storageClassName: ceph-rbd
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 5Gi } }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: pvc
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: pvc
          mountPath: /usr/share/nginx/html
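
Once applied, watch the claim; if the provisioner is healthy it should flip from Pending to Bound within a few seconds:

microk8s kubectl apply -f pod-with-pvc.yaml
microk8s kubectl get pvc nginx-pvc -w

If it stays Pending, the Events section of microk8s kubectl describe pvc nginx-pvc is the next thing to post here.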

 
Posted : 19/12/2023 9:33 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

[screenshot of the result after applying the new manifest]
 
Posted : 19/12/2023 9:37 am