[Solved] MicroK8s storage cluster with MicroCeph, rook-ceph, PVC pending, waiting for a volume to be created by external provisioner

40 Posts
2 Users
3 Reactions
1,175 Views
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile I'm going to set up a lab real quick to reproduce and follow the steps; I should be able to see if I run into the same issue. Also, is your cluster healthy and are all the nodes Ready?

microk8s kubectl get nodes
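(For reference, a healthy cluster should show every node in the Ready state; the output below is purely illustrative, and your node names, ages, and versions will differ:)

NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    <none>   10d   v1.29.0
node-2   Ready    <none>   10d   v1.29.0
node-3   Ready    <none>   10d   v1.29.0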
 
Posted : 19/12/2023 9:40 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

image
 
Posted : 19/12/2023 9:42 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile got it...looks like everything should be good...I'm going to go through setting up a little cluster in the lab and let you know what I find

 
Posted : 19/12/2023 10:02 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile I just built a quick little cluster in the lab and was able to get it to work. I am using the latest MicroK8s 1.29. I am adding one local disk to each Ubuntu VM as Ceph storage. What type of storage are you using for Ceph?
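(In case it helps to compare, this is roughly how I hand that extra disk to MicroCeph on each node; /dev/sdb is just a placeholder for whatever device name the blank disk gets in your VM:)

lsblk                                     # confirm the new, empty disk and note its device name
sudo microceph disk add /dev/sdb --wipe   # give the whole disk to MicroCeph as an OSD
sudo ceph status                          # should report a healthy cluster once each node's OSD is up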

image
 
Posted : 19/12/2023 11:54 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile Also, in my testing, since it is the only storage provisioner on my cluster (and it looks like it is on yours as well), I didn't have to set it as the default, so that was a red herring.
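(For anyone following along who does want a default StorageClass anyway, a minimal sketch; the name ceph-rbd is an assumption here, so check the actual name with the first command:)

microk8s kubectl get storageclass
microk8s kubectl patch storageclass ceph-rbd -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'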

 
Posted : 19/12/2023 11:55 am
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

I use MicroK8s v1.28.3. Could that be the difference?

I also added one local drive to each Hyper-V Ubuntu VM.

 

 
Posted : 19/12/2023 12:11 pm
Brandon Lee reacted
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile Well, I would say that isn't in play, since I was using 1.28 when I wrote my blog post. However, if you have quick and easy snapshots to roll back to, you might try 1.29 and see what you get. Also, I am guessing the disks are showing up correctly in Ubuntu for use with Ceph... I would think so, since your Ceph status shows healthy.
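(A minimal sketch of what I would run on each node to double-check that the disks actually made it into Ceph; the exact output will vary with your setup:)

lsblk                      # the extra drive should appear with no partitions or mount points
sudo microceph disk list   # the drive should be listed as a configured disk on this node
sudo ceph osd tree         # each node should contribute one OSD, marked up and in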

 
Posted : 19/12/2023 12:13 pm
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

Everything is OK, but it still doesn't work 🙂

Well, I need to read fairy tales to the children now... I will continue tomorrow morning.

I will upgrade to 1.29 if possible and tell you the result.

Thank you for your help.

 
Posted : 19/12/2023 1:33 pm
Brandon Lee reacted
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile Curious if you have had a chance to try the cluster with v1.29? Let me know what you find on your Ceph configuration. 👍

 
Posted : 19/12/2023 10:51 pm
(@mscrocdile)
Posts: 33
Eminent Member
 

@brandon-lee 

I ran this on each node to update:

sudo snap refresh microk8s --classic --channel=1.29/stable

I'm not sure I updated correctly, or how this is supposed to be done in production. Maybe the rook-ceph plugin had to be disabled first and then re-enabled (including microk8s connect-external-ceph) - I don't know.
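(If anyone wants to try that, the sequence would presumably look something like this; I have not verified that a disable/enable cycle is actually required after a snap refresh:)

sudo microk8s disable rook-ceph        # remove the addon and its operator
sudo microk8s enable rook-ceph         # reinstall the rook-ceph operator
sudo microk8s connect-external-ceph    # rebuild the connection to the external MicroCeph cluster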

However, this did not help. The PVC is still pending.

 

sudo ceph fs ls
No filesystems enabled

Is it correct that there is no filesystem?
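(As far as I understand, ceph fs ls only lists CephFS filesystems, and an RBD-backed PVC should not need one, so that on its own may not be the problem; a quick way to see which pools actually exist:)

sudo ceph osd pool ls   # the pool backing the RBD StorageClass should show up here
sudo ceph -s            # overall health plus OSD and placement group summary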

I'm also adding the microceph status output:

image
 
Posted : 20/12/2023 1:18 am
(@mscrocdile)
Posts: 33
Eminent Member
 

Just to be sure, for comparison, this is my microk8s status:

image
 
Posted : 20/12/2023 1:30 am
(@mscrocdile)
Posts: 33
Eminent Member
 

I wonder if these Pending states are expected...

image
 
Posted : 20/12/2023 2:08 am
(@mscrocdile)
Posts: 33
Eminent Member
 

I've reinstalled the VMs and ran:

sudo snap install microk8s --classic --channel=1.29/stable

but even after waiting more than 20 minutes, microk8s status -w never finished and the nodes were not Ready.
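(When the nodes hang like that, a few generic diagnostics are usually worth a look; the node name below is a placeholder:)

microk8s inspect                        # gathers logs and warns about common configuration problems
microk8s kubectl get pods -A            # pods stuck in Pending or CrashLoopBackOff point at the culprit
microk8s kubectl describe node <node>   # the Conditions and Events sections explain a NotReady state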

I will return to 1.28 ...

 
Posted : 20/12/2023 4:40 am
(@mscrocdile)
Posts: 33
Eminent Member
 

I thought it would help to run sudo microceph init on each node.

I noticed there is also an IO section, which I had never noticed before, so I thought things would be better now.

image

However, that PVC is still pending. No change.

There are two pools now, and the IO section disappeared after enabling rook-ceph:

image

 

This post was modified 9 months ago by mscrocdile
 
Posted : 20/12/2023 5:06 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@mscrocdile I think you are onto something with the status of the rook-ceph pods you posted. Here is what I see:

image

Also, to answer your question about the filesystem, this is what I see when running the command (note that it also shows no filesystems enabled):

sudo ceph fs ls
image

Let me know what you see when you run this command:

microk8s kubectl -n rook-ceph logs -l app=rook-ceph-operator
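(A few related checks are usually worth running alongside that; the PVC name below is a placeholder for whatever your claim is called:)

microk8s kubectl -n rook-ceph get pods          # the operator and the csi-* pods should all be Running
microk8s kubectl describe pvc <your-pvc-name>   # the Events section usually names the provisioner error
microk8s kubectl get storageclass               # the PVC's storageClassName must match one of these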

Also, check out their troubleshooting guide; it steps through the commands to run:

Ceph Common Issues - Rook Ceph Documentation

 
Posted : 20/12/2023 8:26 am
Page 2 / 3