How to get Ceph dashboard working in Proxmox 8

Topic starter (@alpha754293)

(Again, I was referred to this forum via here:

So, for those who are looking to run Ceph RBD and/or CephFS, I HIGHLY encourage you to download and install/run the Ceph dashboard.

You CAN create replicated RBDs/CephFS from within the Proxmox GUI, but if you want to do more than that (erasure coded pools, for example) and you DON'T want to use the CLI -- then the Ceph dashboard is the way to go.


A LOT of credit goes to YouTuber "apalrd's adventures", who taught me how to get it installed and up and running, but he was using Ceph version 16.something at the time.

Note that as of this writing, there may still be a "bug" with Ceph version 18.2 and the dashboard where you might not be able to get the dashboard up and running. (cf. the Proxmox forums discussion about this issue)

I found this out the hard way when I was setting up my 3-node HA Proxmox cluster over the Christmas break, so I had to redo it a few times, but I DO have the Ceph dashboard fully deployed now.

It works with Ceph version 17.2(.7) (which is what I am currently running)*

In summary, use these commands to install and run said Ceph dashboard:

# apt install -y ceph-mgr-dashboard
# ceph mgr module enable dashboard
# ceph dashboard create-self-signed-cert
# echo MyPassword > password.txt
# ceph dashboard ac-user-create admin -i password.txt administrator
# rm password.txt
# ceph mgr module disable dashboard
# ceph mgr module enable dashboard


The last two steps basically restart the dashboard module after you've issued the self-signed certificate and added the admin account.
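
A quick way to sanity check that the dashboard module actually came back up (and to see the URL it is listening on) is to ask the manager which services it is exposing:

# ceph mgr services

It should print a small JSON blob with a "dashboard" entry pointing at something like https://<your_host>:8443/ (the exact hostname and port will depend on your setup).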

(*In the Proxmox forums discussion about this issue, they mention that they generally DO NOT recommend going BACKWARDS in version numbers, especially where security updates are concerned. So, apply that philosophy based on your own risk tolerance/assessment. For me, I'm just using my Proxmox cluster for Windows (and Linux) AD DC, DNS, and Pi-hole, so security is not absolutely critical for this use case. You be the judge of what you are able and/or willing to tolerate from a security risk perspective.)

After that, you can log into the Ceph dashboard at https://<<your_ip_address>>:8443
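
If you need the dashboard listening on a different port, the dashboard module has a config option for that (the value here is just an example; re-run the disable/enable steps above afterwards to pick up the change):

# ceph config set mgr mgr/dashboard/ssl_server_port 8443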

From there, you can create an erasure coded CephFS if you want to.

Note that for erasure coded CephFS, your metadata pool HAS to be a replicated pool. The Ceph dashboard will yell at you if you try to put the metadata also on an erasure coded pool.
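
For reference, if you ever do end up needing the CLI for this, an erasure coded CephFS with a replicated metadata pool looks roughly like this (the profile name, pool names, PG counts, and k/m values are just placeholders that happen to fit a 3-node cluster; adjust for your own setup):

# ceph osd erasure-code-profile set my-ec-profile k=2 m=1
# ceph osd pool create cephfs_data 64 64 erasure my-ec-profile
# ceph osd pool set cephfs_data allow_ec_overwrites true
# ceph osd pool create cephfs_metadata 32 32 replicated
# ceph fs new mycephfs cephfs_metadata cephfs_data --force

The allow_ec_overwrites flag is what lets CephFS (and RBD) actually write to the erasure coded pool, and the --force is needed because the data pool is erasure coded.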

The Proxmox GUI is good for creating replicated pools.

But it can't do erasure coded pools at all.

The Ceph Dashboard CAN do that.

You define the erasure coded rule, and then apply it to the erasure coded pool that you want.

It works for both Ceph RBD and CephFS as I am using both in my system. (They use the same partition on the Inland 512 GB NVMe SSD that I have in my stack of three Mini PCs.)
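
In case it helps anyone, the RBD side of that from the CLI looks something like this (again, pool/image names and the size are just examples): the image and its metadata live in a small replicated pool, and the erasure coded pool is only used as the data pool.

# ceph osd pool create rbd_ec_data 64 64 erasure my-ec-profile
# ceph osd pool set rbd_ec_data allow_ec_overwrites true
# ceph osd pool create rbd_replicated 32 32 replicated
# rbd pool init rbd_replicated
# rbd create --size 10G --data-pool rbd_ec_data rbd_replicated/myimage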

As for the video that I think Brandon put out, where he mounts CephFS directly onto the Linux clients -- I am not 100% sure what the relative advantage of doing that would be vs. making the CephFS into an NFS export and then just mounting the NFS share. *shrugs*

There might be a use case where you'd want to mount the CephFS directly onto a client rather than going over NFS.

But in either case, it's available.
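
For anyone who wants to try the direct mount, the kernel CephFS client is only a couple of commands on a client that has ceph-common installed plus a copy of the cluster's ceph.conf and a keyring/secret (the monitor IP, user name, and paths here are placeholders):

# mkdir -p /mnt/cephfs
# mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

I believe the main practical difference is that the CephFS client talks to the MONs/OSDs directly, whereas an NFS export puts a gateway host in the middle; for a small home lab either approach is perfectly workable.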

Hope this helps.

Brandon Lee (@brandon-lee), Admin

@alpha754293 Thank you for sharing your insights on the Ceph dashboard in Proxmox. I think this is something that many who want to run Ceph in their lab or production will want to know.

Topic starter (@alpha754293)

No problem.

Yeah, for my new 3-node mini PC HA cluster, I actually didn't originally plan on running Ceph when I was planning the project, but after seeing that each node only had a single 2242 M.2 slot, I kinda had to do something to improve the reliability of the cluster's storage subsystem, so it happened more by chance than anything.

But on the upside, because I put the VMs and CTs on there, it ended up making migrations of the CTs/VMs between nodes go significantly faster, because it doesn't actually have to move any of the data (since the data is already residing on shared storage). So it made live migrations happen a LOT faster.

So even though the systems are connected to each other over GbE, and Ceph also runs over GbE, doing it this way negated any real need for faster networking. So, yay. Unplanned benefit. :)
