Software Defined Storage

Ceph Dashboard Install and Configuration for Microceph

Learn how to install and configure the Ceph Dashboard in Microceph. It lets you monitor your storage using a web GUI

If you are running Microceph, can you run the Ceph Dashboard as you can in full-blown Ceph? Yes, you can! The process is fairly straightforward, with just a few commands needed to spin up the dashboard and monitor your Microceph storage. Let’s look at the Microceph Ceph Dashboard install and configuration.

What is Microceph?

Just briefly, Microceph is an easier way to install Ceph storage. It abstracts much of the configuration into simpler commands you can run on your Linux hosts. It is a great solution for running HCI storage for a Microk8s Kubernetes cluster. I am doing this in the home lab, and it has been working great for both a Docker Swarm cluster and Kubernetes.

What is the Ceph dashboard?

The Ceph dashboard is a module included with Ceph that provides a web interface for your Ceph environment, letting you log in and see a dashboard GUI of your storage. When you log in, it shows an overview of your storage and any errors on the system, and you can also perform certain tasks, like enabling modules.

Ceph dashboard install

The commands to install the Ceph dashboard in Microceph are not too difficult. Just run the following:

##Set the configuration to not use SSL - SSL seems to be broken at the moment
##due to Python dependencies in my testing, but I am working on this
microceph.ceph config set mgr mgr/dashboard/ssl false

##Enable the dashboard with the command below
microceph.ceph mgr module enable dashboard

##Set your initial admin password for the Microceph dashboard
echo -n "password" > /var/snap/microceph/current/conf/password.txt

##Create the initial admin account with the password set above
microceph.ceph dashboard ac-user-create -i /var/snap/microceph/current/conf/password.txt admin administrator

##Remove the password file
rm /var/snap/microceph/current/conf/password.txt
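Before opening a browser, you can verify that the dashboard module came up and find the URL it is serving on. A quick sketch, assuming the commands above completed successfully on a running Microceph node:

```shell
# Confirm the dashboard module shows as enabled
microceph.ceph mgr module ls | grep dashboard

# The mgr services output lists the dashboard URL, e.g. "http://<host>:8080/"
microceph.ceph mgr services
```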

Let’s now take a look at what the Ceph dashboard looks like.

Looking at the Ceph Dashboard

When you enable the Ceph Dashboard, you can browse to the web port on your Ceph cluster host: port 8080 for non-SSL and port 8443 for SSL. Log in with the password you set in the command examples above.

You will see the following dashboard readout, which gives a great overview of the cluster in general and lets you quickly click around to view the various objects and configurations that are part of the Ceph cluster.

Looking at the ceph dashboard

Below, I have clicked the Cluster > Pools menu item. We can see the cluster pools and can also create a new pool with the green Create button.

Viewing ceph cluster pools
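If you prefer the CLI, the same pool operations are available there. A hedged sketch (the pool name testpool and the PG count of 32 are just illustrative values):

```shell
# Create a replicated pool with 32 placement groups
microceph.ceph osd pool create testpool 32

# List existing pools to confirm it was created
microceph.ceph osd pool ls
```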

On the Hosts menu item, we can view our hosts, what service instances they are running, and the status of each host.

Viewing ceph cluster hosts

On the OSDs screen, you can see the Ceph OSDs, the PGs, the size of the OSDs, usage, read bytes, write bytes, read ops, and write ops.

Ceph osds viewed in the ceph dashboard
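The same OSD statistics can be pulled from the CLI as well. A quick sketch, assuming a running Microceph cluster:

```shell
# Per-OSD utilization, size, PG count, and status
microceph.ceph osd df

# OSD tree showing which OSDs are up/down and in/out, grouped by host
microceph.ceph osd tree
```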

The Ceph Monitors screen shows the status of your Ceph monitors. You can also see whether they are in quorum.

Viewing ceph monitors
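Quorum status is also easy to check from the CLI. A quick sketch:

```shell
# Summary of monitors and quorum membership
microceph.ceph mon stat

# Detailed quorum information in JSON
microceph.ceph quorum_status --format json-pretty
```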

Also, what is really nice is that you can see your CSI block objects. Below, you can see the CSI objects, which are block objects for Kubernetes pods running persistent volume claims in the cluster. These are provisioned using the rook-ceph integration with Microk8s.

Viewing block devices

Also, for those who want to make use of CephFS, the File menu > File systems displays the configured file systems.

Viewing cephfs file information
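You can list the same file systems from the CLI. A quick sketch:

```shell
# List CephFS file systems and their metadata/data pools
microceph.ceph fs ls

# Status of the file systems, including MDS daemons and pool usage
microceph.ceph fs status
```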

Like the block objects, you can view CephFS file objects as well. Here you can see CSI persistent volume mounts from pods in the cluster. This is great as it gives you the name of the CSI volume, which pool it is located in, the usage, path, mode, and when it was created.

Csi file objects from a kubernetes cluster stored in cephfs

Also handy is the Observability menu with logs. You can view your Ceph logs from this screen. It makes working with your logs extremely easy, as you can search for keywords, set a date or time range, and filter based on alert priority.

Viewing ceph logs from the dashboard
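The recent cluster log is also available from the CLI if you want it outside the dashboard. A quick sketch (the entry count of 20 is just an example):

```shell
# Show the 20 most recent cluster log entries
microceph.ceph log last 20
```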

We can also view our manager modules. You can see which modules are enabled for Ceph, and from this screen you can edit, enable, or disable modules from the Ceph dashboard.

Managing modules
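Manager modules can be listed and toggled from the CLI too. A quick sketch (the prometheus module is just an example of one you might enable):

```shell
# List manager modules: enabled, disabled, and always-on
microceph.ceph mgr module ls

# Enable or disable a module by name
microceph.ceph mgr module enable prometheus
# microceph.ceph mgr module disable prometheus
```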

Wrapping up

If you are running Ceph for your Kubernetes cluster, Docker Swarm cluster, or just general file storage, the Ceph dashboard is a great tool for gaining visibility into the health of your Ceph storage, and it even provides tools for configuring things. I did have trouble enabling the Microceph dashboard with SSL on Ubuntu 22.04 and 24.04, due to the Python dependencies and versions used by Microceph. Let me know if you have run into this or maybe have a workaround.

Brandon Lee

Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com and a 7-time VMware vExpert, with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies in various industries, he has extensive experience across many IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors, and enjoys spending time with family. He also goes through the effort of testing and troubleshooting issues, so you don't have to.


4 Comments

    1. When you enable the modules in Ceph this should be able to happen on any of the nodes in the cluster and the others are aware. Hopefully this helps. I will add a note in the post.

      Thanks again,
      Brandon

  1. This command wouldn’t work with or without sudo.
    echo -n “password” > /var/snap/microceph/current/conf/password.txt

    This command needs to point to the correct directory path.
    microceph.ceph dashboard ac-user-create -i /etc/ceph/password.txt admin administrator

    I just manually created the password file at that location and ran:
    sudo microceph.ceph dashboard ac-user-create -i /var/snap/microceph/current/conf/password.txt admin administrator

    1. Hi Ray, thank you for the comment! I believe Microceph installs to the /var/snap/microceph/current/conf location and not the /etc/ceph location. From my testing, full blown Ceph manually installed gets placed in the /etc/ceph location whereas microceph installs to /var/snap. However, in this case, the directory shouldn’t matter on the password as it is just pulling from the text file to create the user.

      Brandon
