Software Defined Storage

Try Microceph for an Easy Ceph Install

Learn about MicroCeph, billed as the easiest way to build a Ceph cluster for MicroK8s or a general software-defined storage installation

One of the software-defined storage solutions gaining popularity in the enterprise and home lab community is Ceph. Ceph is a powerful storage solution and has many great benefits. However, it can be intimidating to configure and manage if you haven’t had any experience with it before. Microceph is a Ceph implementation that simplifies the deployment and management of a Ceph cluster. Let’s see how we can use Microceph for an easy Ceph deployment.

What is Microceph?

First off, what is Microceph? You might have seen it in connection with MicroK8s. However, it also works as a general-purpose storage solution. The project describes itself as the "easiest way to get up and running with Ceph." You can read the official documentation around Microceph here: MicroCeph documentation. It is also positioned as an opinionated solution focused on small-scale deployments, compared to the standard ceph-common packages.

The TL;DR is that Microceph is a lightweight distribution of Ceph that keeps most of the benefits of full-blown Ceph but is easier to install and manage. Its focus is on improving the experience of Ceph administrators and storage software developers.

It simplifies key distribution, service placement, and disk administration. This applies to clusters that span private clouds, edge clouds, as well as home lab environments.

Note the following features and benefits:

  • Quick deployment and minimal overhead
  • Single-command line operations (for bootstrapping, adding OSD disks, service enablement, etc)
  • It isolates the services from the host and is upgrade-friendly
  • It has built-in clustering so you don’t have to worry about those details

Requirements

There aren’t many requirements outside of the following:

  • You need a minimum of (3) OSD disks for a proper Ceph cluster (outside of playing around with a single-node cluster)
  • This means you will need (3) different nodes contributing (1) OSD disk each

Microceph single-node installation

The really cool thing about building Ceph clusters with microceph is that you can run it in either a single-node or a multi-node configuration. Both are great for learning and beginning to understand Ceph setup and microceph clusters.

Let’s look at the single node deployment. You can install a single node with the following steps:

sudo snap install microceph
Installing microceph
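If you want to confirm the snap installed and see which channel and revision you received, a quick check with the standard snap tooling works (the channel and revision shown will vary by system):

snap list microceph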

The Microceph documentation notes that, for most deployments, you will want to place a hold on automatic updates for the microceph snap, since an unattended update can have unintended consequences for your Ceph cluster. Holding updates gives you the opportunity to read through the release notes of future releases and make sure there are no changes that will affect your storage. Note the following snap command to do that:

sudo snap refresh --hold microceph
Holding the microceph package from updates
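If you later decide to take an update deliberately, you can release the hold and refresh manually once you have reviewed the release notes. This sketch assumes a reasonably recent snapd release that supports the --unhold flag, so verify it is available on your system:

sudo snap refresh --unhold microceph
sudo snap refresh microceph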

Next, we bootstrap the cluster with the following cluster bootstrap command. This initializes Ceph:

sudo microceph cluster bootstrap

You can then take a look at the microceph status with the following command:

sudo microceph status
Bootstrapping the microceph cluster

In the output above, we can see the status of the single-node Ceph deployment. Now, we can add our disk for microceph to use as an OSD.

sudo microceph disk add <path to disk> --wipe
Wiping the microceph disk to be joined
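As a concrete example, on a single node you would repeat the disk add command for each spare, unpartitioned disk you want Ceph to consume as an OSD. The device names below (/dev/sdb, /dev/sdc, /dev/sdd) are hypothetical, so substitute your own, and remember that --wipe destroys any existing data on the disk:

sudo microceph disk add /dev/sdb --wipe
sudo microceph disk add /dev/sdc --wipe
sudo microceph disk add /dev/sdd --wipe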

Installing Microceph multi-node cluster

Next, let’s install a microceph multi-node cluster. This will use at least (3) cluster hosts contributing at least (1) disk each; multiple disks per node are recommended for the best performance and efficiency. We will start by running the following commands on all three nodes to install microceph and place the package on hold for updates:

sudo snap install microceph
sudo snap refresh --hold microceph

Then on the first node, we will run the cluster bootstrap command:

sudo microceph cluster bootstrap

Creating the Ceph join token

Then, from the first node only, we will issue the command to add the other two nodes. Note that you can’t use the same join token for both nodes; each token is unique to the node you are adding.

sudo microceph cluster add <node 2>
sudo microceph cluster add <node 3>
Adding nodes to the ceph cluster
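As an example, with hypothetical hostnames ceph-node2 and ceph-node3, each add command outputs a unique join token that you will copy to the matching node in the next step:

sudo microceph cluster add ceph-node2
sudo microceph cluster add ceph-node3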

Joining the Ceph cluster

Then, on each of the two nodes that will be joining, issue the following command with the token generated for that node:

sudo microceph cluster join <join token>
Joining cluster on 2nd node

Below, we run the same command on the third node.

Joining cluster on 3rd node

On all three nodes, run the following command to add your disk to the microceph cluster:

sudo microceph disk add <path to disk> --wipe
Wiping the microceph disk to be joined
Wiping the disk on the third node for microceph
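After adding the disks, it is worth verifying that each disk was registered as an OSD. Assuming your MicroCeph version includes the disk list subcommand, running it on any node shows the disks that have been configured:

sudo microceph disk list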

Check the status of Ceph

Now, we can check the status of the microceph cluster, including MDS, MGR, MON, and OSD services:

sudo microceph status
Checking the status of microceph

You can also use the native ceph status command:

sudo ceph status
Checking the microceph status using the ceph command
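Beyond the basic status output, a couple of standard Ceph commands are handy for confirming that all OSDs are up and for seeing how much raw capacity the cluster has. These use the same ceph client shown above:

sudo ceph osd tree
sudo ceph df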

Best Practices for Deploying Microceph

Below are a few best practices to keep in mind when deploying Ceph:

  • Make sure you are meeting the minimum Ceph cluster requirements, including three OSDs
  • Use unpartitioned disks since Microceph does not support partitioned disks (see the quick lsblk check after this list)
  • Make sure you have high-speed networks connecting cluster nodes, recommended at least 10 GbE
  • You can use Cephadm to create the cluster if you want to use NFS backing storage
    • Microceph doesn’t support NFS, but Cephadm does
  • As an example of making database workloads aware of distributed storage across cluster nodes, use a distributed SQLite store for distributed access to SQLite databases
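As a quick check for the unpartitioned-disk recommendation above, you can list the block devices on each node before adding them; a disk that shows child partitions beneath it should not be passed to microceph disk add. The lsblk utility is standard on most Linux distributions:

lsblk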

Wrapping up

If you have wanted to spin up a modern software-defined storage deployment using a Ceph cluster and weren’t sure about all the steps, the Microceph snap takes the heavy lifting out of the process. It also gives you block and file-level storage without the maintenance overhead of a traditional Ceph cluster.

With GlusterFS and other storage types being deprecated, Ceph is the go-to choice for those who want a resilient software-defined storage solution that is alive, well, and fully supported. I like the fact that Ceph is multi-purpose as well, since you can use it for both block and file-level storage. Many will recognize Ceph from Proxmox, which has native Ceph integration that lets you easily create a Ceph cluster on top of your Proxmox hosts for shared storage without the need for external storage.

