
Windows share from a Proxmox HA Ceph Cluster

16 Posts
2 Users
2 Reactions
746 Views
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

Creating a new forum topic here. I had this question from a reader/viewer:

The question I have is, what would be the next step to share this "storage pool" out to Windows (guessing via SMB)?
Does Proxmox have native support for CephFS, and does that allow for a large SMB share?

My end goal is to have an "elastic" RAID pool for personal plex media storage.

 
Posted : 04/01/2024 10:43 am
wikemanuel reacted
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel welcome to the forums! Ok, so it looks like using CephFS you can share out a directory to clients.

Take note of these official forum responses from Proxmox:

This first post makes it sound like Proxmox recommends simply creating a VM on top of your Ceph storage as normal and creating your SMB/Samba share from within the VM. This route would probably eliminate some of the complexity of exposing the storage directly from Ceph:

https://forum.proxmox.com/threads/cephfs-usage-as-samba-or-something.81373/

This one looks to be a walkthrough of creating CephFS storage and exposing it directly:
https://forum.proxmox.com/threads/shared-storage-for-vms-on-cephfs.109494/

I would probably venture to say that the VM route might be best, as you could then just back up the VM along with its attached storage using Proxmox Backup Server to make sure everything is safe. Otherwise, you could probably rsync the directory from CephFS to a storage location outside of Proxmox.
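If you go the VM route, the Samba side inside the VM is pretty minimal. As a rough sketch only (the share name, path, and user below are placeholders, and this is not a hardened config), /etc/samba/smb.conf would have something like:

# minimal sketch of a share definition in /etc/samba/smb.conf inside the VM
[media]
    path = /srv/media
    browseable = yes
    read only = no
    valid users = plexuser
# then create the account and restart Samba, roughly:
# useradd plexuser && smbpasswd -a plexuser && systemctl restart smbd

The virtual disk backing /srv/media would just be a normal VM disk living on the Ceph pool.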

Let me know if this makes sense.

 
Posted : 04/01/2024 11:16 am
wikemanuel
(@wikemanuel)
Posts: 6
Active Member
 

@brandon-lee thank you for the two links. The first option of creating a VM on top of Ceph and building the pool from inside the VM seems easiest. The storage pool I am trying to create spans 3 servers and a total of 168TB of disk storage. I may be out of my knowledge realm when saying this, but it is my understanding that there is a "drive size limitation" in the VM environment when carving out storage (something like 2TB per volume). The goal is to have a single mount point for all 168TB and not several drives. Is there a particular type of VM/share combo I should look at using?

 
Posted : 04/01/2024 11:42 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

OK, so there are several layers to think about here. The first is the storage layer presented by Proxmox itself, and it looks like Proxmox has no issues with very large volumes:

According to this post, you can assign these very large volumes to a VM:

https://forum.proxmox.com/threads/maximun-size-of-volume-or-disk-that-can-be-assign-to-proxmox.7300/

For 32-bit CPUs on 2.6 kernels, the maximum LV size is 16TB.
For 64-bit CPUs on 2.6 kernels, the maximum LV size is 8EB. (Yes, that is a very large number.)
For 2.4 based kernels, the maximum LV size is 2TB.

Also, on the operating system side, you would need to consider the configuration there as well. It looks like in Windows Server 2019, NTFS can support an 8 PB volume when using a 2,048 KB (2 MB) cluster size. Note the documentation here:

https://learn.microsoft.com/en-us/windows-server/storage/file-server/ntfs-overview
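The practical takeaway is that the NTFS volume limit scales with the cluster size (NTFS tops out around 2^32 clusters, so 4,294,967,295 x 2 MB works out to roughly 8 PB). As a rough sketch only (the drive letter is a placeholder, and formatting erases the volume), picking the larger cluster size in Windows would look something like:

# PowerShell inside the Windows VM -- formats (erases) the D: volume with 2 MB clusters
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 2MB

So a single 168TB NTFS volume inside the VM should not be an issue, as long as the virtual disk uses GPT rather than MBR (MBR is where the old 2TB ceiling comes from).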

 
Posted : 04/01/2024 12:50 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

Also, here is a pretty cool discussion from 45Drives on really large Ceph storage:

 
Posted : 04/01/2024 1:06 pm
wikemanuel
(@wikemanuel)
Posts: 6
Active Member
 

@brandon-lee this is all really great stuff. I also ran into this video that appears to show that a CephFS filesystem can be accessed directly from Windows using the Ceph Dokan driver:

 
Posted : 04/01/2024 4:44 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel Nice! That is super cool....will have to try this out myself 👍
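From a quick skim of the Ceph for Windows docs, the client side looks roughly like this (untested on my end; it assumes the Ceph for Windows MSI is installed and that ceph.conf plus a keyring have been copied under C:\ProgramData\ceph, and the drive letter is just a placeholder):

# mount the default CephFS filesystem as drive x: using the Dokan driver
ceph-dokan.exe -l x

The nice part is that it shows up as a normal drive letter, so SMB would not even be needed for Windows clients that can reach the Ceph network directly.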

 
Posted : 04/01/2024 11:40 pm
wikemanuel
(@wikemanuel)
Posts: 6
Active Member
 

@brandon-lee it would be cool for you to do a similar video.
They say there is no such thing as a stupid question, but let me try... 🙂
In your video "Proxmox 8 Cluster with Ceph Storage configuration" you create a 3-node Proxmox cluster, with a 53.69GB disk for each node. I suspect the total usable pool size is only 53.69GB (a 3-way mirror, if you will)... What happens if you add another node? A 4-way mirror, or something else? What happens if you add an additional disk to a node but not to all nodes (does the pool grow)? All these questions are leading up to understanding how Ceph grows, not only in performance (additional nodes) but also in usable pool space.

 
Posted : 05/01/2024 8:51 am
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel I am setting up my lab to confirm my thoughts on your questions, and I will post back soon. Stay tuned.

 
Posted : 05/01/2024 1:07 pm
wikemanuel
(@wikemanuel)
Posts: 6
Active Member
 

@brandon-lee not all heroes wear capes! Thank you for doing this... my continued research points to storage efficiency settings: 3x replication (33% efficiency) vs. erasure coding (66% for a 3-node cluster with 1 failure tolerance, or as high as 71.5% efficiency in larger 14-node clusters with 4 failure tolerance). In summary, native Ceph 3x replication at 33% efficiency does not excite me for my home lab storage pool solution... but 66%+ with erasure coding might be the cat's meow.
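For anyone following along, the usable fraction with erasure coding is roughly k/(k+m), so a 2+1 profile gives the ~66% figure above. As a rough sketch only (the profile and pool names are placeholders, and this assumes at least 3 hosts with the failure domain set to host), the Ceph CLI side would be something like:

# 2 data chunks + 1 coding chunk -> 2/(2+1) ~ 66% usable, tolerates 1 host failure
ceph osd erasure-code-profile set ec-2-1 k=2 m=1 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec-2-1
# RBD/CephFS on an erasure-coded pool also needs partial overwrites enabled:
ceph osd pool set ecpool allow_ec_overwrites true

Keep in mind that 2+1 on exactly 3 nodes leaves no spare host to rebuild onto after a host failure, which is part of why the default is 3x replication.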

 
Posted : 05/01/2024 1:29 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel No worries, ok so I will post answers as I work through them. The first is simply adding an additional disk to one of the hosts. 

I grabbed some screenshots of a 3-node cluster that I built, each node with a single disk (much like my blog post). The pool was around 150 GB. I added one disk on one of the nodes, and as you can see at the end, the pool size is near 200 GB, so it does add space to the pool (the rough CLI equivalent is below the screenshots).

[Screenshots: Ceph dashboard showing the pool capacity before and after adding a disk to one node]
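For reference, adding the extra disk from the command line is just a matter of creating a new OSD on it and letting the cluster rebalance; roughly (the device path is a placeholder for whatever the new disk shows up as):

# run on the node that received the new disk
pveceph osd create /dev/sdb
ceph osd tree    # confirm the new OSD shows up under the right host
ceph df          # RAW capacity grows once the OSD is in and up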
 
Posted : 05/01/2024 2:59 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel This is after adding a 4th host to the cluster. You can see the pool size is growing as expected. The change is immediate....pretty cool.

[Screenshots: Ceph dashboard showing the pool capacity after adding a fourth node]
 
Posted : 05/01/2024 3:58 pm
wikemanuel
(@wikemanuel)
Posts: 6
Active Member
 

This is all super useful! If I am reading this correctly, that 249.98GB at the end is 4 nodes, each with a ~50GB disk, and 1 of the nodes using 2 of the 50GB disks... so I am assuming it is showing you the total raw capacity (similar to a stripe), but once you implement some sort of redundancy the total usable will be lower, no? It would be mind-boggling to think it adds drives up like a stripe and still manages to have redundancy. I would assume 33% or 66% usable of the 249.98GB, depending on whether you use replication or erasure coding.

 
Posted : 05/01/2024 5:07 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel Actually, I should have clarified that screenshot above...that is the RAW storage displayed by Ceph on the Ceph dashboard. Adding disks and nodes does increase your total raw capacity. However, when I started with the 3-node configuration, the pool was set to a 3/2 (size/min_size) replication configuration:

[Screenshot: Ceph pool settings showing the 3/2 size/min_size configuration]

This results in only about 33% of the raw space being usable from the host perspective (doing the math, 249.98 GB / 3 ≈ 83 GB, which is roughly what you see on the host side; there is a quick CLI check below the screenshot).

[Screenshot: host-side storage view showing roughly 83 GB available]
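If you want to sanity-check that from the CLI rather than the GUI, the replica count and the raw-vs-usable numbers show up like this (the pool name is a placeholder for whatever you named yours):

# size/min_size of 3/2 means every object is stored 3 times,
# so usable space is roughly raw capacity / 3
ceph osd pool get <poolname> size        # -> size: 3
ceph osd pool get <poolname> min_size    # -> min_size: 2
ceph df                                  # MAX AVAIL already accounts for replication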

 

 
Posted : 05/01/2024 11:00 pm
Brandon Lee
(@brandon-lee)
Posts: 341
Member Admin
Topic starter
 

@wikemanuel CephFS from Proxmox is awesome. I have been playing around with it on top of the Ceph storage and was able to successfully mount the CephFS filesystem running on Proxmox on both Linux and Windows (rough mount command after the screenshots). A blog post will be forthcoming:

[Screenshot: CephFS mounted in Linux]
[Screenshot: CephFS mounted in Windows]
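On the Linux side it was basically just the kernel CephFS client. Roughly something like this (the monitor address, user, and secret file are placeholders for your own values):

# mount CephFS from a Proxmox-hosted Ceph monitor on a Linux client
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

The Windows side used the ceph-dokan approach mentioned earlier in the thread.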
 
Posted : 06/01/2024 9:34 pm
wikemanuel reacted
Page 1 / 2