Windows share from a Proxmox HA Ceph Cluster

16 Posts
2 Users
2 Reactions
949 Views
wikemanuel
Posts: 6
(@wikemanuel)
Active Member
Joined: 11 months ago

@brandon-lee this is all really great stuff. I also ran into this video that appears to show that CephFS can be accessed directly from Windows via the Ceph Dokan driver:
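For anyone who wants to try it, my understanding (untested on my side) is that the Ceph for Windows installer ships ceph-dokan and the filesystem gets mapped to a drive letter roughly like the sketch below; the drive letter and the little wrapper function are just placeholders, and exact flags may vary by Ceph release:

```python
# Rough sketch only: mapping CephFS to a Windows drive letter with ceph-dokan.
# Assumes the Ceph for Windows MSI is installed and that ceph.conf plus a keyring
# are in the installer's default config directory.
import subprocess

def mount_cephfs(drive_letter: str = "X") -> None:
    """Map the default CephFS filesystem to the given drive letter (placeholder wrapper)."""
    subprocess.run(["ceph-dokan.exe", "-l", drive_letter], check=True)

if __name__ == "__main__":
    mount_cephfs("X")
```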

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 395

@wikemanuel Nice! That is super cool... will have to try this out myself 👍

Reply
wikemanuel
Posts: 6
(@wikemanuel)
Active Member
Joined: 11 months ago

@brandon-lee it would be cool for you to do a similar video.
They say there is no such thing as a stupid question, but let me try... 🙂
In your video "Proxmox 8 Cluster with Ceph Storage configuration" you create a 3-node Proxmox cluster with a 53.69 GB disk on each node. I suspect the total usable pool size is only 53.69 GB (a 3-way mirror, if you will). What happens if you add another node: a 4-way mirror or something else? What happens if you add an additional disk to one node but not all nodes (does the pool grow)? All these questions are leading up to understanding how Ceph grows, not only in performance (additional nodes) but also in usable pool space.
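To frame the question, here is my back-of-the-envelope math, assuming the default 3/2 replicated pool (size = 3) and the disk sizes from the video; correct me if my model is off:

```python
# Rough capacity math for a replicated Ceph pool with size=3 (the Proxmox default).
osd_gb = 53.69        # one disk per node, from the video
replica_size = 3      # every object is stored on 3 different hosts

for nodes in (3, 4):
    raw_gb = osd_gb * nodes            # what the Ceph dashboard reports
    usable_gb = raw_gb / replica_size  # space left after 3x replication
    print(f"{nodes} nodes -> raw {raw_gb:.2f} GB, usable ~{usable_gb:.2f} GB")

# 3 nodes -> raw 161.07 GB, usable ~53.69 GB  (the "3-way mirror" I suspect)
# 4 nodes -> raw 214.76 GB, usable ~71.59 GB  (if it stays at 3 copies rather than
#                                              becoming a 4-way mirror)
```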

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 395

@wikemanuel I am setting up my lab to confirm my thoughts on your questions and I will post back soon. Stay tuned.

Reply
wikemanuel
Posts: 6
(@wikemanuel)
Active Member
Joined: 11 months ago

@brandon-lee not all heroes wear capes! Thank you for doing this. My continued research points to storage efficiency settings: 3x replica (33% efficiency) vs. erasure coding (66% for a 3-node cluster with 1 failure tolerance, or as high as 71.5% efficiency in larger 14-node clusters with 4 failure tolerance). In summary, Ceph's default 33% efficiency does not excite me for my home lab storage pool solution... but 66%+ with erasure coding might be the cat's meow.
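Spelling out where those percentages come from (the erasure-coding profiles below are my guesses at the layouts behind the 66% and 71.5% figures):

```python
# Usable fraction of raw capacity: replicated pools keep `size` full copies, while
# erasure-coded pools split objects into k data + m coding chunks and can lose m of them.
def replica_efficiency(size: int) -> float:
    return 1 / size

def ec_efficiency(k: int, m: int) -> float:
    return k / (k + m)

print(f"3x replica:         {replica_efficiency(3):.1%}")   # ~33.3%
print(f"EC 2+1 (3 nodes):   {ec_efficiency(2, 1):.1%}")     # ~66.7%, tolerates 1 failure
print(f"EC 10+4 (14 nodes): {ec_efficiency(10, 4):.1%}")    # ~71.4%, tolerates 4 failures
```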

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 395

@wikemanuel No worries, ok so I will post answers as I work through them. The first is simply adding an additional disk to one of the hosts.

I grabbed some screenshots of a 3 node cluster that I built, each with a single disk (much like my blog post). The pool was around 150 GB. I added one disk on one of the nodes, and as you can see at the end, the pool size is near 200 GB, so it does add space to the pool.

[Screenshots: Ceph dashboard capacity before and after adding the extra disk, 2024-01-05 13:11–13:17]
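As a rough sanity check of those numbers (the host names and ~50 GB disk sizes below are just placeholders): with only 3 hosts and size=3, every object keeps one copy on each host, so the extra disk mainly grows the raw figure rather than the usable one.

```python
# Rough model: 3 hosts, replicated pool with size=3, failure domain = host.
hosts = {"pve1": [50, 50], "pve2": [50], "pve3": [50]}   # GB per OSD; extra disk on pve1

raw_gb = sum(sum(osds) for osds in hosts.values())       # what the dashboard adds up
usable_gb = min(sum(osds) for osds in hosts.values())    # 3 hosts, 3 copies -> one full copy per host

print(f"raw: {raw_gb} GB")        # 200 GB, matching the jump from ~150 GB
print(f"usable: ~{usable_gb} GB") # still ~50 GB until the other hosts also get a second disk
# (Ceph's own MAX AVAIL calculation also factors in full ratios, so the UI sits a bit lower.)
```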
Reply
Brandon Lee
Posts: 395
Admin
Topic starter
(@brandon-lee)
Member
Joined: 14 years ago

@wikemanuel This is after adding a 4th host to the cluster. You can see the pool size is growing as expected. This is immediate... pretty cool.

[Screenshots: Ceph dashboard capacity after adding the 4th host, 2024-01-05 14:55–14:56]
Reply
wikemanuel
Posts: 6
(@wikemanuel)
Active Member
Joined: 11 months ago

This is all super useful! If I am reading this correctly, that 249.98 GB at the end is 4 nodes, each with a ~50 GB disk, and 1 of the nodes using 2 of the 50 GB disks... so I am assuming it is showing the total available (similar to a stripe), but once you implement some sort of redundancy the total usable will be lower, no? It would be mind-boggling to think it adds drives up like a stripe and still manages to have redundancy. I would assume 33% or 66% usable of the 249.98 GB, depending on whether or not you use erasure coding.

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 395

@wikemanuel Actually, I should have clarified that screenshot above... this is the RAW storage displayed by Ceph on the Ceph dashboard. Adding disks and nodes does increase your total raw capacity. However, when I started with the 3-node configuration, the pool was set to the default 3/2 (size/min_size) configuration:

[Screenshot: pool configuration showing size 3 / min_size 2, 2024-01-05 11:43]

This results in only about 33% of that space being available from the host perspective (doing the math: 249.98 GB / 3 ≈ 83 GB, which is roughly what you see on the host side).

[Screenshot: Proxmox host storage view showing ~83 GB available, 2024-01-05 21:53]
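Tying that back to the earlier erasure-coding discussion, the same raw capacity would stretch further with an EC data pool; the k=2, m=1 profile below is just an example, not what this cluster is running:

```python
# Usable space from the same ~250 GB of raw capacity under the two layouts.
raw_gb = 249.98

usable_replicated = raw_gb / 3        # size=3 replication -> ~83 GB (matches the host view)
usable_ec_2_1 = raw_gb * 2 / (2 + 1)  # erasure coding k=2, m=1 -> ~167 GB

print(f"replicated 3/2: ~{usable_replicated:.0f} GB usable")
print(f"EC 2+1:         ~{usable_ec_2_1:.0f} GB usable")
# Note: RBD/CephFS metadata still needs a small replicated pool; only the data
# chunks land on the erasure-coded pool.
```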


Reply