
VMware NVMe Memory Tiering Questions Answered (Memory over NVMe)


Posts: 3
(@fishtacos)
New Member
Joined: 2 days ago

3-node cluster with a 3.2TB NVMe PCIe add-in card in each: 1x HGST SN100, 2x Oracle F320. Low-power homelab, nothing crazy.

The F320s are on updated firmware and use V-NAND; the SN100 I just plugged in and forgot about.

Specs are mid-tier speed-wise; endurance-wise the F320s are rated for 29 PBW and the HGST for 17 PBW. Sequential read/write on the HGST is about half that of the F320s... but I also got it for ~90 USD shipped all the way from Israel to the US, so I'm happy with the price/perf. In other words, the F320s work out to roughly 5 DWPD and the SN100 to roughly 3 DWPD.
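For anyone wondering how those DWPD figures fall out of the PBW ratings, here's the back-of-the-envelope math, assuming 3.2TB of capacity and a 5-year warranty window (my assumptions, not vendor datasheet numbers):

```python
# Rough PBW -> DWPD conversion.
# Assumes the rating is spread over a 5-year warranty window (assumption, not from a datasheet).
def pbw_to_dwpd(pbw: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    total_writes_tb = pbw * 1000            # 1 PB = 1000 TB
    days = warranty_years * 365
    return total_writes_tb / (capacity_tb * days)

print(f"F320 (29 PBW): {pbw_to_dwpd(29, 3.2):.1f} DWPD")   # ~5.0
print(f"SN100 (17 PBW): {pbw_to_dwpd(17, 3.2):.1f} DWPD")  # ~2.9
```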

I'm using an unsupported hack on an unsupported tech preview feature: a 256GB partition on each card for memory tiering (thanks, William Lam!) with the remainder used as a local datastore. Namespaces would've been nicer, but the Oracle F320s don't support multiple namespaces in firmware, and I haven't checked the HGST yet.
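If you want to replicate the split, the actual partedUtil/esxcli steps are in William Lam's write-up; the only arithmetic involved is the GPT sector math for carving out a tier partition plus a datastore partition. Roughly like this (a sketch only, assuming 512-byte sectors and standard GPT overhead, using the 256GB / 3.2TB numbers from this post):

```python
# Sketch of the sector layout for one NVMe device split into a tier partition
# plus a datastore partition. 512-byte sectors and standard GPT overhead assumed.
SECTOR = 512
GIB = 1024**3

def split_layout(device_bytes: int, tier_bytes: int = 256 * GIB):
    total_sectors = device_bytes // SECTOR
    first_usable = 2048                   # 1 MiB alignment after the primary GPT
    last_usable = total_sectors - 34      # reserve room for the backup GPT at the end
    tier = (first_usable, first_usable + tier_bytes // SECTOR - 1)
    datastore = (tier[1] + 1, last_usable)
    return tier, datastore

tier, datastore = split_layout(int(3.2e12))
print("tier partition (start, end sectors):", tier)
print("datastore partition (start, end sectors):", datastore)
```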

If that partition hack stops working, I'll just take a risk with the 512GB SN740s (300 TBW) in each host and dedicate those to tiering. Definitely not a great idea by comparison, endurance-wise.
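To put the "not a great idea" part in perspective, here's what 300 TBW buys you versus the F320s' rating at a steady write load (the 500 GB/day figure is a purely hypothetical placeholder; real tiering write volume depends entirely on the workload):

```python
# Years until the rated endurance is consumed at a constant write rate.
# 500 GB/day is a hypothetical placeholder write load, not a measured number.
def endurance_years(rated_tbw: float, writes_gb_per_day: float = 500.0) -> float:
    return rated_tbw * 1000 / writes_gb_per_day / 365

print(f"SN740 (300 TBW): {endurance_years(300):.1f} years")                 # ~1.6
print(f"F320 (29 PBW = 29,000 TBW): {endurance_years(29_000):.0f} years")   # ~159
```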

This is really exciting tech, given that it works on low-tier hardware rather than needing specialized CXL setups.

You have a wonderfully informative website, and I have spent many, many hours reading, absorbing, being inspired by, applying, and replicating your projects. Big fan!
