I have had several questions about NVMe memory tiering since writing several posts about it and featuring it in recent YouTube videos. I wanted to address some of those here, as I think it will be helpful to share my findings from using NVMe memory tiering over the past few weeks. Hopefully this will help you decide whether you want to play around with it or not.
Can I use this in versions prior to vSphere 8.0 Update 3?
No, you will need to upgrade to ESXi 8.0 Update 3 to use the feature.
Do you have to have vCenter Server to use memory tiering?
No, you can turn this on with standalone ESXi hosts; vCenter Server is not required to make use of it.
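For reference, on a standalone host you can enable the feature from the ESXi shell with esxcli. The kernel setting name below is based on what VMware has shared for the 8.0 Update 3 tech preview, so double-check it against the current documentation before running it on your host:

```shell
# Enable the memory tiering kernel setting (tech preview in vSphere 8.0 Update 3)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# A host reboot is required for the kernel setting to take effect
reboot
```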
Isn't this just paging like we have seen in Linux for years?
No, evidently not. VMware is using a lot of logic under the hood to place memory pages more intelligently than simple memory paging, which reduces page faults and brings other benefits. It decides which pages need to live in the fastest DRAM and which ones can reside on the NVMe device.
Does it require a special license?
It is currently a tech preview, so no license is needed to play around with this in the home lab if you have a VMUG subscription, etc. I suspect it will eventually be tied to the new VVF or VCF licensing structure.
Does NVMe memory tiering require a whole NVMe drive?
Yes, it does. You tell NVMe tiering which drive you want to use for this purpose and it claims the entire device, so you can't use just part of a drive via a partition, etc. It will create its own partition structure on the drive for this purpose.
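To illustrate, claiming a whole device looks something like the following in the ESXi shell. The `tierdevice` namespace and the `/Mem/TierNvmePct` advanced option come from the tech preview material; the device path is a placeholder you would replace with your own drive's identifier:

```shell
# Find the identifier of the NVMe device you want to dedicate to tiering
esxcli storage core device list

# Claim the entire device as a tier device (replace <device-id> with your own)
esxcli system tierdevice create -d /vmfs/devices/disks/<device-id>

# Optionally set the NVMe tier size as a percentage of DRAM (e.g. 400 = 4x DRAM)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
```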
Can you have more than one NVMe tiering drive?
Yes. I inadvertently set up two drives without realizing it, as I mistakenly copied and pasted the wrong drive ID when I already had a drive allocated, and both of them ended up marked for NVMe memory tiering. I am not exactly sure how it is used when more than one is configured. Possibly for redundancy.
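If you want to see which devices a host has configured, the `tierdevice` namespace appears to include a list operation (again, verify against the current tech preview docs, and run the bare namespace command to see what subcommands your build supports):

```shell
# Show devices currently configured for NVMe memory tiering
esxcli system tierdevice list

# Show all available subcommands in the namespace on your build
esxcli system tierdevice
```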
Are there limitations when using the NVMe memory tiering feature?
Yes, there are. Here are a few I would like to mention:
- You can't use storage migration - When attempting to migrate VMs from one datastore to another, I noticed the storage migration fails. When I disabled NVMe tiering for the host, the storage migration was successful. I believe this is due to the snapshot limitation mentioned below
- You can't use "with memory" snapshots - You can only capture snapshots without the memory option. This is currently a limitation of NVMe memory tiering, and I suspect it is the reason storage migrations fail
- You can't do nested virtualization - You can't set up a VM with nested virtualization enabled on a memory tiering-enabled host
I am curious if any of you have tried/are trying out NVMe memory tiering. Have you discovered any limitations to note outside of the list above? @jnew1213 I know you had mentioned you were trying this. Have you run into any "gotchas" so far?