I haven't yet run ESXi on any machine with P+E cores. ESXi's scheduler is unaware of the difference between core types, so it treats them all the same when assigning them to a workload. I don't see how this is a good thing.
I realize the "thing to do" is to tell ESXi at boot time to ignore the differences between cores (the cpuUniformityHardCheckPanic=FALSE kernel option), but I think disabling the efficiency cores in the machine's BIOS and letting ESXi schedule only on performance cores may be the better idea, depending on the workload and on whether consistency across application instances matters.
Example: we compute Pi to 100,000 digits in a VM running on an Alder Lake Core i9 with 24 cores (P+E). The computation takes, say, 7 minutes. We run it a second time and it takes 12 minutes, because this time ESXi happened to place the VM's vCPUs on efficiency cores more often. Just by chance. There must be some workloads that you never want scheduled on efficiency cores. Ever.
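A quick way to see this kind of jitter from inside a guest is simply to time the same fixed workload several times and look at the spread. This is just an illustrative sketch (the function names and the sum-of-squares stand-in for the Pi computation are my own, nothing ESXi-specific); on a host where the hypervisor sometimes lands your vCPUs on E-cores, the spread should be noticeably wider than on uniform cores:

```python
import statistics
import time


def cpu_bound_work(n: int) -> int:
    # Fixed amount of CPU work; a stand-in for "compute Pi to 100,000 digits".
    total = 0
    for i in range(n):
        total += i * i
    return total


def run_to_run_variation(runs: int = 5, n: int = 200_000) -> float:
    # Time the identical workload repeatedly. On a heterogeneous CPU the
    # scheduler may place us on P- or E-cores from run to run, widening
    # the spread even though the work is constant.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        cpu_bound_work(n)
        times.append(time.perf_counter() - start)
    # Coefficient of variation: stdev relative to mean runtime.
    return statistics.stdev(times) / statistics.mean(times)


if __name__ == "__main__":
    cv = run_to_run_variation()
    print(f"run-to-run variation: {cv:.1%}")
```

On a box with only one core type (or with E-cores disabled in the BIOS), you'd expect this number to stay small and stable across invocations.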
As an aside, the Alder Lake N100 in the little GMKtec mini that I've been playing with has only efficiency cores. So that eliminates issues with different core types. ESXi runs just fine on it without any boot string requirements. Only the TPM is not usable (no VIB for it, it seems).
As another aside (can you have too many asides?), my desktop PC happens to be the Alder Lake Core i9 with 24 cores described above. It's an Intel NUC Extreme, running Windows 11. I don't think it'll ever be running ESXi, even when it's eventually retired from desktop use.
The MS-01 looks nice. But for its price, I can pick up two Dell PowerEdge R730 systems, which we know run ESXi really, really well. Just sayin'.