Hey Everyone,
I am building a 4-node cluster with Windows Server 2022, Hyper-V, and Storage Spaces Direct. Everything is going well except guest VM storage performance, which is much slower than expected. In general, the Hyper-V hosts see ~10x the storage performance of the Hyper-V guests.
Hyper-V Hosts:
- Model: Dell 740XD2
- RAM: 768 GB
- CPU: 64 cores (2 sockets x 32 cores)
- Storage: 2x 256GB SAS SSD (OS), 4x 1.6TB NVMe SSD, 16x 3.2TB NVMe SSD
- Network: 2x Broadcom 25Gb, Cisco ACI
Performance Numbers:
Computer   ReadMiBSec  WriteMiBSec  ReadIOPS  WriteIOPS  Hardware  Storage
HVGuest21          60           18     7,654      2,297  HV VM     S2D - volume21  <The Problem>
HVHost21          636          190    81,353     24,338  Dell      S2D - volume21
HVHost22          616          184    78,899     23,603  Dell      S2D - volume22
HVHost23          509          153    65,215     19,524  Dell      S2D - volume23
HVHost24          606          181    77,575     23,210  Dell      S2D - volume24
- Performance data was generated with DiskSpd (diskspd.exe), using the following command on each host:
diskspd.exe -b8k -d30 -o4 -t8 -h -r -w23 -L -Z1G -c20G C:\ClusterStorage\<hvhostxx>\DiskSpd.dat
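For reference, an equivalent in-guest run would use the same flags pointed at a volume inside the VM; a sketch with the flags annotated (the C:\DiskSpd.dat target path inside the guest is an assumption):

```shell
# Same flags as the host runs, aimed at a volume inside the VM.
# C:\DiskSpd.dat is an assumed path; point it at whichever disk backs the VHDX.
#   -b8k   8 KiB block size            -d30   30-second duration
#   -o4    4 outstanding I/Os/thread   -t8    8 threads per target
#   -h     disable software caching and hardware write caching
#   -r     random I/O                  -w23   23% writes / 77% reads
#   -L     capture latency stats       -Z1G   1 GiB random write-source buffer
#   -c20G  create a 20 GB test file
diskspd.exe -b8k -d30 -o4 -t8 -h -r -w23 -L -Z1G -c20G C:\DiskSpd.dat
```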
- Host storage is aligned - there is one virtual disk per HV host, each virtual disk is assigned to the corresponding host, and each host uses the corresponding disk for testing
- The HV guest VM's storage is aligned in the same way
- If a host and its storage are unaligned, performance drops to ~50% of these numbers for both the HV hosts and the HV guest
- There is minimal load on the HV hosts, with 2 idle test VMs per host.
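The host/CSV alignment described above can be checked and adjusted with the FailoverClusters cmdlets from an elevated PowerShell session on any node; a sketch using this post's host names (the CSV display name "Cluster Virtual Disk (volume21)" is an assumption, so check the actual name with the first command):

```shell
# List each Cluster Shared Volume and the node that currently owns it
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Move ownership of the CSV backing volume21 to HVHost21 so host and
# storage stay aligned (CSV display name is an assumption)
Move-ClusterSharedVolume -Name "Cluster Virtual Disk (volume21)" -Node "HVHost21"
```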
I know it's not apples to apples, but in our VMware environment (vSAN or NetApp/NFS) guest VM throughput and IOPS are ~3x what we see with Hyper-V / S2D, on identical underlying servers and networking, and each VMware host is running ~60 VMs.
Questions
- Is it typical to see this much difference in performance between the host and the guest?
- Can I do something to improve the storage performance of my guest VMs?
I hope I'm just missing something. Any suggestions would be appreciated.
Thanks,
Rob