We have a two-node Hyper-V converged cluster running Windows Server 2019 Datacenter that doesn't appear to be using its storage network. Disk I/O is extremely slow when virtual machines perform writes.
- Nodes: 2 x Dell R740XD ReadyNodes
- Network: each node has 2 x 10Gb Broadcom NetXtreme E-Series interfaces connected to the upstream core switch via DACs, plus 2 x 25Gb QLogic FastLinQ QL41262-DE adapters direct-connected via DACs to the other node in the pair as a storage network.
Troubleshooting to date:
- Functionally the nodes are working correctly apart from the poor I/O. All virtual machines are working correctly and migrate successfully between nodes.
- Full network connectivity between nodes (management and storage interfaces can ping each other).
- Test-RDMA.PS1 passes on both storage interfaces using iWARP.
- Both nodes are running the latest Windows updates and Dell firmware/drivers.
- "Get-SmbClientNetworkInterface" shows both storage interfaces with both "RSS Capable" and "RDMA Capable" as true. All other interfaces show "false" for both.
- Counters and SNMP show very low utilization of the storage interfaces (~100 kB/s) even when the nodes are under heavy I/O, while the 10Gb host interfaces show high network throughput.
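For reference, this is roughly how the RDMA checks above were run — adapter names, the interface index, and the remote IP below are placeholders for our environment, and Test-RDMA.PS1 is the script from Microsoft's SDN repo:

```powershell
# Confirm RDMA is enabled on the storage NICs (adapter names are placeholders)
Get-NetAdapterRdma -Name "Storage1","Storage2"

# Confirm SMB considers the storage interfaces RSS- and RDMA-capable
Get-SmbClientNetworkInterface |
    Select-Object FriendlyName, RssCapable, RdmaCapable, LinkSpeed

# Test-RDMA.PS1 invocation (-IsRoCE $false because these NICs use iWARP;
# IfIndex and IP are examples)
.\Test-RDMA.PS1 -IfIndex 23 -IsRoCE $false -RemoteIpAddress 172.16.0.2 -PathToDiskspd C:\Tools\Diskspd\
```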
Would appreciate any hints on how to troubleshoot this further and get the nodes to use the storage interfaces for storage traffic.
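In case it helps anyone answering, these are the checks I'm planning to run next — a sketch of the diagnostics, not output I have yet:

```powershell
# Which interfaces is SMB multichannel actually using for live traffic?
Get-SmbMultichannelConnection | Format-Table -AutoSize

# Any multichannel constraints pinning SMB to particular NICs?
Get-SmbMultichannelConstraint

# Cluster network roles and metrics -- the storage network should have
# the lowest metric so the cluster prefers it for storage/CSV traffic
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric

# RDMA activity counters while generating I/O (should be non-zero on the
# storage NICs if SMB Direct is actually in use)
Get-Counter -Counter "\RDMA Activity(*)\RDMA Inbound Bytes/sec"
```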