Hyper-V Networking Tuning

Lanky Doodle 241 Reputation points
2023-09-08T16:04:45.5966667+00:00

Hi,

I'm in the process of building a new Server 2019-based Hyper-V Cluster (well, multiple actually). We're using a 4-port LACP LBFO team made up of 10Gb optical NICs. As part of testing we're moving VMs from one Hyper-V server to another (prior to configuring clustering).
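For context, the team on each node was built along these lines. A minimal sketch only; "Team1" and the NIC names are placeholders for whatever your nodes actually use:

    # 4-port LACP team; "Team1" and the NIC names are placeholders.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic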

Right now, performance of that sucks. According to Task Manager, the interface used for VM Move is maxing at about 900Mbps (yes, Mbps!), and the Throughput 'scale' on the graph maxes out at 1Gbps.

According to Windows NIC Teaming, we're getting 40Gbps in the Team.

In my earlier days of Hyper-V (back in 2008/2008 R2) there was some guidance on changing the physical NIC settings on each node, like disabling RSS, disabling LSO, disabling Flow Control, disabling VMQ only on adapters of 1Gb or lower, etc. etc.

So I just wanted to check what the current guidance is on the NIC properties. We have a mixture of server vendors (different vendors aren't planned to be in the same cluster), so we have Broadcom NICs, Intel NICs and Mellanox NICs.

Right now the only ones we're configuring are (a sketch of how we're applying them follows this list):

Flow Control: Disabled
Jumbo Packet: 9014
Speed and Duplex: 10Gb Full Duplex (not leaving at auto)
VMQ: Enabled (since they're 10Gb NICs)
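
For what it's worth, this is roughly how we're applying those per node. A minimal sketch, assuming the driver exposes these exact display names; the advanced-property keywords differ between Broadcom, Intel and Mellanox drivers, and "NIC1" is a placeholder adapter name:

    # List the exact advanced-property keywords this driver exposes first,
    # since display names vary by vendor:
    Get-NetAdapterAdvancedProperty -Name "NIC1" |
        Format-Table DisplayName, DisplayValue, RegistryKeyword

    # Apply the settings listed above (display names are illustrative):
    Set-NetAdapterAdvancedProperty -Name "NIC1" -DisplayName "Flow Control" -DisplayValue "Disabled"
    Set-NetAdapterAdvancedProperty -Name "NIC1" -DisplayName "Jumbo Packet" -DisplayValue "9014"
    Set-NetAdapterAdvancedProperty -Name "NIC1" -DisplayName "Speed & Duplex" -DisplayValue "10 Gbps Full Duplex"

    # VMQ stays enabled since these are 10Gb NICs:
    Enable-NetAdapterVmq -Name "NIC1"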

Thanks


1 answer

    Limitless Technology 44,746 Reputation points
    2023-09-11T15:15:55.3333333+00:00

    Hello,

    Usually, underperformance of NIC Teaming in transfer operations can be fixed by checking these parameters:

    • Ensure that the interface properties are configured correctly: in the adapter properties, confirm the link has actually negotiated 10Gbps Full Duplex and is not falling back to 1Gbps (a PowerShell check is sketched after this list).
    • Check the intermediary network hardware. Different vendors may expose different settings and properties, and while some equipment is fiber-to-fiber end to end, other paths include intermediary copper links, which can limit the speed to 1Gbps in the best case.
    • You mentioned VMQ is enabled, which is usually recommended for 10Gb NICs. However, it's important to ensure that your NICs and drivers fully support VMQ and are properly configured; certain combinations of NICs and drivers have compatibility issues with VMQ. Monitor performance with VMQ both enabled and disabled to see if there's any difference (the sketch below also shows how to inspect VMQ state).
    • You've mentioned Jumbo Frames (9014 MTU), which can improve performance in certain scenarios. Ensure that the MTU size is consistent across your entire network path, including switches and storage devices; a simple ping test for this follows below.
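
    As a minimal sketch for the link-speed and VMQ points (run on each node; adapter and team names are whatever your nodes use), the built-in cmdlets will confirm the negotiated link speed, the LBFO team state, and whether VMQ queues are actually being allocated:

        # Negotiated speed per physical NIC - should show 10 Gbps, not 1 Gbps
        Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, Status

        # LBFO team health and per-member state
        Get-NetLbfoTeam
        Get-NetLbfoTeamMember

        # Is VMQ enabled on each NIC, and are queues allocated to VMs?
        Get-NetAdapterVmq
        Get-NetAdapterVmqQueue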
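
    For the jumbo frame point, one quick end-to-end check: the 9014 value includes the 14-byte Ethernet header, so the IP MTU is 9000 and the largest unfragmented ICMP payload is 9000 - 28 = 8972 bytes. A don't-fragment ping of that size will fail if any device in the path is still at the default MTU (the target address below is a placeholder for the other node):

        # -f sets Don't Fragment, -l sets the payload size in bytes.
        # Replace 10.0.0.2 with the other Hyper-V node's address.
        ping 10.0.0.2 -f -l 8972

        # Confirm the MTU Windows is actually using per interface:
        Get-NetIPInterface -AddressFamily IPv4 | Format-Table InterfaceAlias, NlMtu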

    --If the reply is helpful, please Upvote and Accept as answer--

