We use an HP ProLiant DL380 Gen10 that has the infamous Broadcom NIC - the NetXtreme BCM5719 PCIe Gigabit Ethernet controller.
There seemed to be a problem with VMQ, which I think I have managed to resolve by following this guide:
https://www.reddit.com/r/sysadmin/comments/2k7jn5/after_2_years_i_have_finally_solved_my_slow/
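For anyone hitting the same symptoms, the core of that fix comes down to the following PowerShell (a sketch; "Ethernet 1" is a placeholder for one of my Broadcom ports, so substitute your own adapter name from Get-NetAdapter):

    # Show whether VMQ is currently enabled on each physical adapter
    Get-NetAdapterVmq | Format-Table Name, Enabled

    # Disable VMQ on the problematic Broadcom port, as the guide suggests
    Disable-NetAdapterVmq -Name "Ethernet 1"

The VMs may need a restart (or their network adapters toggled) before the change takes effect.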
On this particular server we are running Windows Server 2016 with the Hyper-V role enabled.
In my configuration I have:
- an external Hyper-V switch, which is the more heavily used of the two; most of the traffic goes through it.
- an internal Hyper-V switch, used just for testing. It is not really important, but I would still gladly take your advice on it. (Both are shown in the snippet below.)
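A quick way to list them and see which physical NIC the external switch is bound to (switch names will of course differ per host):

    # List the Hyper-V virtual switches and their bindings
    Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription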
So my question is: how do I properly configure these settings for maximum performance of my virtual network?
So I have three network interfaces:
- physical one (on the host)
- Virtual switch (on the host)
- Microsoft Hyper-V Network Adapter (on the guest system)
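The three are easy to tell apart in PowerShell (a sketch; run once on the host and once in the guest):

    # On the host: the physical Broadcom port, plus the
    # "vEthernet (<switch name>)" adapter that represents the virtual switch
    Get-NetAdapter | Format-Table Name, InterfaceDescription, Status

    # In the guest: the synthetic NIC appears with the description
    # "Microsoft Hyper-V Network Adapter"
    Get-NetAdapter | Format-Table Name, InterfaceDescription, Status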
Physical NIC (on the host)
802.3az EEE - disabled
ARP Offload - enabled / disabled
EEE Control Policies - Maximum performance
Ethernet @WireSpeed - enabled / disabled
Flow Control - Auto-Negotiation / Rx & Tx Enabled / disabled
Interrupt Moderation - enabled / disabled
Jumbo Mtu - 9014, 4088, 1514
Large send offload version 2 (IPv4) - enabled/disabled
Large send offload version 2 (IPv6) - enabled/disabled
Locally Administered Address
Maximum number of RSS Queues - 1-4 Queues
NS Offload - enabled/disabled
Priority & VLAN - Priority & VLAN Enabled
Receive Buffer - text field, default value 200
Receive Side Scaling - enabled / disabled
Speed & Duplex - Auto-Negotiation / 10-1000 Mb Full/Half Duplex
TCP/UDP Checksum Offload (IPv4) - Rx & Tx enabled / disabled
TCP/UDP Checksum Offload (IPv6) - Rx & Tx enabled / disabled
Transmit Buffers - 500
Virtual Machine Queues - Enabled
VLAN ID - text field, current value 0
VMQ Vlan Filtering - disabled
Wake on Magic Packet - enabled
Wake on pattern match - enabled
WOL Speed - Lowest Speed Advertised / Auto / 10 Mb / 100 Mb
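All of the physical NIC options above can also be read and changed without Device Manager; a sketch, with "Ethernet 1" again standing in for my Broadcom port:

    # Dump every advanced property with its registry keyword and current value
    Get-NetAdapterAdvancedProperty -Name "Ethernet 1" |
        Format-Table DisplayName, DisplayValue, RegistryKeyword, RegistryValue

    # Example: the same VMQ change as above, addressed by display name
    Set-NetAdapterAdvancedProperty -Name "Ethernet 1" `
        -DisplayName "Virtual Machine Queues" -DisplayValue "Disabled"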
Virtual switch (on the host)
IPSec Offload - Auth Header and ESP enabled / disabled
IPv4 Checksum Offload - Rx & Tx enabled / disabled
Jumbo Packet - 9014 bytes / 4088 bytes / disabled
Large send offload version 2 (IPv4) - enabled/disabled
Large send offload version 2 (IPv6) - enabled/disabled
Network Address - blank
Network Direct (RDMA) - enabled/disabled
Receive Side Scaling - enabled / disabled
TCP Checksum Offload (IPv4) - Rx & Tx enabled / disabled
TCP Checksum Offload (IPv6) - Rx & Tx enabled / disabled
UDP Checksum Offload (IPv4) - Rx & Tx enabled / disabled
UDP Checksum Offload (IPv6) - Rx & Tx enabled / disabled
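The same listing works for this adapter, which shows up on the host as a "vEthernet" NIC named after the switch ("External" below is just my placeholder for the switch name):

    # Advanced properties of the host vNIC behind the external switch
    Get-NetAdapterAdvancedProperty -Name "vEthernet (External)" |
        Format-Table DisplayName, DisplayValue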
Microsoft Hyper-V Network Adapter (on the guest)
Has the following options:
Forwarding optimization - enabled/disabled
Hyper-V Network Adapter Name - blank
IPSec Offload - Auth Header and ESP enabled / disabled
IPv4 Checksum Offload - Rx & Tx enabled / disabled
Jumbo Packet - 9014 bytes / 4088 bytes / disabled
Large send offload version 2 (IPv4) - enabled/disabled
Large send offload version 2 (IPv6) - enabled/disabled
Max Number of RSS Processors - 2-16 processors
Maximum number of RSS Queues - 2-16 Queues
Maximum RSS Processor Number - text field
Network Address - blank
Network Direct (RDMA) - enabled/disabled
Packet Direct - enabled/disabled
Receive Buffer Size - 1 - 16 MB
Receive Side Scaling - enabled / disabled
Recv Segment Coalescing (IPv4) - enabled / disabled
Recv Segment Coalescing (IPv6) - enabled / disabled
RSS Base Processor Number - text field, the current value is 4
RSS Profile - NUMA Scaling Static / NUMA Scaling / Closest Processor
Send Buffer Size - 1-128 MB
TCP Checksum Offload (IPv4) - Rx & Tx enabled / disabled
TCP Checksum Offload (IPv6) - Rx & Tx enabled / disabled
UDP Checksum Offload (IPv4) - Rx & Tx enabled / disabled
UDP Checksum Offload (IPv6) - Rx & Tx enabled / disabled
VLAN ID - text field, the current value is 0
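For the RSS options in particular, the same values can be inspected and set from PowerShell inside the guest (a sketch; "Ethernet" is the default adapter name in the guest, and the numbers just mirror my current values above rather than being a recommendation):

    # Current RSS layout for the guest's synthetic adapter
    Get-NetAdapterRss -Name "Ethernet"

    # The base processor / max processors / profile knobs from the list above
    Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 4 `
        -MaxProcessors 8 -Profile NUMAStatic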
How do the settings on these three adapters correlate with each other?
And what are the best options for maximum performance?