Windows Server 2025 Datacenter - LBFO is failing/deprecated - no full link speed with New-VMSwitch command

FAMO1403 5 Reputation points
2025-03-28T05:23:50.7266667+00:00

Name: Microsoft Windows Server 2025 Datacenter OS Version: 10.0.26100 N/A Build 26100

LBFO is deprecated in Windows Server 2025. Instead of using traditional LBFO NIC teaming, I created a Hyper-V vSwitch with Switch Embedded Teaming (SET) using the following command:

New-VMSwitch -Name "vSwitch" -NetAdapterName "SLOT 6 Port 1","SLOT 6 Port 2","SLOT 6 Port 3","SLOT 6 Port 4" -EnableEmbeddedTeaming $true -AllowManagementOS $true
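
The resulting SET team's teaming mode and load-balancing algorithm can be checked with Get-VMSwitchTeam; a quick sketch using the same switch name as above:

# Inspect the SET team created above: TeamingMode is SwitchIndependent for SET,
# and LoadBalancingAlgorithm defaults to HyperVPort here
Get-VMSwitchTeam -Name "vSwitch" | Select-Object Name, TeamingMode, LoadBalancingAlgorithm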

So far, everything is fine; by default, the LoadBalancingAlgorithm is set to HyperVPort. However, even though I have 4x 1 Gbps connections, the vEthernet (vSwitch) adapter shows a LinkSpeed of only 1 Gbps. When I choose Dynamic as the LoadBalancingAlgorithm, it shows a LinkSpeed of 4 Gbps:

PS C:\Users\Administrator> Set-VMSwitchTeam -Name "vSwitch" -LoadBalancingAlgorithm Dynamic
PS C:\Users\Administrator> Get-NetAdapter | Sort-Object Name

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Embedded NIC 1            Broadcom NetXtreme Gigabit Ethernet #2        6 Disconnected C4-CB-E1-E3-82-A8          0 bps
Embedded NIC 2            Broadcom NetXtreme Gigabit Ethernet #5       11 Disconnected C4-CB-E1-E3-82-A9          0 bps
SLOT 6 Port 1             Broadcom NetXtreme Gigabit Ethernet           2 Up           8C-84-74-0A-38-E8         1 Gbps
SLOT 6 Port 2             Broadcom NetXtreme Gigabit Ethernet #4        9 Up           8C-84-74-0A-38-E9         1 Gbps
SLOT 6 Port 3             Broadcom NetXtreme Gigabit Ethernet #6       13 Up           8C-84-74-0A-38-EA         1 Gbps
SLOT 6 Port 4             Broadcom NetXtreme Gigabit Ethernet #3       14 Up           8C-84-74-0A-38-EB         1 Gbps
vEthernet (vSwitch)       Hyper-V Virtual Ethernet Adapter              8 Up           8C-84-74-0A-38-E8         4 Gbps

When I change the LoadBalancingAlgorithm back to HyperVPort, it shows only a LinkSpeed of 1 Gbps again:

PS C:\Users\Administrator> Set-VMSwitchTeam -Name "vSwitch" -LoadBalancingAlgorithm HyperVPort
PS C:\Users\Administrator> Get-NetAdapter | Sort-Object Name

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Embedded NIC 1            Broadcom NetXtreme Gigabit Ethernet #2        6 Disconnected C4-CB-E1-E3-82-A8          0 bps
Embedded NIC 2            Broadcom NetXtreme Gigabit Ethernet #5       11 Disconnected C4-CB-E1-E3-82-A9          0 bps
SLOT 6 Port 1             Broadcom NetXtreme Gigabit Ethernet           2 Up           8C-84-74-0A-38-E8         1 Gbps
SLOT 6 Port 2             Broadcom NetXtreme Gigabit Ethernet #4        9 Up           8C-84-74-0A-38-E9         1 Gbps
SLOT 6 Port 3             Broadcom NetXtreme Gigabit Ethernet #6       13 Up           8C-84-74-0A-38-EA         1 Gbps
SLOT 6 Port 4             Broadcom NetXtreme Gigabit Ethernet #3       14 Up           8C-84-74-0A-38-EB         1 Gbps
vEthernet (vSwitch)       Hyper-V Virtual Ethernet Adapter              8 Up           8C-84-74-0A-38-E8         1 Gbps

Is there a way in Windows Server 2025 to use the HyperVPort LoadBalancingAlgorithm together with the full bandwidth of the NIC team? If not, can, for example, four virtual machines each use 1 Gbps simultaneously, or do all four virtual machines share a total of 1 Gbps when the LoadBalancingAlgorithm is HyperVPort?

Windows for business | Windows Server | Networking | Software-defined networking

1 answer

  1. Anonymous
    2025-03-28T07:22:50.16+00:00

    Hello,

    Thank you for posting in Q&A forum.

    The link speed displayed depends on the load balancing algorithm used. When using the Hyper-V Port load balancing algorithm, a vNIC is affinitized to a single pNIC, and the link speed of the vNIC is inherited from the pNIC it is affinitized to, which is why it shows only 1 Gbps.
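
    As a side note, on a SET team the vNIC-to-pNIC affinity can also be inspected or pinned explicitly with the team-mapping cmdlets. A minimal sketch, assuming the management-OS vNIC kept the default name "vSwitch" and using an adapter name from the question:

    # List any explicit vNIC -> pNIC mappings for host (management OS) vNICs
    Get-VMNetworkAdapterTeamMapping -ManagementOS

    # Optionally pin the host vNIC to one specific team member
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "vSwitch" -PhysicalNetAdapterName "SLOT 6 Port 1"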

    When using the Dynamic load balancing algorithm, the link speed of the team is shown as the cumulative link speed of the team member NICs. Therefore, with 4x 1 Gbps connections, it shows a link speed of 4 Gbps.
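
    To sanity-check that expectation, the member NIC speeds can simply be summed; a small sketch using the adapter names from the question (the wildcard is assumed to match only the team members):

    # Sum the Speed (bits per second) of the team member NICs that are up
    $members = Get-NetAdapter -Name "SLOT 6 Port *" | Where-Object Status -eq 'Up'
    "{0} member NICs, {1} Gbps aggregate" -f $members.Count, (($members | Measure-Object -Property Speed -Sum).Sum / 1e9)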

    In the example below, a SET team (named S2DSwitch) was created from the physical NICs HPE Ethernet 10/25Gb 2-port 622FLR-SFP28 Converged Network Adapter and HPE Ethernet 10/25Gb 2-port 622FLR-SFP28 Converged Network Adapter #2 (as you can see in the output below).

    In the case of the Dynamic load balancing algorithm, the link speed of the team is shown as the cumulative link speed of the team member NICs. In this example there are two member NICs, each with a 25 Gbps link speed, so the team interface shows 2 x 25 = 50 Gbps.

    [Screenshot: Get-NetAdapter output showing the S2DSwitch team interface at 50 Gbps]

    In the case of the Hyper-V Port load balancing algorithm, a vNIC is affinitized to a single pNIC, and the link speed of the vNIC is inherited from the pNIC it is affinitized to.

    [Screenshot: Get-NetAdapter output showing the vNIC inheriting the link speed of a single 25 Gbps pNIC]

    In LBFO, the default load balancing algorithm is Dynamic, hence the 20 Gbps reported. That does not mean we advise changing the load balancing algorithm from Hyper-V Port to Dynamic; keep the Hyper-V Port algorithm for Hyper-V workloads.

    I hope the information above is helpful.

    Best regards

    Zunhui

    ============================================

    If the Answer is helpful, please click "Accept Answer" and upvote it.

