The Likely Culprit: The Intel I219-V Driver

From my experience, this is a common problem. The Intel I219-V is a very common integrated NIC, but it's infamous for its driver's built-in VLAN filtering/acceleration features interfering with the Hyper-V Virtual Switch.
When you create an external vSwitch, Hyper-V takes over the physical NIC. The NIC's driver, despite being current, might be "helpfully" stripping or filtering the VLAN tags before the Hyper-V switch can handle the traffic correctly for your trunk.
The Fix: Disable Hardware Offloading
You need to disable the NIC's own filtering capabilities so the Hyper-V virtual switch handles all the tagging. This is usually done in the device's Advanced Properties or via the Windows Registry.
Device Manager Method:
Go to Device Manager > Network Adapters > Intel(R) Ethernet Connection (2) (or whatever your I219-V is named).
Right-click and choose Properties > Advanced tab.
Look for properties like:
- VLAN ID (should be 0 or "None").
- VLAN Filtering (or VLAN Tagging): try setting this to Disabled or 0.
- Priority & VLAN or QoS (try disabling this, too; a PowerShell equivalent is sketched below).
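If you'd rather script it, the same advanced properties are exposed through the NetAdapter module. A minimal sketch, assuming your adapter is named "Ethernet" and the driver exposes a "Priority & VLAN" property (display names and values vary by driver version, so list them first and match what you actually see):

```powershell
# List adapters, then inspect the advanced properties the driver exposes
Get-NetAdapter
Get-NetAdapterAdvancedProperty -Name "Ethernet" | Format-Table DisplayName, DisplayValue

# On many Intel drivers the relevant property is called "Priority & VLAN";
# use the exact DisplayName/DisplayValue strings from the output above
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Priority & VLAN" -DisplayValue "Priority & VLAN Disabled"
```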
Registry Method (More Technical): If the Device Manager option isn't available, you might need to directly edit the registry, setting the VlanFiltering value to 0 for that specific adapter's key. This is a common fix for Intel NICs in Hyper-V environments.
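Here's a sketch of that registry edit in PowerShell, assuming the value is named VlanFiltering as described above (some driver versions use a different name, so inspect the key first; run elevated, back up the key, and reboot or disable/re-enable the adapter afterwards):

```powershell
# {4d36e972-...} is the standard network adapter device class key
$class = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}'
Get-ChildItem $class -ErrorAction SilentlyContinue | ForEach-Object {
    $props = Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue
    if ($props.DriverDesc -like '*I219-V*') {
        # Intel driver toggles are typically REG_SZ strings, hence '0' rather than 0
        Set-ItemProperty -Path $_.PSPath -Name 'VlanFiltering' -Value '0'
    }
}
```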
Your PowerShell Command Is Correct, but Needs Confirmation
The PowerShell command you are using is the correct approach for configuring a trunk port on the Hyper-V switch:
```powershell
Add-VMNetworkAdapter -ManagementOS -Name "vTrunk" -SwitchName "VLAN-vSwitch" -Passthru | Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList A,B,C -NativeVlanId X
```
Key Verification Steps:
- VM Network Adapter: Double-check that you are applying this trunk setting to the VM's network adapter within the Windows 11 Hyper-V layer, not to the Management OS adapter. For a VM (note that -Trunk requires both -AllowedVlanIdList and -NativeVlanId): `Set-VMNetworkAdapterVlan -VMName "ESXi-VM" -VMNetworkAdapterName "Network Adapter" -Trunk -AllowedVlanIdList A,B,C -NativeVlanId X`
- Host OS Testing: If you are connecting the host OS to the trunk for testing, use the original -ManagementOS command you posted.
- Native VLAN: Ensure the NativeVlanId X you set matches the untagged VLAN on your physical switch port, or set it to 0 if you don't need a native VLAN.
- Physical Switch: The switch port must also be configured for 802.1Q trunking and allow the specified VLAN IDs (A, B, C).
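Either way, you can verify the result from PowerShell: OperationMode should read Trunk and the allowed list should match your VLANs (the VM and adapter names below match the examples above):

```powershell
# Check the trunk configuration on the VM's adapter and on the host adapter
Get-VMNetworkAdapterVlan -VMName "ESXi-VM"
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vTrunk"
```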
Alternative NIC Suggestion (A Dedicated PCIe Card)
If the Intel I219-V simply refuses to cooperate, even with offloading disabled, a separate PCIe NIC is the reliable solution.
Intel I350-based or Broadcom/QLogic chips are typically preferable, as they have historically offered better Hyper-V compatibility in this specific trunking scenario.
Recommendation: Look for a PCIe card using the Intel I350 chipset (often available in dual- or quad-port configurations). Their drivers generally have fewer conflicts with the Windows Hyper-V stack, and when conflicts do arise, the official Intel drivers usually provide a clearer "VLAN Filtering" option to disable.
Give the driver offloading fix a shot first, as it's the most likely software solution to your recurring problem! However, I do love a good nested virtualisation setup, so let me know if you need further help.