We've encountered an issue with a PowerEdge R730xd running Windows Server 2022 Datacenter as our first Hyper-V host: the server crashes and reboots because of logical switch settings applied from System Center Virtual Machine Manager 2022.
The host has four onboard NICs (2 x Intel Gigabit 4P X540/I350 & 2 x Intel 10GB 4P X540/I360).
The two Gigabit ports are teamed and used as the host management NICs, while the two 10GB ports are connected to a switch as trunk ports with LACP enabled.
After we configure the logical switch and assign it to the host, powering on a VM causes the host to blue screen with a bugcheck in vmswitch.sys.
Unless the VM is configured not to restart automatically, the host blue screens and reboots in a continuous loop.
We've ensured the NICs are running the latest firmware and that all other server components are up to date with the latest drivers from Dell, but the issue persists.
Currently, the only way we can get this to work is to configure the LACP team on the local host ourselves and have VMM use that existing team, rather than letting VMM take over and manage the teaming.
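For reference, the local-host workaround looks roughly like the following sketch. The team name, switch name, and adapter names ("NIC3", "NIC4") are placeholders for our environment, not values from VMM:

```powershell
# Create an LACP team on the host from the two 10GB ports
# (adapter names below are examples; check Get-NetAdapter for the real ones)
New-NetLbfoTeam -Name "Team10G" `
    -TeamMembers "NIC3", "NIC4" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the pre-built team, so VMM
# consumes the existing team instead of managing the teaming itself
New-VMSwitch -Name "LogicalSwitch-10G" `
    -NetAdapterName "Team10G" `
    -AllowManagementOS $false
```

Note that this uses LBFO teaming, which is what LACP requires; we'd welcome confirmation on whether this is still the supported pattern on Server 2022.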
I would be interested to know how this should be configured if my approach is incorrect, as this is essentially how we have been configuring it in previous versions of VMM.
Thanks.