Good day,
Windows Server 2019, 4 NICs (Dell R710, QLogic BCM5709C NICs x 4, D-Link managed switch).
I've been using NIC teaming for many years with success on a similar 2012 R2 system, one hardware generation older. I initially had some of the "network startup woes" there, where everything was fine until a reboot but then the team wouldn't come online; we resolved that fairly quickly, by adding a service dependency somewhere along the way, as I recall.
This new system has been running as a lab machine, but I need the bandwidth, and I'd really like teaming to work here too!
I've been playing with a few Linux VM systems, the last of which was ProxMox, which took the LACP config like a pro: I bonded the 4 NICs to one IP, plugged them into the 4 switch ports configured for 802.3ad (LACP), and boom, the 4-gig connection worked like a champ.
I've now installed a headless 2019 to run Hyper-V on. The initial NIC config was direct and automatic, but once I created the team, added the 4 NICs to it, and moved the cables over to the 4 ports they should be in, no joy.
I get an Up status for the 4 ports, or Disconnected if I pull the cables.
The team originally said "cable disconnected", but now that I've sent a few enable/disable commands to try and "tickle" it, its status is just blank.
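For reference, this is roughly how I've been checking things and doing the "tickle" (the team and adapter names below are placeholders, not my actual ones):

    # Overall team state; Status should read Up once LACP negotiates
    Get-NetLbfoTeam -Name "Team1" | Format-List

    # Per-member state inside the team
    Get-NetLbfoTeamMember -Team "Team1" | Format-List

    # The enable/disable "tickle" on one member NIC
    Disable-NetAdapter -Name "NIC1" -Confirm:$false
    Enable-NetAdapter -Name "NIC1"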
My team config is set to: the 4 NIC names as members, LACP mode, Dynamic load balancing, and the Fast LACP timer.
(Those also happen to be the same settings the old 2012 R2 box has been running on for years, except Fast, which seems to be a new setting since 2012 R2.)
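The creation commands were along these lines (again, placeholder names):

    # Create the LACP team from the 4 physical NICs
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic -Confirm:$false

    # Set the LACP timer to Fast (the setting that didn't exist on 2012 R2)
    Set-NetLbfoTeam -Name "Team1" -LacpTimer Fast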
I'm doing this via PowerShell at the console now, of course, so I'm open to any commands, tweaks, etc.
The only other thing I know ProxMox was setting was the hash type, and we were using the default of layer 2/3.
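If the hash matters on the Windows side, my assumption is that the closest LBFO equivalent of that layer 2/3 hash would be address-based load balancing, something like:

    # My assumption: IPAddresses is the closest LBFO analog of a layer 2/3 hash
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses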
Open to any thoughts or ideas, or things I can try. :-)
with thanks,
Andrew