Hello, everyone!
I've noticed that some of our servers (not all of them, just a few) periodically report abnormally high network interface throughput values, both RX and TX.
I was able to see it both in the Perfmon counters and in Get-NetAdapterStatistics output:
Get-NetAdapterStatistics
Name : VSwitch1
SystemName : HOST-123.domain.local
ReceivedBytes : 14265426347560522450 = 114.123 Ebits
SentBytes : 3613230990169090807 = 28.906 Ebits
Perfmon:
Get-Counter -Counter "\Network Interface\Bytes received/sec"
\\host-123\network interface(broadcom netxtreme e-series advanced dual-port 10gb sfp+ ethernet ocp 3.0 adapter _2)\bytes received/sec : 1.49552927307913E+16 = 119.642 Pbits
Get-Counter -Counter "\Network Interface\Bytes sent/sec"
\\host-123\network interface(broadcom netxtreme e-series advanced dual-port 10gb sfp+ ethernet ocp 3.0 adapter _2)\bytes sent/sec : 6.17142406383789E+15 = 49.371 Pbits
As you can see, the counters work out to petabits per second on 10G NICs, which is obviously impossible.
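For reference, the arithmetic behind the Pbit figures can be reproduced like this (a quick Python sketch; the counter samples are the values quoted above, and the 10 Gbit/s line rate is an assumption based on the adapter model):

```python
# Sanity-check the Perfmon samples quoted above against the 10 Gbit/s line rate.
# LINK_BPS is an assumption for a 10G NIC; the samples come from the post.

LINK_BPS = 10e9  # nominal 10 Gbit/s physical line rate


def to_pbits_per_sec(bytes_per_sec: float) -> float:
    """Convert a Bytes/sec counter sample to petabits per second (1 Pbit = 1e15 bits)."""
    return bytes_per_sec * 8 / 1e15


rx = 1.49552927307913e16  # \Network Interface\Bytes received/sec sample
tx = 6.17142406383789e15  # \Network Interface\Bytes sent/sec sample

print(f"RX: {to_pbits_per_sec(rx):.3f} Pbit/s")
print(f"TX: {to_pbits_per_sec(tx):.3f} Pbit/s")
print(f"RX exceeds the line rate by a factor of {rx * 8 / LINK_BPS:,.0f}")
```

The RX sample alone is roughly seven orders of magnitude above what the link can physically carry, so these are clearly counter glitches rather than real traffic.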
We observe the same picture on another server that has Intel E810 25G NICs.
It is not constant: most of the time the values are normal, but periodically we see these abnormal peaks on the graph.
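While the root cause is unknown, one way to keep the peaks from wrecking the graphs is to discard samples that physically cannot fit the link. A minimal sketch (the function name, headroom factor, and link-speed parameter are hypothetical, not from any monitoring product):

```python
def drop_impossible_samples(samples_bps, link_bps=10e9, headroom=1.05):
    """Return only the Bytes/sec samples that fit within the link's capacity.

    samples_bps: iterable of Bytes/sec counter readings.
    link_bps:    nominal link speed in bits/sec (10 Gbit/s assumed here).
    headroom:    small tolerance for sampling-interval jitter.
    """
    cap_bytes = link_bps * headroom / 8  # maximum plausible Bytes/sec
    return [s for s in samples_bps if s <= cap_bytes]


# Example: two normal samples next to one of the bogus peaks from the post.
readings = [9.8e8, 1.49552927307913e16, 1.1e9]
print(drop_impossible_samples(readings))  # the 1.5e16 outlier is dropped
```

This only hides the symptom on dashboards; it does not explain why the counters misreport in the first place.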
What could possibly cause such an issue? We have seen it on both Broadcom and Intel NICs, so it is hard to blame a single driver. It also affects only a few servers, even though they all run Windows Server 2019. These hosts run Hyper-V, and the NICs are members of a SET team.
Could it be a known bug that has already been fixed in a later update?
Thank you in advance!