Linux VM DPDK test error "hn_vf_add(): RNDIS reports VF but device not found, retrying"

Jimmy 0 Reputation points
2025-01-28T00:34:26.7133333+00:00

I built a customized Linux VM. When I run a DPDK test with dpdk-testpmd, it shows these errors:

"hn_vf_attach(): Couldn't find port for VF

hn_vf_add(): RNDIS reports VF but device not found, retrying"

The full log is below.

```
app/dpdk-testpmd -l 1-3 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2

EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1016) device: deed:00:02.0 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=22 status=0 syndrome=0
mlx5_net: No available register for sampler.
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
mlx5_common: Key "mac" is unknown for the provided classes.
EAL: Requested device deed:00:02.0 cannot be used
EAL: Bus (pci) probe failed.
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
TELEMETRY: No legacy callbacks, legacy socket not created
Set txonly packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
Port 0: 00:0D:3A:34:69:DA
Checking link statuses...
Done
No commandline core given, start packet forwarding
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
txonly packet forwarding packets/burst=32
packet len=64 - nb packet segments=1
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
  RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0  wthresh=0
  RX Offloads=0x0
TX queue: 0
  TX desc=128 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0  wthresh=0
  TX offloads=0x0 - TX RS bit threshold=0
Port statistics ====================================
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0 Rx-bps: 0
Tx-pps: 0 Tx-bps: 0
############################################################################
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
hn_vf_attach(): Couldn't find port for VF
hn_vf_add(): RNDIS reports VF but device not found, retrying
```
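The repeated `hn_vf_attach(): Couldn't find port for VF` messages follow directly from the earlier probe failure: mlx5 rejects the device arguments (`Key "mac" is unknown for the provided classes`), so the VF port that the netvsc PMD keeps waiting for never comes up. The `mac=` key is a devarg of the `net_vdev_netvsc` virtual driver, not of the mlx5 PCI device, so if `$BUS_INFO` expands to a PCI address the argument lands on the wrong device. A hedged sketch of the invocation form described in DPDK's vdev_netvsc guide, not a verified fix for this VM (`$MANA_MAC` is taken from the original command; the rest is an assumption):

```shell
# Sketch per DPDK's vdev_netvsc documentation: name the vdev
# "net_vdev_netvsc0" and let the driver locate the synthetic
# device and its VF by MAC address itself.
app/dpdk-testpmd -l 1-3 --vdev="net_vdev_netvsc0,mac=$MANA_MAC" -- \
    --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats-period 2
```

The guide also accepts `iface=<synthetic netdev name>` in place of `mac=` as a way to select the interface.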

The dmesg log:

```
[ 6.151804] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0: link becomes ready
[ 8.609137] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.610859] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.612864] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.613831] mlx5_core 75d3:00:02.0 enP30163s1: Disabling LRO, not supported in legacy RQ
[ 8.821302] loop0: detected capacity change from 0 to 8
[ 8.842848] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.844700] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.847495] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 8.849066] mlx5_core 80bc:00:02.0 enP32956s2: Disabling LRO, not supported in legacy RQ
[ 9.035447] kauditd_printk_skb: 3 callbacks suppressed
[ 9.035451] audit: type=1400 audit(1738022329.631:14): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1035 comm="apparmor_parser"
[ 9.064196] audit: type=1400 audit(1738022329.659:15): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1035 comm="apparmor_parser"
[ 9.067151] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.068993] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.071117] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 9.072348] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 10.017708] fbcon: Taking over console
[ 10.017853] Console: switching to colour frame buffer device 128x48
[ 51.026576] hv_balloon: Max. dynamic memory size: 32768 MB
[ 534.610255] RPC: Registered named UNIX socket transport module.
[ 534.610260] RPC: Registered udp transport module.
[ 534.610261] RPC: Registered tcp transport module.
[ 534.610261] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 535.082260] RPC: Registered rdma transport module.
[ 535.082264] RPC: Registered rdma backchannel transport module.
[ 1115.762368] hv_netvsc 000d3a34-69da-000d-3a34-69da000d3a34 enp2s2: Data path switched from VF: enP57069s3
[ 1115.825184] hv_vmbus: registering driver uio_hv_generic
[ 1115.825710] hv_netvsc 000d3a34-69da-000d-3a34-69da000d3a34 enp2s2: VF unregistering: enP57069s3
[ 1115.825719] mlx5_core deed:00:02.0 enP57069s3: Disabling LRO, not supported in legacy RQ
[ 1164.343188] mlx5_core deed:00:02.0 enP57069s3: Link up
```
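The dmesg lines `registering driver uio_hv_generic`, `Data path switched from VF`, and `VF unregistering: enP57069s3` correspond to the preparation steps in the DPDK netvsc PMD guide: the synthetic (hv_netvsc) device is rebound to `uio_hv_generic` before testpmd starts, while the mlx5 VF stays on its kernel driver. For comparison, a sketch of those documented steps, assuming the synthetic interface is `eth1` (the interface name is an assumption; the network-device class GUID is the fixed value from the guide):

```shell
# Sketch following the DPDK netvsc PMD guide (run as root).
modprobe uio_hv_generic
NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"              # VMBus network device class GUID
DEV_UUID=$(basename $(readlink /sys/class/net/eth1/device))  # assumed interface name
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind
```

If this rebinding already happened (as the `uio_hv_generic` line suggests), the remaining problem is the VF probe itself: the testpmd line `EAL: Requested device deed:00:02.0 cannot be used` shows the mlx5 VF never becomes a DPDK port, which is exactly what leaves `hn_vf_add()` retrying.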







Azure Virtual Machines