We are trying to run VPP on Ubuntu 22.04 with worker threads over Hyper-V.
We don't have a hardware NIC with DPDK support yet, nor SR-IOV, so we are using the paravirtualized Hyper-V NIC through DPDK's Netvsc PMD.
We have tested it on an HP server and an HP laptop, and both fail.
The setup can be as simple as just loading the DPDK and ping plugins.
If we comment out the cpu section of startup.conf (i.e., we configure VPP to use a single thread), it works.
As soon as we add one worker and send a burst of ICMP Echo Requests to one of the interfaces, VPP crashes:
cpu
{
main-core 1
workers 1
}
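For reference, the rest of startup.conf is essentially the defaults plus the plugin list below; this is a sketch (paths are the stock defaults, and the dpdk device configuration is omitted), not our exact file:
unix
{
nodaemon
log /var/log/vpp/vpp.log
cli-listen /run/vpp/cli.sock
}
plugins
{
plugin default { disable }
plugin dpdk_plugin.so { enable }
plugin ping_plugin.so { enable }
}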
The core dump always points to a DPDK spinlock accessed by the Netvsc PMD:
Thread 3 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffa7d2a640 (LWP 1296)]
0x00007fffb0190710 in rte_spinlock_trylock (sl=0x20) at ../src-dpdk/lib/eal/x86/include/rte_spinlock.h:63
63 asm volatile (
(gdb) bt
#0 0x00007fffb0190710 in rte_spinlock_trylock (sl=0x20) at ../src-dpdk/lib/eal/x86/include/rte_spinlock.h:63
#1 0x00007fffb01bd531 in hn_process_events (hv=0xac03a7bc0, queue_id=1, tx_limit=0) at ../src-dpdk/drivers/net/netvsc/hn_rxtx.c:1075
#2 0x00007fffb01c55d3 in hn_xmit_pkts (ptxq=0xac08a5040, tx_pkts=0x7fffb7b32e00, nb_pkts=1) at ../src-dpdk/drivers/net/netvsc/hn_rxtx.c:1497
#3 0x00007fffb0b5bbf2 in rte_eth_tx_burst (port_id=0, queue_id=1, tx_pkts=0x7fffb7b32e00, nb_pkts=1)
    at /home/ubuntu/vpp/build-root/install-vpp_debug-native/external/include/rte_ethdev.h:6320
#4 0x00007fffb0b57799 in tx_burst_vector_internal (vm=0x7fffb7af8b80, xd=0x7fffb7b37c40, mb=0x7fffb7b32e00, n_left=1, queue_id=1, is_shared=0 '\000')
    at /home/ubuntu/vpp/src/plugins/dpdk/device/device.c:173
#5 0x00007fffb0b56ac2 in dpdk_device_class_tx_fn_hsw (vm=0x7fffb7af8b80, node=0x7fffb7b54540, f=0x7fffb7bcfdc0)
    at /home/ubuntu/vpp/src/plugins/dpdk/device/device.c:423
#6 0x00007ffff7062a32 in dispatch_node (vm=0x7fffb7af8b80, node=0x7fffb7b54540, type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING,
    frame=0x7fffb7bcfdc0, last_time_stamp=29933360537300) at /home/ubuntu/vpp/src/vlib/main.c:960
#7 0x00007ffff7063452 in dispatch_pending_node (vm=0x7fffb7af8b80, pending_frame_index=9, last_time_stamp=29933360537300)
    at /home/ubuntu/vpp/src/vlib/main.c:1119
#8 0x00007ffff705e7d2 in vlib_main_or_worker_loop (vm=0x7fffb7af8b80, is_main=0) at /home/ubuntu/vpp/src/vlib/main.c:1608
#9 0x00007ffff705ded7 in vlib_worker_loop (vm=0x7fffb7af8b80) at /home/ubuntu/vpp/src/vlib/main.c:1741
#10 0x00007ffff709b050 in vlib_worker_thread_fn (arg=0x7fffb865d600) at /home/ubuntu/vpp/src/vlib/threads.c:1604
#11 0x00007ffff7096236 in vlib_worker_thread_bootstrap_fn (arg=0x7fffb865d600) at /home/ubuntu/vpp/src/vlib/threads.c:418
#12 0x00007ffff6c86ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#13 0x00007ffff6d18850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
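For what it's worth, the faulting address (sl=0x20) looks consistent with the lock being taken through a NULL per-queue pointer, since the spinlock lives at a small offset inside the queue structure. The following gdb commands against the core should confirm or rule that out; the field names are taken from the DPDK netvsc driver sources and may differ between DPDK versions:
(gdb) frame 1
(gdb) print queue_id
(gdb) print hv->primary
(gdb) print rte_eth_devices[hv->port_id].data->rx_queues[queue_id]
If the last print comes back NULL while hv->primary looks valid, that would suggest the worker's TX queue id (queue_id=1) is being used to look up an RX queue that was never configured.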