
    DPDK with MLX4 VF on Hyper-V VM

    brookletling

      Hello,

      (Not sure if this is the right place to ask; if not, please kindly point me to the right place or person.)

      I have a ConnectX-3 (CX354A) installed on my Windows Server 2016 host, and I have enabled SR-IOV on both the card and the server, following the WinOF user guide.

      On an Ubuntu 18.04 VM created with MS Hyper-V, I can see the Mellanox VF working properly. But when I tried to run testpmd on the VM using the following command:

      ./testpmd -l 0-1 -n 4 --vdev=net_vdev_netvsc0,iface=eth1,force=1 -w 0002:00:02.0 --vdev=net_vdev_netvsc1,iface=eth2,force=1 -w 0003:00:02.0 -- --rxq=2 --txq=2 -i

      I ran into an error:

      PMD: mlx4.c:138: mlx4_dev_start(): 0x562bff05e040: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x7f39ef408780, message: flow rule rejected by device

      The command format was suggested by Microsoft for running DPDK on their AN-enabled (Accelerated Networking) Azure VMs, and it does work on Azure VMs.
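
      For anyone double-checking the setup: the -w PCI addresses can be matched against the Mellanox VFs visible inside the VM, and each synthetic NIC should share a MAC address with its VF counterpart, e.g.:

      # List Mellanox devices (vendor ID 15b3) to confirm the VF PCI addresses
      lspci -d 15b3:
      # Brief view of all interfaces; a synthetic NIC and its VF share a MAC
      ip -br link show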

      Here is the ibv_devinfo output from my VM:

      root@myVM:~/MLNX_OFED_SRC-4.4-1.0.0.0# ibv_devinfo
      hca_id: mlx4_0
          transport:          InfiniBand (0)
          fw_ver:             2.42.5000
          node_guid:          0014:0500:691f:c3fa
          sys_image_guid:     ec0d:9a03:001c:92e3
          vendor_id:          0x02c9
          vendor_part_id:     4100
          hw_ver:             0x0
          board_id:           MT_1090120019
          phys_port_cnt:      1
              port:   1
                  state:          PORT_DOWN (1)
                  max_mtu:        4096 (5)
                  active_mtu:     1024 (3)
                  sm_lid:         0
                  port_lid:       0
                  port_lmc:       0x00
                  link_layer:     Ethernet

      hca_id: mlx4_1
          transport:          InfiniBand (0)
          fw_ver:             2.42.5000
          node_guid:          0014:0500:76e0:b9d1
          sys_image_guid:     ec0d:9a03:001c:92e3
          vendor_id:          0x02c9
          vendor_part_id:     4100
          hw_ver:             0x0
          board_id:           MT_1090120019
          phys_port_cnt:      1
              port:   1
                  state:          PORT_DOWN (1)
                  max_mtu:        4096 (5)
                  active_mtu:     1024 (3)
                  sm_lid:         0
                  port_lid:       0
                  port_lmc:       0x00
                  link_layer:     Ethernet
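
      Note that both VF ports report PORT_DOWN in the snapshot above. For completeness, the paired interfaces can be brought up before retesting; eth1/eth2 below match the iface= names from the testpmd command:

      # Administratively bring up the synthetic interfaces paired with the VFs
      ip link set dev eth1 up
      ip link set dev eth2 up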

      And here is the kernel module info for mlx4_en (output of modinfo mlx4_en):

      filename:   /lib/modules/4.15.0-23-generic/updates/dkms/mlx4_en.ko
      version:    4.4-1.0.0
      license:    Dual BSD/GPL
      description:Mellanox ConnectX HCA Ethernet driver
      author:     Liran Liss, Yevgeny Petrilin
      srcversion: 23E8E7A25194AE68387DC95
      depends:    mlx4_core,mlx_compat,ptp,devlink
      retpoline:  Y
      name:       mlx4_en
      vermagic:   4.15.0-23-generic SMP mod_unload
      parm:       udev_dev_port_dev_id:Work with dev_id or dev_port when supported by the kernel. Range: 0 <= udev_dev_port_dev_id <= 2 (default = 0).
         0: Work with dev_port if supported by the kernel, otherwise work with dev_id.
         1: Work only with dev_id regardless of dev_port support.
         2: Work with both of dev_id and dev_port (if dev_port is supported by the kernel). (int)
      parm:       udp_rss:Enable RSS for incoming UDP traffic or disabled (0) (uint)
      parm:       pfctx:Priority based Flow Control policy on TX[7:0]. Per priority bit mask (uint)
      parm:       pfcrx:Priority based Flow Control policy on RX[7:0]. Per priority bit mask (uint)
      parm:       inline_thold:Threshold for using inline data (range: 17-104, default: 104) (uint)
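
      In case it is relevant to the flow-rule error: the DPDK mlx4 guide says flow steering must be in device-managed mode, controlled by the mlx4_core parameter log_num_mgm_entry_size. I am not sure how this applies to a VF under a Windows host, but the guest-side value can at least be inspected:

      # Show the guest's mlx4_core flow-steering parameter
      # (the DPDK mlx4 guide documents -1 and -7 as DMFS settings)
      cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size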


      One difference I notice between my local VM and the Azure VMs is that the mlx4_en.ko modules are definitely different; Azure seems to be using a specialized build of mlx4_en. Is this the reason why testpmd works on Azure but not on my local Hyper-V VM?
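
      For what it's worth, the driver name and version bound to a given interface can be compared between the Azure VM and my local VM with:

      # Print driver, version, and firmware info for the interface
      ethtool -i eth1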


      If so, how can I get a DPDK-capable driver for MS Hyper-V?
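
      For reference, the OFED build installed in the VM (the MLNX_OFED_SRC-4.4-1.0.0.0 tree visible in the prompt above) can be confirmed with:

      # Print the installed Mellanox OFED version string
      ofed_info -s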


      Thank you!