
    mlx5 on Ubuntu 18.04 (kernel 4.15.0-36) drops all RDMA packets

    yaolin

      We are not using the Mellanox OFED driver; we are using the standard in-box Linux driver. In order to capture RoCEv2 traffic, we use port mirroring on a switch to copy all traffic between the host and the target to a monitoring PC. This PC has a ConnectX-4 RNIC. It was running Ubuntu 17.10 with the 4.13 kernel, and I was able to run Wireshark and capture the RoCEv2 traffic.
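
      For reference, the capture itself is just a plain Wireshark/tcpdump capture of the mirrored RoCEv2 traffic, which runs over UDP destination port 4791; something like this (the interface and output file names are only examples):

      $ sudo tcpdump -i enp21s0f0 -s 0 -w /tmp/roce_mirror.pcap 'udp dst port 4791'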

       

      I recently upgraded that PC to Ubuntu 18.04 with the 4.15 kernel. After that, I can't capture RoCE traffic any more. I debugged a bit and got these counters:

       

      yao@Host2:~$ ethtool -S enp21s0f0 | grep rdma

      rx_vport_rdma_unicast_packets: 3112973874

      rx_vport_rdma_unicast_bytes: 1434773360288

      tx_vport_rdma_unicast_packets: 362387261

      tx_vport_rdma_unicast_bytes: 27309669154

       

      So the mlx5 driver drops all RDMA packets. Why is it doing that? Is there any configuration to change its behavior back to the old one (from kernel 4.13)? I don't see such a configuration in Mellanox_OFED_Linux_User_Manual_v4_3.pdf.

       

      Here's the driver info in the old kernel:

       

      dotadmin@DavidLenovo:~$ modinfo mlx5_core

      filename:       /lib/modules/4.13.0-46-generic/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko

      version:        5.0-0

      license:        Dual BSD/GPL

      description:    Mellanox Connect-IB, ConnectX-4 core driver

      author:         Eli Cohen <eli@mellanox.com>

      srcversion:     79D72EC6EB494E762310F77

      alias:          pci:v000015B3d0000A2D3sv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000A2D2sv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Csv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Bsv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Asv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001019sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001018sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001017sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001016sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001015sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001014sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001013sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001012sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001011sv*sd*bc*sc*i*

      depends:        devlink,ptp,mlxfw

      intree:         Y

      name:           mlx5_core

      vermagic:       4.13.0-46-generic SMP mod_unload

      signat:         PKCS#7

      signer:        

      sig_key:       

      sig_hashalgo:   md4

      parm:           debug_mask:debug mask: 1 = dump cmd data, 2 = dump cmd exec time, 3 = both. Default=0 (uint)

      parm:           prof_sel:profile selector. Valid range 0 - 2 (uint)

       

       

      Here's the driver info in the new kernel:

       

      yao@Host2:~$ modinfo mlx5_core

      filename:       /lib/modules/4.15.0-36-generic/kernel/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko

      version:        5.0-0

      license:        Dual BSD/GPL

      description:    Mellanox Connect-IB, ConnectX-4 core driver

      author:         Eli Cohen <eli@mellanox.com>

      srcversion:     C271CE9036D77E924A8E038

      alias:          pci:v000015B3d0000A2D3sv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000A2D2sv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Csv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Bsv*sd*bc*sc*i*

      alias:          pci:v000015B3d0000101Asv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001019sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001018sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001017sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001016sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001015sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001014sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001013sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001012sv*sd*bc*sc*i*

      alias:          pci:v000015B3d00001011sv*sd*bc*sc*i*

      depends:        devlink,ptp,mlxfw

      retpoline:      Y

      intree:         Y

      name:           mlx5_core

      vermagic:       4.15.0-36-generic SMP mod_unload

      signat:         PKCS#7

      signer:

      sig_key:

      sig_hashalgo:   md4

      parm:           debug_mask:debug mask: 1 = dump cmd data, 2 = dump cmd exec time, 3 = both. Default=0 (uint)

      parm:           prof_sel:profile selector. Valid range 0 - 2 (uint)

       

      So srcversion is different, even though version is the same.

        • Re: mlx5 on Ubuntu 18.04 (kernel 4.15.0-36) drops all RDMA packets
          yairi
          This message was posted by Yair Ifergan on behalf of Martijn van Breugel

          Hello Yao,

          Thank you for posting your question on the Mellanox Community.

          Based on the information provided, the counters you mention (rx_vport_rdma*/tx_vport_rdma*) do not indicate drops; they are informational counters for the RDMA packets received and transmitted.

          Please use the following link for more information on the mlx5 counters -> https://community.mellanox.com/docs/DOC-2532
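
          If you would like to verify whether the NIC is actually dropping packets, you can also check the generic drop/discard counters on the same interface, for example (the exact counter names can vary between driver versions):

          # ethtool -S enp21s0f0 | grep -iE 'drop|discard|out_of_buffer'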

          In the current upstream kernel, the only validated and tested way to capture RDMA (RoCE) traffic is through the following solution, which uses a Docker container.

          Kindly refer to the following procedure for capturing RDMA packets using a Docker container.

          This container is a simple and fast way to start using RDMA devices to capture and analyze RDMA packets with the familiar tool tcpdump.
          Inside the container, tcpdump is extended to sniff/capture traffic directly through RDMA verbs, making use of the latest Linux kernel services.

          We tested the Docker container on the following setup:


          Installation instructions:

          • Install an OS that is compatible with kernel 4.9 or above.
          • Install an upstream kernel, version 4.9 or later; kernels starting from 4.9 support sniffing RDMA (RoCE) traffic.
            • # yum install docker
            • # docker pull mellanox/tcpdump-rdma
            • # service docker start
            • # docker run -it -v /dev/infiniband:/dev/infiniband -v /tmp/traces:/tmp/traces --net=host --privileged mellanox/tcpdump-rdma bash
          • Install MFT 4.9
          • Install the perftest package
          • Capture RoCE packets with the following:
            • # tcpdump -i mlx5_0 -s 0 -w /tmp/traces/capture1.pcap
          • Run the ib_write_bw test, as below:
            • Server: # ib_write_bw -d mlx5_0 -a -F
            • Client: # ib_write_bw -a -F
          • Open the pcap in Wireshark to verify (a quick command-line check is shown after this list).
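
          Because /tmp/traces is bind-mounted into the container, the capture file is also available on the host; a quick command-line check before opening it in Wireshark could be, for example:

          # tcpdump -r /tmp/traces/capture1.pcap -c 10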

          In case you are using MLNX_OFED, please refer to Section 3.1.16 'Offloaded Traffic Sniffer' of the User Manual ( http://www.mellanox.com/related-docs/prod_software/Mellanox_OFED_Linux_User_Manual_v4_4.pdf ) and the Mellanox Community document -> https://community.mellanox.com/docs/DOC-2416
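
          For reference, the Offloaded Traffic Sniffer is typically enabled through an ethtool private flag on the mlx5 netdev, after which a regular tcpdump on that interface also sees the RoCE traffic; please verify the exact flag name against the manual section above, as it can differ between driver versions. For example:

          # ethtool --set-priv-flags enp21s0f0 sniffer on
          # tcpdump -i enp21s0f0 -s 0 -w /tmp/roce_ofed.pcap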

          Thanks and regards,
          ~Mellanox Technical Support