Getting started with DPDK and ConnectX-5 100Gb/s

Version 1

    This post provides detailed steps for people who are not familiar with Mellanox cards and DPDK to set up the ConnectX-5 and use it with DPDK. In this tutorial, we are using the Model No. CX555A, which supports InfiniBand (IB). Steps related to the IB setup should be skipped if your card is an Ethernet card.

     

    We are using Ubuntu 16.04.2 LTS for this setup.

     

    References

    Mellanox DPDK Quick Start Guide Rev 16.11_2.3

    Mellanox Firmware Tools (MFT) User Manual Rev 2.9

    Mellanox OFED for Linux User Manual Rev 2.2-1.0.1

    ConnectX®-5 VPI Single and Dual Port QSFP28 Adapter Card User Manual Rev 1.1

     

     

    Setup

    This section shows how to set up the cards installed on a machine.

    Checking the Hardware

    In this example, two ConnectX-5 NICs are installed at the PCI addresses 88:00.0 and 89:00.0:

    node1:~$ lspci -v | grep Mellanox

    88:00.0 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-5]

            Subsystem: Mellanox Technologies MT28800 Family [ConnectX-5]

    89:00.0 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-5]

            Subsystem: Mellanox Technologies MT28800 Family [ConnectX-5]

    Installing Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED)

     

    MLNX_OFED is available for download from the Mellanox website; you can download it directly here.

    DO NOT use wget to download it directly to the machine, as you need to accept the license agreement first. If you wget it, you will get an HTML page instead of the tgz file.
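If you are unsure whether a copy you already have is a valid archive, a quick check along these lines tells a real tgz apart from a saved HTML page (a sketch; the helper name check_ofed_archive is ours):

```shell
# Hypothetical helper: gzip -t tests archive integrity without extracting.
# A failed download saved as HTML will not pass the gzip test.
check_ofed_archive() {
    if gzip -t "$1" 2>/dev/null; then
        echo "OK: $1 is a gzip archive"
    else
        echo "BAD: $1 is not gzip data (probably a saved HTML page)"
    fi
}
```

Run it against the file you downloaded, e.g. check_ofed_archive MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64.tgz.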

     

    Untar the OFED:

    node1:~$ tar -xvf MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64.tgz

    It should look like the following:

    node1:~$ cd MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64/

    node1:~/MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64$ ls

    common.pl                       DEBS    docs     mlnx_add_kernel_support.sh  RPM-GPG-KEY-Mellanox  uninstall.sh

    create_mlnx_ofed_installers.pl  distro  LICENSE  mlnxofedinstall             src

    Then run the install script and answer yes to confirm the installation:

    node1:~/MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64$ sudo ./mlnxofedinstall

    [sudo] password for antd:

    Log: /tmp/ofed.build.log

    Logs dir: /tmp/MLNX_OFED_LINUX.16524.logs

     

    Below is the list of MLNX_OFED_LINUX packages that you have chosen

    (some may have been added by the installer due to package dependencies):

    ...

    ...

    Do you want to continue?[y/N]:y

     

    Checking SW Requirements...

    One or more required packages for installing MLNX_OFED_LINUX are missing.

    Attempting to install the following missing packages:

    autoconf graphviz pkg-config quilt gfortran dpatch autotools-dev m4 debhelper tk bison python-libxml2 chrpath swig libltdl-dev flex automake libgfortran3 tcl

    Removing old packages...

    Installing new packages

    ...

    ...

    Please reboot your system for the changes to take effect.

    Device (88:00.0):

            88:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

            Link Width: x16

            PCI Link Speed: 8GT/s

     

    Device (89:00.0):

            89:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

            Link Width: x16

            PCI Link Speed: 8GT/s

     

    Installation passed successfully

    To load the new driver, run:

    /etc/init.d/openibd restart

     

     

    Reboot your system for the changes to take effect.

     

    Configuring the NICs

     

    Using the Mellanox Software Tools (MST), check that the cards are detected:

    node1:~$ sudo mst status -v

    MST modules:

    ------------

        MST PCI module is not loaded

        MST PCI configuration module is not loaded

    PCI devices:

    ------------

    DEVICE_TYPE             MST      PCI       RDMA            NET                       NUMA

    ConnectX5(rev:0)        NA       88:00.0   mlx5_0          net-ib0                   1

     

    ConnectX5(rev:0)        NA       89:00.0   mlx5_1          net-ib1                   1

     

    By default, the cards are in IB mode. You can check that using the following command:

    node1:~$ sudo mlxconfig -d mlx5_0 q  | grep -i type

    Device type:    ConnectX5

             LINK_TYPE_P1                        IB(1)

     

    The link type is 1 (IB). Change it to 2 (Ethernet) using the following command and answer yes to apply the configuration:

     

    node1:~$ sudo mlxconfig -d mlx5_0 set LINK_TYPE_P1=2

     

    Device #1:

    ----------

     

    Device type:    ConnectX5

    PCI device:     mlx5_0

     

    Configurations:                              Next Boot       New

             LINK_TYPE_P1                        IB(1)           ETH(2)

     

    Apply new Configuration? ? (y/n) [n] : y

    Applying... Done!

    -I- Please reboot machine to load new configurations.

    As we have two cards installed in this machine, the second card also needs to be configured as Ethernet; we applied the same command to the second device, mlx5_1.
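Both changes can be scripted. The loop below prints the command for each device as a dry run (a sketch, assuming the device names from the mst status output above, and assuming mlxconfig's -y flag is available to skip the confirmation prompt):

```shell
# Dry run: print the command for each device; remove the echo to execute.
# Device names come from "mst status -v"; -y answers the
# "Apply new Configuration?" prompt automatically (assumed available).
for dev in mlx5_0 mlx5_1; do
    echo sudo mlxconfig -y -d "$dev" set LINK_TYPE_P1=2
done
```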

     

    Reboot the machine to load new configurations.

     

    After rebooting,

     

    Start MST:

    node1:~$ sudo mst start

    Starting MST (Mellanox Software Tools) driver set

    Loading MST PCI module - Success

    Loading MST PCI configuration module - Success

    Create devices

    Unloading MST PCI module (unused) - Success

     

    Check that the configuration was applied:

    node1:~$ sudo mst status -v

    MST modules:

    ------------

        MST PCI module is not loaded

        MST PCI configuration module loaded

    PCI devices:

    ------------

    DEVICE_TYPE             MST                           PCI       RDMA            NET                       NUMA

    ConnectX5(rev:0)        /dev/mst/mt4119_pciconf1      89:00.0   mlx5_1          net-enp137s0              1

     

    ConnectX5(rev:0)        /dev/mst/mt4119_pciconf0      88:00.0   mlx5_0          net-enp136s0              1

     

     

    So far, the NICs are ready to use as regular Ethernet cards.

     

    Installing Data Plane Development Kit (DPDK)

    This section describes how to install DPDK and a packet generator, and how to use them with the ConnectX-5.

    For this tutorial, we are using version 16.11 of DPDK (the Mellanox MLNX_DPDK 16.11_3.0 package). It can be downloaded directly from the Mellanox website using this link.

     

    Download and untar DPDK in /opt or your favorite folder. It should look like the following:

    node1:/opt/MLNX_DPDK_16.11_3.0$ ls

    app         config  drivers   GNUmakefile  LICENSE.GPL   MAINTAINERS  mk   README   tools

    buildtools  doc     examples  lib          LICENSE.LGPL  Makefile     pkg  scripts

     

    Edit the file ./config/common_base:

    node1:/opt/MLNX_DPDK_16.11_3.0$ nano ./config/common_base

    Apply the following changes:

     

    ...

    ...

    #

    # Compile burst-oriented Mellanox ConnectX-4 & ConnectX-5 (MLX5) PMD

    #

    CONFIG_RTE_LIBRTE_MLX5_PMD=y

    CONFIG_RTE_LIBRTE_MLX5_DEBUG=y

    CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE=8

    ...

    ...
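The same edits can also be made non-interactively. Below is a sketch with sed; the helper name enable_mlx5_pmd is ours, and it assumes the two flags start at their 16.11 defaults of n (TX_MP_CACHE=8 is already the shipped default, so it is left untouched):

```shell
# Hypothetical helper: flip the two MLX5 flags from n to y in a config file.
enable_mlx5_pmd() {
    sed -i \
        -e 's/^CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' \
        -e 's/^CONFIG_RTE_LIBRTE_MLX5_DEBUG=n/CONFIG_RTE_LIBRTE_MLX5_DEBUG=y/' \
        "$1"
}
```

Run enable_mlx5_pmd ./config/common_base from the DPDK root, then verify with grep MLX5 ./config/common_base.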

     

    Build and configure DPDK

    We will use the setup script provided with DPDK to build it, configure HugePages, and test it.

     

    Run the setup script as follows:

    node1:/opt/MLNX_DPDK_16.11_3.0$ sudo ./tools/dpdk-setup.sh

     

    Step 1: Select the DPDK environment to build

     

    Choose the option [13] x86_64-native-linuxapp-gcc, or the option that fits your hardware, to compile DPDK.

     

    Step 2: Setup linuxapp environment

     

    Choose the option [20] Setup hugepage mappings for NUMA systems, or the non-NUMA option, according to your hardware.

     

    As we are planning to use several VMs later on (not included in this tutorial), we are reserving 2048 pages of 2MB on each NUMA node. Note that 64 pages of 2MB are enough if you are only testing.

    The configuration for this step should look like the following:

    ...

    ...

    Option: 20

     

    Removing currently reserved hugepages

    Unmounting /mnt/huge and removing directory

     

      Input the number of 2048kB hugepages for each node

      Example: to have 128MB of hugepages available per node in a 2MB huge page system,

      enter '64' to reserve 64 * 2MB pages on each node

    Number of pages for node0: 2048

    Number of pages for node1: 2048

    Reserving hugepages

    Creating /mnt/huge and mounting as hugetlbfs

     

    Press enter to continue ...

     
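As a sanity check on the sizing above, the snippet below computes how much memory a given reservation pins per node; the sysfs path in the comment is where the setup script writes the page count:

```shell
# Each node's reservation lands in
#   /sys/devices/system/node/node<N>/hugepages/hugepages-2048kB/nr_hugepages
# Memory pinned per node = pages * 2MB.
pages=2048
echo "$((pages * 2)) MB pinned per NUMA node"   # prints: 4096 MB pinned per NUMA node
```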

    We don't need to set up IGB UIO because we are using Mellanox NICs. Details are provided in this thread on the Mellanox forum.

    Step 3: Run test application for linuxapp environment

    Now we can run a test by choosing the option [26] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd). We are using twelve cores (bitmask fff) in this example; you can use fewer depending on your hardware.

    Option: 26

     

     

      Enter hex bitmask of cores to execute testpmd app on

      Example: to execute app on cores 0 to 7, enter 0xff

    bitmask: fff
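The bitmask fff above selects cores 0 through 11. For a different core count, the mask covering the first N cores can be computed like this (a sketch):

```shell
# Hex bitmask covering cores 0..N-1 is (1 << N) - 1.
cores=12
printf '0x%x\n' $(( (1 << cores) - 1 ))   # prints 0xfff for 12 cores
```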

     

    Installing packet generator

     

    We are installing Pktgen in /opt for this tutorial.

     

    To get the sources, clone the repository using the following command:

    node1:/opt$ git clone git://dpdk.org/apps/pktgen-dpdk

     

    Pktgen requires libpcap, which can be installed using the following command:

    node1:/opt/pktgen-dpdk$ sudo apt-get install -y libpcap-dev

    Environment variables also need to be exported; they should point to the DPDK directory and the build target. Example:

    node1:/opt/pktgen-dpdk$ export RTE_SDK=/opt/MLNX_DPDK_16.11_3.0/

    node1:/opt/pktgen-dpdk$ export RTE_TARGET=x86_64-native-linuxapp-gcc

    At this stage, Pktgen can be built with a simple make command, as follows:

    node1:/opt/pktgen-dpdk$ make

    Look for the device that you want to use:

    node1:/opt/pktgen-dpdk$ sudo lspci | grep -i mellanox

    88:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]

    89:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]

     

    Run Pktgen for that device:

    node1:/opt/pktgen-dpdk$ sudo ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x1f -n 3 -w 89:00.0 -- -T -P -m "[1:3].0"

    Then it starts, and you can begin generating packets using the str command. In the -m "[1:3].0" argument above, core 1 handles RX and core 3 handles TX for port 0.

    / Ports 0-0 of 1   <Main Page>  Copyright (c) <2010-2016>, Intel Corporation
      Flags:Port      :   P--------------:0
    Link State        :      <UP-100000-FD>     ----TotalRate----
    Pkts/s Max/Rx     :          18928990/0            18928990/0
           Max/Tx     :   19622976/19612225     19622976/19612225
    MBits/s Rx/Tx     :             0/13179               0/13179
    Broadcast         :                   0
    Multicast         :                   0
      64 Bytes        :           869048470
      65-127          :                   0
      128-255         :                   0
      256-511         :                   0
      512-1023        :                   0
      1024-1518       :                   0
    Runts/Jumbos      :                 0/0
    Errors Rx/Tx      :                 0/0
    Total Rx Pkts     :           869048470
          Tx Pkts     :           151800640
          Rx MBs      :              584000
          Tx MBs      :              102010
    ARP/ICMP Pkts     :                 0/0
                      :
    Pattern Type      :             abcd...
    Tx Count/% Rate   :       Forever /100%
    PktSize/Tx Burst  :           64 /   32
    Src/Dest Port     :         1234 / 5678
    Pkt Type:VLAN ID  :     IPv4 / TCP:0001
    Dst  IP Address   :         192.168.1.1
    Src  IP Address   :      192.168.0.1/24
    Dst MAC Address   :   00:00:00:00:00:00
    Src MAC Address   :   24:8a:07:a8:09:34
    VendID/PCI Addr   :   15b3:1017/05:00.0

    -- Pktgen Ver: 3.3.4 (DPDK 16.11.0)  Powered by Intel® DPDK -------------------



    ** Version: DPDK 16.11.0, Command Line Interface with timers
    Pktgen:/> str
    Pktgen:/>