HowTo Enable Perftest Package for Upstream Kernel

Version 9

    This post describes how to use the perftest package with an upstream kernel and no OFED installation. The package includes several performance benchmarks over verbs (ib_send_bw, ib_write_bw, ib_send_lat, ib_write_lat, etc.).

     


    Setup

    # cat /etc/redhat-release

    CentOS Linux release 7.2.1511 (Core)

     

    # uname -r

    4.8.7
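
    Note: since no OFED stack is installed, RDMA support must come from the upstream kernel itself. Assuming your distribution keeps the kernel config under /boot (the path may differ on your system), a quick sanity check is:

    # grep -E "CONFIG_INFINIBAND|CONFIG_MLX5" /boot/config-$(uname -r)

    In particular, CONFIG_INFINIBAND, CONFIG_INFINIBAND_USER_ACCESS, CONFIG_MLX5_CORE and CONFIG_MLX5_INFINIBAND should be set to y or m.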

     

    Configuration

     

    1. Install the following packages, depending on the OS type:

     

    a. In case of RHEL/CentOS, run:

    # yum install libmlx5 libmlx4 libibverbs libibumad librdmacm librdmacm-utils libibverbs-utils perftest infiniband-diags

    b. In case of Ubuntu, run:

    # apt-get install libmlx4-1 libmlx5-1 ibutils rdmacm-utils libibverbs1 ibverbs-utils perftest infiniband-diags
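
    To confirm that the packages were installed, you can query the package manager; the package names below are taken from the install commands above and may differ slightly between distribution releases:

    # rpm -q perftest libibverbs        (RHEL/CentOS)

    # dpkg -l perftest libibverbs1      (Ubuntu)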

     

    2. Make sure that the following modules are loaded:

    • ib_uverbs
    • ib_core
    • mlx5_ib
    • mlx5_core

     

    Note: For RDMA CM applications (udaddy for example), make sure these modules are also loaded:

    • rdma_ucm
    • rdma_cm
    • ib_ucm      
    • ib_cm
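
    If any of these modules is missing from the lsmod output below, it can be loaded with modprobe (loading mlx5_ib pulls in mlx5_core and ib_core through module dependencies), for example:

    # modprobe mlx5_ib

    # modprobe rdma_ucm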

     

    # lsmod | grep mlx

    mlx5_ib               167936  0

    ib_core               208896  15 ib_iser,ib_cm,rdma_cm,ib_umad,ib_srp,nvme_rdma,ib_isert,ib_uverbs,rpcrdma,ib_ipoib,iw_cm,mlx5_ib,ib_srpt,ib_ucm,rdma_ucm

    mlx5_core             344064  1 mlx5_ib

    ptp                    20480  2 e1000e,mlx5_core

     

    # lsmod | grep rdma

    nvme_rdma              28672  0

    nvme_fabrics           20480  1 nvme_rdma

    nvme_core              45056  2 nvme_fabrics,nvme_rdma

    rpcrdma                90112  0

    rdma_ucm               24576  0

    ib_uverbs              61440  2 ib_ucm,rdma_ucm

    rdma_cm                53248  5 ib_iser,nvme_rdma,ib_isert,rpcrdma,rdma_ucm

    ib_cm                  45056  5 rdma_cm,ib_srp,ib_ipoib,ib_srpt,ib_ucm

    iw_cm                  45056  1 rdma_cm

    sunrpc                344064  8 auth_rpcgss,nfsd,rpcrdma,nfs_acl,lockd

    ib_core               208896  15 ib_iser,ib_cm,rdma_cm,ib_umad,ib_srp,nvme_rdma,ib_isert,ib_uverbs,rpcrdma,ib_ipoib,iw_cm,mlx5_ib,ib_srpt,ib_ucm,rdma_ucm

     

     

    # lsmod | grep ib_

    ib_isert               49152  0

    iscsi_target_mod      294912  1 ib_isert

    ib_iser                49152  0

    libiscsi               57344  1 ib_iser

    scsi_transport_iscsi    98304  2 ib_iser,libiscsi

    ib_srpt                45056  0

    target_core_mod       352256  3 iscsi_target_mod,ib_isert,ib_srpt

    ib_srp                 49152  0

    scsi_transport_srp     24576  1 ib_srp

    ib_ipoib               94208  0

    ib_ucm                 20480  0

    ib_uverbs              61440  2 ib_ucm,rdma_ucm

    ib_umad                24576  0

    rdma_cm                53248  5 ib_iser,nvme_rdma,ib_isert,rpcrdma,rdma_ucm

    ib_cm                  45056  5 rdma_cm,ib_srp,ib_ipoib,ib_srpt,ib_ucm

    ib_core               208896  15 ib_iser,ib_cm,rdma_cm,ib_umad,ib_srp,nvme_rdma,ib_isert,ib_uverbs,rpcrdma,ib_ipoib,iw_cm,mlx5_ib,ib_srpt,ib_ucm,rdma_ucm

     

    3. Check the device name (hca_id) and port.

    In this case, the hca_id is mlx5_0 and the port number is 1.

    # ibv_devinfo

    hca_id: mlx5_1

      transport: InfiniBand (0)

      fw_ver: 14.12.1240

      node_guid: 7cfe:9003:00cb:7603

      sys_image_guid: 7cfe:9003:00cb:7602

      vendor_id: 0x02c9

      vendor_part_id: 4117

      hw_ver: 0x0

      board_id: MT_2420110034

      phys_port_cnt: 1

      port: 1

      state: PORT_DOWN (1)

      max_mtu: 4096 (5)

      active_mtu: 1024 (3)

      sm_lid: 0

      port_lid: 0

      port_lmc: 0x00

      link_layer: Ethernet

     

    hca_id: mlx5_0

      transport: InfiniBand (0)

      fw_ver: 14.12.1240

      node_guid: 7cfe:9003:00cb:7602

      sys_image_guid: 7cfe:9003:00cb:7602

      vendor_id: 0x02c9

      vendor_part_id: 4117

      hw_ver: 0x0

      board_id: MT_2420110034

      phys_port_cnt: 1

      port: 1

      state: PORT_ACTIVE (4)

      max_mtu: 4096 (5)

      active_mtu: 1024 (3)

      sm_lid: 0

      port_lid: 0

      port_lmc: 0x00

      link_layer: Ethernet
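
    To show only the device and port used in this example, ibv_devinfo can be limited with -d and -i:

    # ibv_devinfo -d mlx5_0 -i 1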

     

    4. On one of the hosts, run the server (target), specifying the device (-d), the port (-i) and any other parameters needed.

    #  ib_send_bw -i 1 -d mlx5_0 -F -D 5 --report_gbits

     

    ************************************

    * Waiting for client to connect... *

    ************************************
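
    Note: the link layer in this setup is Ethernet (RoCE), and perftest defaults to GID index 0 (as seen in the client output below). If your configuration requires a specific GID, it can be selected on both server and client with -x; the index used here is only an illustration:

    # ib_send_bw -i 1 -d mlx5_0 -x 3 -F -D 5 --report_gbits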

     

    5. On the other host, run the client in the same way, adding the server's IP address (1.1.1.1 in this example).

    # ib_send_bw -d mlx5_0 -i 1  -F -D 5 --report_gbits 1.1.1.1

    ---------------------------------------------------------------------------------------

                        Send BW Test

    Dual-port       : OFF Device         : mlx5_0

    Number of qps   : 1 Transport type : IB

    Connection type : RC Using SRQ      : OFF

    TX depth        : 128

    CQ Moderation   : 100

    Mtu             : 1024[B]

    Link type       : Ethernet

    Gid index       : 0

    Max inline data : 0[B]

    rdma_cm QPs : OFF

    Data ex. method : Ethernet

    ---------------------------------------------------------------------------------------

    local address: LID 0000 QPN 0x00b1 PSN 0x63274c

    GID: 00:00:00:00:00:00:00:00:00:00:255:255:01:01:01:02

    remote address: LID 0000 QPN 0x00b1 PSN 0xd907b8

    GID: 00:00:00:00:00:00:00:00:00:00:255:255:01:01:01:01

    ---------------------------------------------------------------------------------------

    #bytes     #iterations    BW peak[Gb/sec]    BW average[Gb/sec]   MsgRate[Mpps]

    Warning: measured timestamp frequency 2000.12 differs from nominal 2298.71 MHz

    65536      131100           0.00               22.91     0.043700

    ---------------------------------------------------------------------------------------
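
    The other benchmarks in the package follow the same server/client pattern; for example, a latency test between the same two hosts (same device, port and server address as above) would be:

    Server: # ib_send_lat -d mlx5_0 -i 1 -F

    Client: # ib_send_lat -d mlx5_0 -i 1 -F 1.1.1.1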