
    Unstable ConnectX-3 Ethernet Performance on ESXi 6.5 update 1

    inbusiness

      Step 01. Build an initial network connection over a TCP/IP (lossy) network

       

      Hi!

      I started building the basic Ethernet network that was mentioned previously.

      - See this link: Build a Mellanox Infrastructure on ESXi 6.5U1

       

      But the performance doesn't reach the 56Gb level.

      I tested with iPerf using the 8 parallel streams option on two separate ESXi 6.5 update 1 hosts; a sketch of the commands is below.
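
      For reference, this is roughly the kind of invocation used (a minimal sketch; the IP address, duration, and the exact iperf binary available on the ESXi hosts are placeholders):

      On the server host:
      # iperf -s

      On the client host (8 parallel streams for 30 seconds):
      # iperf -c 192.168.100.2 -P 8 -t 30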

      The results are below.

       

      A. ESXi 6.5 inbox Ethernet driver 3.16.0

      * 56Gb Ethernet iPerf client - physical ESXi 6.5 update 1 host 01 with MTU 4092

      ETH_iPerf_Client_Performance_02.JPG

      * 56Gb Ethernet iPerf server - physical ESXi 6.5 update 1 host 01 with MTU 4092

      ETH_iPerf_Server_Performance_01.JPG

      B. Mellanox ESXi Ethernet driver 1.9.10.6

      * 56Gb Ethernet iPerf client - physical ESXi 6.5 update 1 host 01 with MTU 4092

      ETH_iPerf_Client_Performance_02.JPG

      * 56Gb Ethernet iPerf server - physical ESXi 6.5 update 1 host 02 with MTU 4092

      ETH_iPerf_Server_Performance_01.JPG

      C. Mellanox ESXi IPoIB driver 1.8.2.5

      * 56Gb IPoIB iPerf client - physical ESXi 6.5 update 1 host 01 with MTU 4092

      IPoIB_iPerf_Client_Performance_02.JPG

      * 56Gb IPoIB iPerf server - physical ESXi 6.5 update 1 host 01 with MTU 4092

      IPoIB_iPerf_Server_Performance_01.JPG
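
      (For reference, the MTU 4092 noted above is typically set on both the vSwitch and the vmkernel interface; a minimal sketch, with the vSwitch and vmkernel names as placeholders:)

      # esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=4092
      # esxcli network ip interface set --interface-name=vmk1 --mtu=4092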

      You said the Mellanox ConnectX-3 supports 56Gb Ethernet link-up and performance, but it doesn't even reach the 40 or 50Gb performance level.

      The same test was completed on an Intel X520-DA2 with an SX6036G 10Gb port, and it showed stable 10GbE performance.

      But your ConnectX-3 shows an unstable performance pattern in 10, 40, and 56GbE port modes.

      How can I resolve this issue?

       

      BR,

      Jae-Hoon Choi

        • Re: Unstable ConnectX-3 Ethernet Performance on ESXi 6.5 update 1
          aviap

          I would suggest that you start with the following:

          1. “Performance Tuning” for Mellanox adapters and drivers: ensure you have the proper BIOS configuration, check the CPU core frequency, and verify that the PCIe generation suits the adapter, etc. (a verification sketch follows the link).

          https://community.mellanox.com/docs/DOC-2489#jive_content_id_Getting_started
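
          For example, before tuning you can confirm the link speed, MTU, and driver/firmware versions that the host actually sees, and which PCI device the adapter is (a minimal sketch; vmnic4 is a placeholder for your ConnectX-3 uplink):

          # esxcli network nic list
          # esxcli network nic get -n vmnic4
          # esxcli hardware pci list | grep -i -B 4 -A 4 Mellanox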

          2. Check the Mellanox inbox driver by using the "esxcli software vib list | grep nmlx4" command.

          for example:
          # esxcli software vib list |grep nmlx4
          Name                           Version                               Vendor  Acceptance Level  Install Date
          -----------------------------  ------------------------------------  ------  ----------------  ------------
          nmlx4-core                     3.0.0.0-1vmw.600.0.0.2494585          VMware  VMwareCertified   2015-12-16
          nmlx4-en                       3.0.0.0-1vmw.600.0.0.2494585          VMware  VMwareCertified   2015-12-16
          nmlx4-rdma                     3.0.0.0-1vmw.600.0.0.2494585          VMware  VMwareCertified   2015-12-16

          3. Assuming the 56Gb NIC is up, check that NetQueue is enabled (it is enabled by default); a command-line check is sketched below the screenshot.
          netqueue.jpg
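
          One way to confirm this from the command line (a sketch; it assumes the VMkernel boot option for NetQueue is named netNetqueueEnabled):

          # esxcli system settings kernel list -o netNetqueueEnabled

          The Configured and Runtime values should both show TRUE when NetQueue is enabled.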

          4. Set the nmlx4_en module parameter num_rings_per_rss_queue to 4.
          # esxcli system module parameters set -m nmlx4_en -p "num_rings_per_rss_queue=4"
          # reboot
          # esxcli system module parameters list -m nmlx4_en

          If the suggestions above have been implemented and you still have low performance, then contact Mellanox support (support@mellanox.com) to get further assistance.