
    Red Hat VMs on ESXi 6.5 U1 (build 7967591) cluster - poor 40Gb performance

    doregan21

      Hi All,

       

      Performance issue on 8 x Dell R740 servers with ESXi 6.5 and vCenter 6.5

       

      Each server needs to host 4 VMs (Red Hat 7.4) with 1Gb, 10Gb and 40Gb
      network access. To accommodate that we created 10Gb and 40Gb distributed
      switches and uplinked 2 ports each from the 40Gb and 10Gb switches to the
      servers.
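
      For what it's worth, the distributed switch and uplink config can be confirmed from any of the hosts with the commands below (the output will obviously differ per host; this is just how we sanity-checked ours):

          # list the distributed switches, their uplinks and MTU as seen by this host
          esxcli network vswitch dvs vmware list

          # list physical NICs with driver, link status and speed, to confirm the 40Gb ports are up at 40000Mbps
          esxcli network nic list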

       

      The 10Gb distributed switch works fine, but our 40Gb distributed switch is problematic.

      We are using the tool iPerf to validate the bandwidth between the cluster and an Isilon storage array.
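
      For reference, the tests are along these lines (iperf3 syntax shown; the IP address, stream count and duration here are illustrative rather than the exact values we used):

          # on the receiving end (Isilon node or VM)
          iperf3 -s

          # from the sending end: 8 parallel streams for 30 seconds (10.40.0.11 is just a placeholder address)
          iperf3 -c 10.40.0.11 -P 8 -t 30

          # same connection with the direction reversed (server sends, client receives)
          iperf3 -c 10.40.0.11 -P 8 -t 30 -R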

       

      Troubleshooting steps:

       

      - iPerf from one Isilon node to another Isilon node consistently gets over 35Gb/s.

      - From a VM to an Isilon node, iPerf consistently gets over 35Gb/s.

      - From an Isilon node to a VM, iPerf ranges between 13Gb/s and 20Gb/s.

      - Between Red Hat VMs on separate ESXi hosts, iPerf ranges between 13Gb/s and 20Gb/s.

      - Between VMs on the same ESXi host, iPerf gets over 35Gb/s.

      - We checked the firmware of the 40Gb card (MLNX 40Gb 2P ConnectX3Pro Adpt) and it is at the latest version (2.42.5000), verified with the commands shown after this list.
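
      The firmware and driver versions were confirmed on the hosts roughly as follows (vmnic4 is just an example name for one of the 40Gb ports; adjust for your own hosts):

          # driver and firmware version reported for a given 40Gb port
          esxcli network nic get -n vmnic4

          # installed Mellanox driver VIBs on the host
          esxcli software vib list | grep -i mlx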

       

       

      Has anyone encountered a similar issue, or can anyone advise what steps need to be taken for ESXi 6.5 and the Mellanox cards (MLNX 40Gb 2P ConnectX3Pro Adpt)?