11 Replies Latest reply on Dec 2, 2017 3:55 AM by inbusiness

    SR-IOV in ESXi and vSphere 6.5

    agiraud@pyrosun.com

      Hello !

       

      We are evaluating the possibility of upgrading our vSphere 6.0 infrastructure to 6.5 (vCenter and ESXi hosts).

       

      We have a ScaleIO test deployment on ESXi 6.0 hosts, with ConnectX-3 (MCX354A-FCBT, firmware 2.4) adapters in Ethernet mode and SR-IOV used for cluster data and storage backend communication, interconnected by SX1036 Ethernet switches.

       

      I am trying to find out whether SR-IOV with ConnectX-3 adapters is supported in ESXi 6.5, but I have had no success with the native driver: no virtual functions appear in the PCI devices list, even though setting max_vfs seemed to succeed.
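
      For reference, this is roughly the sequence I tried with the native driver (standard esxcli commands; the VF count of 8 is just what I tested with, and max_vfs is the parameter name the Mellanox mlx4 drivers use):

         # check which parameters the native module actually exposes
         esxcli system module parameters list -m nmlx4_core

         # try to set the number of virtual functions, then reboot the host
         esxcli system module parameters set -m nmlx4_core -p "max_vfs=8"

         # after the reboot, check whether any virtual functions show up as PCI devices
         esxcli hardware pci list | grep -i mellanox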

       

      I also attempted to remove the native VIBs and install OFED 2.4, but that caused the adapter to disappear completely from the network adapter list.
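
      And this is the kind of sequence I used for the OFED attempt (the VIB names below are the inbox Mellanox native-driver VIBs as they appear on my hosts, and the bundle path is just a placeholder):

         # remove the inbox native Mellanox VIBs
         esxcli software vib remove -n nmlx4-core -n nmlx4-en -n nmlx4-rdma

         # install the Mellanox OFED offline bundle (placeholder path), then reboot
         esxcli software vib install -d /tmp/MLNX-OFED-ESX-bundle.zip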

       

      Any idea or advice ?

       

      Thanks !

        • Re: SR-IOV in ESXi and vSphere 6.5
          inbusiness

          Hi!

          The current ScaleIO release doesn't support ESXi 6.5.

          I saw on the ScaleIO community that a new ScaleIO release supporting ESXi 6.5 is on the way:

          EMC Community Network - DECN: ScaleIO 2.X and Vmware 6.5

           

          Mellanox doesn't properly support the ESXi 6.5 environment right now.

           

          If you don't use RDMA-based protocols, just wait for the new ScaleIO release for ESXi 6.5, then build a new ScaleIO configuration with the ESXi 6.5 inbox Ethernet-only driver.

           

          Have a nice day...:)

            • Re: SR-IOV in ESXi and vSphere 6.5
              agiraud@pyrosun.com

              Hello Jaehoon !

               

              There don't seem to be many scale-out virtual storage solutions out there that leverage RDMA.

               

              The reason for using SR-IOV is to try to cut latency, and I don't think this is possible with the native inbox 6.5 drivers. Input or feedback from Mellanox would be much appreciated here.
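
              For what it's worth, this is roughly how I check whether a host actually has any active virtual functions (standard esxcli commands; vmnic2 is just an example uplink name):

                 # list the NICs that currently have SR-IOV enabled
                 esxcli network sriovnic list

                 # list the virtual functions of a given uplink (vmnic2 is an example name)
                 esxcli network sriovnic vf list -n vmnic2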

               

              I know another solution would be to add another adapter and pass it through directly to the SDS VMs, but this would add significant complexity and cost, as additional switches would be needed to accommodate the new ports. Moreover, it would be hardware overkill.

              I guess we will stick with 6.0 for the time being.

               

              Mellanox's VMware support for older SKUs always lags behind. This seems like a deliberate move to push capex toward new equipment, since drivers are readily available for newer adapters.

              This is not really encouraging for the existing loyal customer base...

               

              Thanks for your input !

                • Re: SR-IOV in ESXi and vSphere 6.5
                  inbusiness

                  Hi!

                  You are correct!


                  ScaleIO is a very powerful hyper-converged storage solution.

                  I'm also thinking about its future.


                  If the new ScaleIO can support RoCE over Ethernet, I'll test it on our vSphere infrastructure.


                  But physical & virtual (ESXi-based) deployment is very difficult with Mellanox today.


                  I'll wait for solid ESXi driver support from Mellanox.


                  Jaehoon Choi





                    • Re: SR-IOV in ESXi and vSphere 6.5
                      robertw

                      Is there any progress with SR-IOV? We are already on ESXi 6.5 U1 and it still does not work :/. The native driver still has no support for max_vfs (esxcli system module parameters list -m nmlx4_core), tested on a ConnectX-3 EN:

                      enable_64b_cqe_eqe int     Enable 64 byte CQEs/EQEs when the the FW supports this

                         Values : 1 - enabled, 0 - disabled

                         Default: 0

                       

                      enable_dmfs        int     Enable Device Managed Flow Steering

                         Values : 1 - enabled, 0 - disabled

                         Default: 1

                       

                      enable_qos         int     Enable Quality of Service support in the HCA

                         Values : 1 - enabled, 0 - disabled

                         Default: 0

                       

                      enable_rocev2      int     Enable RoCEv2 mode for all devices

                         Values : 1 - enabled, 0 - disabled

                         Default: 0

                       

                      enable_vxlan_offloads   int     Enable VXLAN offloads when supported by NIC

                         Values : 1 - enabled, 0 - disabled

                         Default: 1

                       

                      log_mtts_per_seg   int     Log2 number of MTT entries per segment

                         Values : 1-7

                         Default: 3

                       

                      log_num_mgm_entry_size  int     Log2 MGM entry size, that defines the number of QPs per MCG, for example: value 10 results in 248 QP per MGM entry

                         Values : 9-12

                         Default: 12

                       

                      msi_x              int     Enable MSI-X

                         Values : 1 - enabled, 0 - disabled

                         Default: 1

                       

                      mst_recovery       int     Enable recovery mode(only NMST module is loaded)

                         Values : 1 - enabled, 0 - disabled

                         Default: 0

                       

                      rocev2_udp_port    int     Destination port for RoCEv2

                         Values : 1-65535 for RoCEv2

                         Default: 4791
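
                      A quick way to confirm there is nothing VF-related in the native module is to filter the same listing (this just pipes the command above through grep):

                         esxcli system module parameters list -m nmlx4_core | grep -i vf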