16 Replies Latest reply on May 7, 2013 10:05 PM by ali

    SR-IOV on ESXi 5.1

      Since vSphere 5.1 now supports SR-IOV, I have been searching the manuals for how to enable VFs with the ESXi 1.8.1 OFED driver (ConnectX through ConnectX-3).

       

      No luck so far...

       

      Cheers, Mike

        • Re: SR-IOV on ESXi 5.1
          ali

          Hi mruepp,

          OFED 1.8.1 for ESX Server does not support SR-IOV.

          Please contact us for roadmap and availability.

          • Re: SR-IOV on ESXi 5.1
            siszlai

            Is this also true for the new ConnectX-3 Pro?

            According to this http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-3_Pro_Card_EN.pdf

            ConnectX-3 Pro supports SR-IOV and VMware, but it doesn't mention which OFED version is required.

            • Re: SR-IOV on ESXi 5.1

              What other solutions exist for using RDMA/SDP in multiple virtual machines (vSphere) over a single Mellanox card? Is it possible to use VMXNET 3 for this? With which OFED driver?

               

              Thanks,

               

              Mike

                • Re: SR-IOV on ESXi 5.1
                  ali

                  Hi mruepp,

                  vSphere supports several virtualization technologies, listed below; you may use option #a.

                   

                   

                  a. vmnic/vmhba para-virtualization: for networking/storage services, this is supported today using OFED 1.8.1 (or later) for ESX 5.x. Mellanox currently supports the IPoIB protocol for network virtualization, where you can have a vmnic over an InfiniBand network (this involves the vSwitch and the VMXNET, or other, emulators), as well as the SRP protocol for storage virtualization, where you can have a vmhba over InfiniBand. For more information, check the user manual in the "related documents" section under: http://www.mellanox.com/page/products_dyn?&product_family=36&mtag=vmware_drivers
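
                  If option #a is working, the IPoIB uplink appears as an ordinary vmnic on the ESXi host. A quick diagnostic sketch (vmnic2 is a placeholder name; actual names vary per system):

                  ```shell
                  # List all uplinks; an IPoIB interface from the Mellanox OFED
                  # driver shows up as a regular vmnic alongside Ethernet ports.
                  esxcli network nic list

                  # Show driver and link details for one uplink
                  # (vmnic2 is a placeholder).
                  esxcli network nic get -n vmnic2
                  ```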

                   

                  b. Full-function DirectPath I/O: where you can give a SINGLE virtual machine direct access to a single HCA adapter;
                  see option #1 under: http://cto.vmware.com/wp-content/uploads/2012/09/RDMAonvSphere.pdf. This is supported today.

                   

                  c. SR-IOV DirectPath I/O: where you can give MULTIPLE virtual machines direct access to a single HCA adapter. This will be supported in the next OFED release (post OFED 1.8.1); see option #2 under: http://cto.vmware.com/wp-content/uploads/2012/09/RDMAonvSphere.pdf
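
                  For reference, on Linux hosts today the number of ConnectX virtual functions is set through the mlx4_core `num_vfs` module parameter; a future ESXi OFED release would presumably expose something analogous through driver module options. A hedged sketch (the esxcli syntax is real, but the ESXi module and parameter names are assumptions until the SR-IOV-capable release ships):

                  ```shell
                  # Linux host today: load mlx4_core with 8 virtual functions,
                  # none of them probed by the host itself.
                  modprobe mlx4_core num_vfs=8 probe_vf=0

                  # Hypothetical ESXi equivalent (module and parameter names
                  # are assumptions, not a shipped interface):
                  esxcli system module parameters set -m mlx4_core -p "num_vfs=8"
                  ```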

                   

                  d. vRDMA (RDMA para-virtualization): this is an experimental project (not available to customers); see option #3 under: http://cto.vmware.com/wp-content/uploads/2012/09/RDMAonvSphere.pdf

                    • Re: SR-IOV on ESXi 5.1

                      Ali,

                       

                      Do you have an idea of the expected timeline for option #c, i.e. when the next OFED release with SR-IOV support will be available?

                       

                      I'm attempting to build a virtual lab for developing and testing RDMA software, and was looking to use SR-IOV with a ConnectX-3 EN dual-port 10G adapter on ESXi 5.1 across multiple virtual machines.

                        • Re: SR-IOV on ESXi 5.1
                          ali

                          Hi vanceitc,

                          Please contact Mellanox support for roadmap and availability. I don't have this information.

                          I suggest using option #b until we have #c ready for public use.

                            • Re: SR-IOV on ESXi 5.1
                              oliver8888

                              ali,

                               

                              would you be able to provide guidance for option #b with VMware ESXi? Specifically, known-working HCA/mainboard combos, and which (if any) /etc/vmware/passthrough.map variables may need to be changed from the defaults?
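
                              For reference, /etc/vmware/passthrough.map entries take the form `vendor-id device-id resetMethod fptShareable`. One workaround that has been reported for Mellanox HCAs (PCI vendor ID 15b3) is forcing the d3d0 reset method; treat this as a hedged sketch rather than an official recommendation:

                              ```
                              # /etc/vmware/passthrough.map
                              # vendor-id  device-id  resetMethod  fptShareable
                              # Mellanox (15b3): match all device IDs, use a
                              # D3->D0 power-state reset instead of the default.
                              15b3  ffff  d3d0  default
                              ```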

                               

                              I have a lab, and I have been (unsuccessfully) trying to use option #b for some time. In my attempts to get this working, I have acquired a number of different cards and tried them across a number of different mainboards in every configuration I can imagine. The mainboards do work with option #b for other hardware devices, but with the Mellanox products they all cause the Windows VM to which the device was passed through to crash/reboot upon driver install.

                               

                              Cards I have (all on the latest firmware): MCX354A-FCBT, MHQH29B-XTR Rev X1, MHQH29B-XTR Rev A2, MHQH29B-XTR (HP rebadge).
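
                              To confirm exactly what firmware each card reports, the Mellanox MFT `flint` tool can query a device directly from a Linux host; the device path below is a placeholder taken from `mst status` output:

                              ```shell
                              # Start the MST service and list detected Mellanox devices.
                              mst start
                              mst status

                              # Query firmware version and PSID on one device
                              # (the /dev/mst path is a placeholder from mst status).
                              flint -d /dev/mst/mt26428_pci_cr0 query
                              ```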

                               

                              Mainboards currently at my disposal: Supermicro X9SRL-F, TYAN S5510 (S5510GM3NR) (I have tried a couple of others too).

                               

                               

                              thank you.

                                • Re: SR-IOV on ESXi 5.1
                                  ali

                                  Hi oliver8888,
                                  What's the FW version you're using?

                                  What's the Windows VM driver version?

                                  Can you please try a Linux VM and let us know if it works? (I know that a Linux VM works in the lab, so we can have the same reference point.)
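
                                  For the Linux VM test, a few standard commands confirm whether the passed-through HCA is visible and its driver bound (a diagnostic sketch; the utilities come from the libibverbs-utils and infiniband-diags packages, whose exact names vary by distribution):

                                  ```shell
                                  # Is the HCA visible on the guest's PCI bus?
                                  lspci | grep -i mellanox

                                  # Did the mlx4 driver modules load and bind?
                                  lsmod | grep mlx4

                                  # Query the verbs device and port state.
                                  ibv_devinfo
                                  ibstat
                                  ```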

                                    • Re: SR-IOV on ESXi 5.1
                                      inbusiness

                                      Hi, Ali!

                                       

                                      I also tried the DirectPath passthrough configuration, but it failed.

                                      When I enabled Intel VT-d on my X8DTN+-FLR, vSphere was unable to recognize the MHQH19-XTC.

                                       

                                      But another card, an MHQH29B-XTR, was always OK!

                                       

                                      The big problem is that a Windows VM with the QDR HCA (via Intel VT-d) always BSODs and then reboots continuously, forever.

                                       

                                      The QDR HCA's firmware was 2.9.100, or 2.9.1200 where available.

                                      vSphere 5.1 update 1

                                      VM hardware 9

                                       

                                      I have read up on all aspects of vRDMA over the last year.

                                       

                                      I think vRDMA is the best choice, and you should work with VMware on it... :)

                                       

                                      SRP is a very powerful protocol, but it has many problems for extreme solid-state storage.

                                       

                                      iSER is the better choice for a virtualized environment.

                                       

                                      I tested SMB 3.0 in my personal lab with Windows Server 2012. Performance was good, but it is not a SAN feature and not a standard. That seems very funny to me... :)

                                       

                                       

                                      Anyway....

                                      I dropped SRP, IPoIB, and DirectPath passthrough with Intel VT-d.

                                       

                                      If there is a vRDMA experimental project...

                                      Do you have an official plan?

                                      By that I mean working with VMware to add a new vRDMA feature to the ESXi hypervisor.

                                       

                                       

                                        • Re: SR-IOV on ESXi 5.1
                                          justinclift

                                          inbusiness wrote:

                                           

                                          SRP is a very powerful protocol, but it has many problems for extreme solid-state storage.

                                           

                                          Out of curiosity, what are the problems with SRP?  Are they just ESX 5.1 and Windows related, or something more fundamental?

                                            • Re: SR-IOV on ESXi 5.1
                                              inbusiness

                                              There isn't a problem with SRP itself... :)

                                              SRP was born many years ago.

                                              Performance was good!

                                              But SRP doesn't have an authentication step like iSER does.

                                               

                                              For example, a 1:1 connection is very simple, but a multi-path environment is not (if all HCAs and TCAs have 2 ports).

                                              I prefer 1:1 port mapping; performance was best that way!

                                               

                                              The configuration is very complex and differs by client OS.

                                               

                                              vSphere always connects each HCA port GUID to each TCA port GUID,

                                              but Windows does not.

                                              (It isn't supported there anymore... :()
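
                                              For comparison, on a Linux initiator the per-port-GUID mapping described above is explicit: `ibsrpdm -c` prints one ready-made connection string per discovered target port, which you write to the SRP initiator's add_target file. A sketch (all GUID values are placeholders, and the srp-mlx4_0-1 path depends on the local HCA and port):

                                              ```shell
                                              # Discover SRP target ports reachable from this HCA; each
                                              # output line carries the target's IOC GUID and destination GID.
                                              ibsrpdm -c

                                              # Log in to one target (the string is a placeholder copied
                                              # from ibsrpdm output).
                                              echo "id_ext=0002c903000e8acc,ioc_guid=0002c903000e8acc,dgid=fe800000000000000002c903000e8acd,pkey=ffff,service_id=0002c903000e8acc" > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
                                              ```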

                                               

                                              I want a standard across many different OS environments, but there is nothing.

                                               

                                              There is something related to a T10 standard, though.

                                               

                                              That's all...:)

                                               

                                               

                                          • Re: SR-IOV on ESXi 5.1
                                            oliver8888

                                            Hi Ali,

                                             

                                            Thanks for the response.


                                            FW version: currently 2.9.1000 for the ConnectX-2 VPIs and 2.11.0500 for the ConnectX-3 VPIs.

                                             

                                            Windows VM driver version: Windows 2008 R2 x64, OFED 3.2.0, VM version 9. (I have tried each OFED software package correlated to the proper firmware, going back ~2 years at this point.)

                                                           

                                             

                                            Can you please try a Linux VM and let us know if it works: I initially tried Linux (~2 years back); the drivers wouldn't detect/enable properly, so I moved to Windows, as I figured the driver support would be better, as is usually the case with hardware vendors... I can certainly try again. Can you tell me which Linux distribution/version your lab uses? If we compare things as close to apples-to-apples as possible, that would be beneficial.

                                             

                                            On a side note, I feel like your lab should have a Windows DirectPath test system, as it would be trivial to maintain. I am itching to try some Hyper-V/SMB 3.0 when I get some more time, and doing so completely virtually would make things much easier.


                                            Thank you