56 Replies Latest reply on May 13, 2017 1:47 AM by inbusiness

    Which ESXi driver to use for SRP/iSER over IB (not Eth!)?

    mpogr

      Hi all,

       

      I'm a bit puzzled by the current situation with Mellanox drivers for VMware ESXi. It looks like SRP support is done and dusted and won't be developed anymore. A pity, but fine, as long as iSER is there, right?

       

      However, it looks like 2.x packages lack iSER support as well (regardless of where you look, either under VPI or ETH drivers section). IPoIB is there though.

      The 1.9.x packages do include iSER support, but, on the other hand, they lack IPoIB. Which means you can use iSER only in ETH mode.

       

      So, what's the direction? Which drivers is one supposed to use to get iSER (read: fast RDMA storage) working over IB? Are we all supposed to upgrade to the newest Connect-X4 cards and switches just because they run at the same speed in both IB and ETH modes? What about those people who have FDR IB but only 40GbE, or who have only IB-capable switches?

       

      What I do personally is use the 1.8.3 beta for iSER and 1.8.2 for SRP, even with the latest builds of ESXi 6.0. Luckily, they still work, although unsupported, but the moment they break, what am I supposed to do?

       

      Thanks!

        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
          galbitz@albitz.biz

          I have the same question as well. The drivers for ConnectX-4 have never really materialized for VMware and InfiniBand storage.

          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
            david

            This is one of the reasons I have given up on InfiniBand for our storage: dropping protocol support and dropping adapter support whenever newer versions appear. Just not worth it anymore.

            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
              mpogr

              Will we have the honour of someone from Mellanox responding to this one? They seem to be monitoring these forums pretty closely, so no response so far looks a bit strange...

              • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                inbusiness

                The Mellanox SRP initiator in vSphere OFED 1.8.2.4 performs well for me.

                But it has an issue with my ZFS SRP target that breaks the VM auto-start function on the ESXi host.

                SRP is a native RDMA SCSI protocol: a lightweight storage protocol, but a very ancient one.

                 

                The Mellanox iSER initiator in vSphere OFED 1.8.3 also performs well for me.

                The IPoIB iSER initiator works properly with my ZFS iSER target, and the VM auto-start function on the ESXi host works perfectly.

                 

                But both of these protocols on ESXi 6.x are a one-way trip to PSOD land.

                 

                IPoIB iSCSI has historically been a poor performer.

                It gave me only 450~650 MB/s of throughput between a dual-port ConnectX-2 HCA and a ZFS IPoIB iSCSI target.

                AND~ the very high processor usage will send you on a tour of the Andromeda Galaxy above your roof!

                 

                I decided to move to IPoIB iSCSI with vSphere OFED 2.4.0 and try the SR-IOV function in an ESXi 6.x environment,

                because vSphere OFED 2.4.0 supports SR-IOV as well as both IB and ETH modes (VPI).

                 

                vSphere OFED 2.4.0 showed me a big improvement in IPoIB iSCSI performance with two MHQH19B-XTR QDR HCAs.

                Peak performance was over 2 GB/s...

                But still nothing like an RDMA protocol...

                 

                01. Mellanox vSphere OFED 2.4.0 Iometer test of IPoIB iSCSI with two MHQH19B-XTR HCAs and an OmniOS ZFS iSER target

                 

                Also SR-IOV support was impressive.

                Granted, ConnectX-2 is an EOL product, so I built custom firmware (from the version 2.9.1314 binary on IBM's site) to enable SR-IOV.

                 

                Here are the results.

                 

                02. SR-IOV VF list for the MHQH19B-XTR ConnectX-2 HCA

                 

                03. vSphere 6.0 Update 2 Build 4192238 PCI device list

                04. Windows guest RDMA communication test

                 

                Mellanox shows a good product concept and many features in their brochures.

                But it's almost impossible to get good manuals and best-practice guides like you can from Dell and others.

                 

                The test results above came only after many failures.

                 

                Unstable driver!

                Unstable firmware!

                Undocumented driver options!

                 

                For what?

                 

                RDMA is a good concept, and RDMA NICs show excellent performance!

                Mellanox says their products support almost every major OS environment.

                 

                But there have always been many bugs and limitations in almost every OS environment.

                 

                The latest products, ConnectX-4 and ConnectX-5, are also useless in vSphere and Hyper-V environments.

                 

                vSphere 5.x is almost at EOL, and historically Mellanox has shipped only beta-level drivers and the laziest support for the vSphere environment.

                 

                The vSphere OFED 1.8.3 beta IPoIB iSER initiator has a critical bug that was shown publicly some years ago.

                I think that initiator is poor quality: a tricky, experimental modification of the SRP initiator!

                Anyone can see how critical that issue was from the resolved-issues section of the vSphere Ethernet iSER driver 1.9.10.5 release notes.

                 

                I'm waiting for VMworld in Aug. 2016, and expect a vRDMA NIC driver in the new VMware Tools plus stable IPoIB iSER initiator support.

                 

                I'll make a decision in the near future whether to move or not...

                • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                  jasonc

                  This is almost exactly the question I am currently trying to answer for myself.

                  We have a few ESXi hosts currently on v5.5 with dual port VPI adapters.

                  We want to run SRP/iSER on one port back to our SAN and we want to run IPoIB on the other port for vMotion communications.

                  We don't want to go to Ethernet based adapters and/or have to buy managed switches to achieve this.

                  Ultimately we don't want to implement this and then find it won't run on v6.x either, so forward compatibility is important as well.

                  • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                    mpogr

                    7 days later and no reply from Mellanox... Seriously?

                    • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                      mpogr

                      Looking at the current driver selection across different OSes/cards/media (IB vs. ETH), it looks like the only space consistently supported by Mellanox is Linux. Indeed, for Linux you have everything:

                      • every card is supported (all the way from Connect-X2 to Connect-X5)
                      • IB and ETH and the possibility to switch from one to another for the cards that support it (VPI), and you can even use IB on one port and ETH on another
                      • iSER initiator is supported across the board and iSER target is supported with both LIO and SCST
                      • SRP initiator is supported across the board and SRP target is supported with SCST
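
                      To make that Linux-side claim concrete, here is a minimal sketch of exporting a block device as an iSER-capable LIO target with targetcli. The device path, IQN, and portal IP below are placeholders, and the enable_iser syntax may differ slightly between targetcli versions, so treat this as an illustration rather than a tested recipe:

                      ```shell
                      # Minimal LIO iSER target sketch (run as root on a Linux target box).
                      # /dev/sdb, the IQN, and 10.0.0.1 are placeholders -- adjust for your setup.

                      # Create a block backstore from a raw device
                      targetcli /backstores/block create name=vol0 dev=/dev/sdb

                      # Create an iSCSI target and map the backstore as LUN 0
                      targetcli /iscsi create iqn.2016-06.local.lab:vol0
                      targetcli /iscsi/iqn.2016-06.local.lab:vol0/tpg1/luns create /backstores/block/vol0

                      # Listen on the RDMA-capable interface, then flip the portal to iSER
                      targetcli /iscsi/iqn.2016-06.local.lab:vol0/tpg1/portals create 10.0.0.1 3260
                      targetcli /iscsi/iqn.2016-06.local.lab:vol0/tpg1/portals/10.0.0.1:3260 enable_iser true

                      # Persist the configuration
                      targetcli saveconfig
                      ```

                      The same backstore can be exported over SRP instead by loading the ib_srpt fabric module, which is why the Linux stack covers both protocols so easily.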

                       

                      So, if you use KVM as your hypervisor, there is no problem.

                       

                      However, if you want to use Mellanox IB technology in conjunction with currently the most popular hypervisor (VMware ESXi), you're in trouble:

                      • there is no official support for ESXi 5.5 and up for any card older than Connect-X3
                      • the only VPI cards supported in IB mode are Connect-X3/Pro
                      • Connect-IB cards are not supported at all
                      • Connect-X4 cards are supported only in ETH mode
                      • dual-port VPI cards support only the same protocol (IB or ETH) on both ports, not a mix
                      • SRP initiator is no longer available
                      • iSER initiator is available only with 1.9.x.x drivers only over ETH and only for Connect-X3/Pro cards
                      • the current IB driver 2.3.x.x is compatible only with ESXi 5.5 (not 6.0!), works only with Connect-X3/Pro cards and includes neither SRP nor iSER initiator

                       

                      My question is very simple: what's Mellanox's long-term strategy with regard to hypervisor support? Are they suggesting that everyone considering Mellanox products should switch to KVM as their hypervisor of choice? Or should they abandon RDMA and use Mellanox adapters/switches only as 56/100GbE network infrastructure?

                       

                      I would REALLY appreciate some reaction from Mellanox staff, who have no doubt already seen this thread but, for some reason, have chosen not to react to it...

                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                          inbusiness

                          KVM?

                          No! KVM also has many limitations.

                          EoIB, for example.


                          InfiniBand communication relies on the SM (Subnet Manager).


                          The SM consists of several components and APIs.

                          But the SM architecture wasn't designed for the hypervisor world.


                          Historically, many problems have existed in the vSphere environment.


                          1st. ESXi

                          Back in the vSphere 4.x era, VMware gave us two choices:


                          ESX and ESXi.

                          ESX consisted of the hypervisor plus an OEMed Red Hat console.

                          ESXi consists of the hypervisor only.


                          Some IB tools didn't work on an ESXi host in my experience, but those same tools worked nicely on an ESX host.


                          ESXi isn't a general-purpose kernel.

                          I think that causes major IB driver porting problems.


                          2nd. InfiniBand design itself!!!

                          The hypervisor controls all communication between guest VMs and the host network, while RDMA has a kernel-bypass feature called zero copy, or RDMA read/write.


                          This feature is controlled by the SM, and adding a hypervisor to this network requires many complex modifications to the SM and the IB APIs.


                          There is no IBTA standard for this yet.

                          It will be standardized in the near future.


                          3rd. RDMA storage protocols.

                          The InfiniBand specification covers all of RDMA and the ULP protocols.


                          Linux OFED improves very quickly.

                          No one knows which OFED version will be ported to the latest ESXi version.


                          Much complexity exists, and many issues must be resolved, in the ESXi environment.


                          iSER is also a good candidate for an ESXi RDMA protocol, but some critical problems exist.


                          I think we need only look at the latest Linux OFED release notes, which include so many bugs and limitations.


                          Linux is a good platform, but it also suffers from IB's unique limitations.


                          Conclusion.

                          I think IB is the fastest and most efficient high-speed protocol on the planet. But it's not ready for the enterprise network environment yet.


                          Mellanox says in their product brochures that they can support all major OS environments.


                          But in many cases they don't.

                          Beta-level drivers, manuals, bugs, limitations, etc.


                          Certainly, some time from now all these problems will be overcome with new standards and products.


                          But not now...






                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                          mpogr

                          Two months later and still no reaction from Mellanox. They post lots of information here, mainly about new products, but what about taking care of existing products and customers?

                           

                          I have seen the 2.4.0 drivers published under the VPI/VMware driver section, but these are not new drivers; it seems a mistake was corrected (these drivers previously appeared under the ETH/VMware section). They do add official support for ESXi 6.0, but still no SRP/iSER. So, effectively, they allow using ConnectX-3/Pro VPI adapters as NICs over an IB fabric (using IPoIB), but that's it. There still seems to be no support at all(!) for Connect-IB and ConnectX-4 VPI adapters, which I find borderline insulting.

                           

                          Any chance to get ANY reply from Mellanox staff?

                            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                              inbusiness

                              I heard about the launch of the new vSphere 6.5 yesterday.

                               

                              I also heard, from a source I can't remember exactly now, that Mellanox will drop InfiniBand iSER support on ConnectX-3 and below.

                               

                              I gave up on using InfiniBand and changed all my SX6036G switches' operation mode to Ethernet.

                               

                              All ULPs were unstable on vSphere 6.0 Update 2, even with vSphere 6.0 OFED 2.4.0.

                               

                              Mellanox switch systems show me good performance and low power consumption, but also low satisfaction with their CLI configuration and functions, unlike those from Dell, Arista, etc... :(

                               

                              I will wait for the public release of vSphere 6.5 and then decide whether to stay with Mellanox or not.

                               

                              They show horrible software support and quality every time.

                               

                              I'm very disappointed by it.

                               

                              But I've also heard about industry-first support for vRDMA in vSphere 6.5.

                               

                              So I'll keep waiting for vSphere 6.5.

                            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                              inbusiness

                              What are your RDMA storage target and OS?

                              I prefer Solaris-family ZFS COMSTAR, but it won't work properly with ESXi 6.0 and above.

                               

                              Best Regards.

                              • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                mpogr

                                OK, 4 months later, ESXi 6.5 is released, and still no reply from Mellanox. As expected, the previously working workarounds for SRP/iSER over IPoIB (uninstalling the inbox drivers and installing 1.8.x.x) no longer work. I even attempted using the 1.9.10.5 ETH drivers with my Connect-X3s and employing iSER, but this didn't work either. Which means I have to stick to ESXi 6.0, possibly until I decide to EoL my Mellanox hardware altogether.

                                 

                                Mellanox: what's your response to all this? Will there ever be a solution for RDMA storage for VMware, or you just decided it's not worth the effort?

                                  • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                    inbusiness

                                    Hi!

                                    ESXi 6.5 includes RDMA support, but there are several bugs.

                                    The ESXi hypervisor now has embedded RDMA support, co-developed by VMware and Mellanox, but many advances are still required.

                                     

                                    I think it will take another 2~3 years before they support iSER and vRDMA properly.

                                    By then ConnectX-3/4/5/6 will be deprecated, and a new generation of HCAs will support it.

                                    Mellanox always says their products support everything, but really only Linux is supported.

                                     

                                    Jaehoon Choi

                                      • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                        galbitz@albitz.biz

                                        Does ESXi 6.5 include RDMA support from the hypervisor back to storage, or is it just for guests? I am really only using SRP at a host-to-storage level; nothing inside the VMs is InfiniBand-aware.

                                         

                                        Mellanox: if you aren't going to release SRP drivers again, would you consider releasing the source for the old ones? We will take it from here...

                                          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                            mpogr

                                            Section 4.3 of this article suggests that the in-box ESXi drivers are RDMA-aware, which you can also conclude from the very fact that one of the modules related to these drivers is called nmlx4(5)_rdma. That doesn't mean, though, that RDMA is actually going to be used effectively for host-to-storage access. For iSCSI, that would require implementing an iSER layer, which the in-box driver clearly doesn't have (after all, it's explicitly called a "Software iSCSI Adapter" and it's like 100 years old). And, if VMware NFS could utilise RDMA, I'm pretty sure they'd have already told us about it.

                                            So, someone has to do some work here: either Mellanox or VMware or both. But they're not eager to tell us anything, as you can clearly see from the half-year lack of response to this thread...

                                              • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                jham

                                                Have any of you tried the "esxcli rdma iser add" command? I tried it on one host; it didn't generate any output, but this was logged in the dmesg log:

                                                 

                                                2017-01-13T08:05:35.347Z cpu17:67870 opID=3f4390b7)World: 12230: VC opID esxcli-f2-cc96 maps to vmkernel opID 3f4390b7

                                                2017-01-13T08:05:35.347Z cpu17:67870 opID=3f4390b7)Device: 1320: Registered device: 0x43044f391040 logical#vmkernel#com.vmware.iser0 com.vmware.iser (parent=0x1f4943044f3912b0)

                                                 

                                                I'm using the native drivers on a clean ESXi 6.5 install, with ConnectX-3 10GbE NICs (MT27520). My hope is that there are plans to add native iSER support, but that might just be wishful thinking?
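
                                                In case it helps anyone else poking at this, here are the follow-up commands I'd run to see whether anything iSER-related actually registered. This is only a sketch under the assumption that the 6.5 esxcli namespaces behave as documented; verify each command on your own host:

                                                ```shell
                                                # On the ESXi 6.5 host (SSH session). Output will vary per host.

                                                esxcli rdma iser add                 # same command as above; silent on success

                                                esxcli rdma device list              # lists RDMA-capable uplinks (vmrdma*), if any
                                                esxcli storage core adapter list     # a new iser-backed vmhba would show up here
                                                vmkload_mod -l | grep -i iser        # check whether the iser module is loaded
                                                dmesg | grep -i iser                 # look for registration lines like those quoted above
                                                ```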

                                                 

                                                VMware vSphere 6.5 Documentation Library

                                                  • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                    inbusiness

                                                    Hi!

                                                    I saw a link to the VMworld 2016 USA session "INF 8469: iSCSI/iSER Hardware SAN Performance over the Converged Data Center" on YouTube, but it has disappeared now... :(

                                                    That video showed me some future direction for iSER support on ESXi, and the presenters were two people from VMware and Mellanox.

                                                     

                                                    The iSCSI adapter numbering rule changed in ESXi 6.5, e.g. vmhba64.

                                                     

                                                    I guess a plan has been started, and a testing process is ongoing, to deliver native iSER initiator support in ESXi 6.5 + Update @ in 2017 or the following years.

                                                     

                                                    Just hold on!

                                                     

                                                    Jaehoon Choi

                                                      • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                        mpogr

                                                        This is definitely encouraging but clearly doesn't look like a finished piece of work... Any chance of getting an answer from mellanox-admin arlonm?

                                                          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                            inbusiness

                                                            Hi!

                                                            I also have many questions about iSER support in the ESXi environment.

                                                            Current RDMA support on ESXi is half-done on the VM guest side - IT SUPPORTS LINUX GUESTS ONLY - and some vRDMA functions conflict with the physical RDMA hardware functions and some drivers' namespaces, and some of the bugs are critical.

                                                             

                                                            I also think Mellanox and VMware are focused on RDMA VM networking for guests (vRDMA) right now, not on an RDMA storage protocol...

                                                             

                                                            There is a difference between ESXi and KVM.

                                                            This difference causes problems when porting drivers and storage protocols to ESXi.

                                                            The ESXi hypervisor embeds drivers for VM guests, but for now that means only native RDMA and vRDMA for guests.

                                                             

                                                            I've also been waiting a very long time for Mellanox to support a STABLE RDMA storage protocol and SR-IOV in the ESXi environment.

                                                             

                                                            But not now.

                                                          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                            galbitz@albitz.biz

                                                            I hope this slide is real and they are actively working on it. While I would rather see SRP support, I would definitely settle for an iSER storage target solution at this point. My guess is they are getting too caught up in the guest aspects of this; I would assume that is the more complicated problem versus host-level storage connectivity. At the moment I just use SRP for storage connectivity and the guests are unaware. Curious if anyone here is doing guest-level InfiniBand, or if it is just for the host.

                                                            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                              jasonc

                                                              For all of those who would like to watch the full presentation "INF 8469: iSCSI/iSER Hardware SAN Performance over the Converged Data Center", you can go to the URL listed below:
                                                              VMworld.com Breakout Sessions - Playback Library  https://www.vmworld.com/en/sessions/index.html

                                                               

                                                              On this page, under the splash, is a link "View the Breakout Sessions" where you can register and get free access to view content from VMworld 2016. If you search for INF8469 you can locate the breakout session and watch the complete recording.

                                                                • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                  jasonc

                                                                  I found the session interesting and informative. It still doesn't answer the question "What about IB?", but it does look like iSER is still being developed and will return.

                                                                   

                                                                  My take-away was that you still require RoCE-compatible adapters to get this to work. The minimum requirements discussed were CX-3 Pro adapters and, I guess, a 40G Ethernet switch.

                                                                  Realistically this probably means a Mellanox managed IB switch (because there aren't a huge number of 40G switch choices), which, when all added up, puts it way out of my budget.

                                                                   

                                                                  (Although apparently CX-2 adapters can do RoCE at 10G.)

                                                                  If iSER were possible with this, it might be a little closer to my budget, as I have these cards already and a 10G switch is feasible.

                                                    • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                      jasonc

                                                      Looks like the difference between CX-2/CX-3 and CX-3 Pro/CX-4 is RoCEv1 vs. RoCEv2...

                                                      I found the following page and its related pages informative as to why RoCEv1 vs. RoCEv2: RoCE v2 Considerations

                                                       

                                                      Although RDMA over Converged Ethernet - Wikipedia explains it most simply:

                                                      RoCE v1 is an Ethernet link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE v2 is an internet layer protocol which means that RoCE v2 packets can be routed

                                                      Further to this, even more interesting is the "RoCE versus InfiniBand" topic on the same page...

                                                       

                                                      So the takeaway from both of these articles, put together, seems to be this: InfiniBand using RDMA (and, as far as ESXi storage is concerned, SRP) may be better than RoCEvX (and, as far as ESXi storage is concerned, iSER), but the fact that RoCE runs on Ethernet gives those trying to leverage an RDMA-based storage protocol a cheaper entry point, because they already have Ethernet technology. (The problem is this is not always true, as most iSER runs on 10G and 40G Ethernet, which the user may or may not have an existing investment in.)

                                                      Also, the fact that RoCEv1 suffers from the latency issues associated with Ethernet (at L1/L2) in larger networks means that RoCEv2 (at L3) is the preferred platform, as it mitigates the latency with a congestion protocol and gets closer to InfiniBand-style performance. (Hence SRP still seems to outperform RoCEv2 at the same speed, but all RoCE development is on RoCEv2 going forward.)

                                                       

                                                      So I think, until I can afford CX-3 Pro adapters and the associated switching, the best option is to stay with SRP (which works with ESXi 5.5/6.0 and my existing InfiniBand investment), but the eventual move to ESXi 6.5 may necessitate far more planning and going to CX-3 Pro and RoCEv2 + iSER.

                                                       

                                                      I guess the question still boils down to why there is no SRP on ESXi 6.5, and looking at the whole picture I think it's simply a case of economics and demand (an important consideration when you have to foot the development costs: what can you make revenue from?). Also, more of the wider development (read: development for current platforms like ESXi 6.5) is happening on iSER because it's based on Ethernet, not InfiniBand, at the physical level, and maybe more tech companies see a benefit for themselves because they have Ethernet but not InfiniBand technology. These developers may also have more iSCSI experience than IB experience, which means they can leverage existing knowledge to move forward. Don't get me wrong, I still think InfiniBand is great, but despite how far SRP has come, I think long term InfiniBand will remain an HPC tool and RDMA-based storage will go iSER or RDMA Direct, depending on whose camp you settle in.

                                                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                          jasonc

                                                          And in my usage scenario there is another part to this problem.  I also need Windows connectivity to the same storage as ESXi.

                                                           

                                                          The problem is that with Server 2012 there is no SRP/iSER functionality at all, because MS is focusing all its efforts on SMB Direct (SMB3 over RDMA). (The problem with that is it won't talk to anything non-MS, which I guess is part of their plan too.) There is, however, legacy SRP functionality for Server 2008, so that solves the issue in the short term and also confirms SRP was the right solution at the right time.

                                                           

                                                          There is, however, at the moment no iSER functionality for Server 2008 (and not likely ever to be, because it's too old) and none for Server 2012 (because MS haven't done it and aren't interested in doing it) - BUT (and it's a big but) I believe there is work on an iSER client for Windows 2012 by a third party, though I have no idea how it's going.

                                                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                          jasonc

                                                          Also, finally, in another side issue to this (but still relevant): what are people using on the target side for SRP/iSER?

                                                           

                                                          Now I am well aware you can spin up a Linux distribution of your choice (again, what are people using here?) and install SCST or LIO on it, but I am looking for a more 'appliance-like' solution that has been tested (as I need to get this all up and running ASAP). ESXi was always very picky about which iSCSI target it would work well with.

                                                           

                                                          The best thing I have been able to find so far is ESOS - Enterprise Storage OS, which is exactly the sort of thing I am looking for, but I am interested in comments on how well it works, or in alternatives, as I need to make the right choice first time and as quickly (and cheaply) as possible, since it's not a long-term solution. (Don't want much, do I?)

                                                           

                                                          I don't need any management of the underlying disk storage (HDD and SSD) as I have hardware RAID to handle that. (So as much as ZFS may be a good option, I am not going there, as it doesn't suit the hardware I currently have, from what I can work out.)

                                                           

                                                          I guess with the lack of SRP activity in the industry there aren't a huge number of choices.

                                                            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                              inbusiness

                                                              Correct!

                                                               

                                                              I think Mellanox is waiting for the EOL of the CX-3...

                                                              - Not the CX-3 Pro

                                                              The CX-3 doesn't have VXLAN offload capability.

                                                              In the near future RoCE v2 will be dominant for Ethernet iSER, and therefore Mellanox is dropping support for the CX-3.

                                                               

                                                              Another issue is the lack of support for Solaris COMSTAR for ZFS.

                                                               

                                                              Mellanox vSphere OFED for ESXi 5.x historically supported Solaris COMSTAR properly over InfiniBand.

                                                              But now they don't support Solaris COMSTAR on ESXi 6.0 or above.

                                                               

                                                              I don't know what the problem is caused by...

                                                            • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                              mpogr

                                                              I think we have every right to ask for clarifications from Mellanox at this stage. ophirmaor?

                                                                • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                  jasonc

                                                                  So just summarizing the options:

                                                                   

                                                                  Driver Version | Storage Protocol | Adapter Mode | Adapter Family  | VMware Ver Supported
                                                                  1.8.2.5        | IPoIB+SRP        | VPI Only     | CX-2            | ESXi5.5, ESXi6.0
                                                                  1.8.3beta      | IPoIB+iSER       | VPI Only     | CX-2, CX-3 Pro  | ESXi5.5, ESXi6.0
                                                                  1.9            | iSER             | EN Only      | CX-3 Pro        | ESXi5.5, ESXi6.0
                                                                  2              | IPoIB iSCSI      | VPI Only     | CX-2, CX-3 Pro  | ESXi5.5, ESXi6.0
                                                                  3              | native iSCSI     | EN Only      | CX-3 Pro        | ESXi5.5, ESXi6.0, ESXi6.5
                                                                  4              | native iSCSI     | EN Only      | CX-4            | ESXi5.5, ESXi6.0, ESXi6.5

                                                                  (see later post with corrected chart)

                                                                   

                                                                  IPoIB is TCP/IP in software only (no RDMA) on VPI only
                                                                  No support for any CX-2 solutions

                                                                   

                                                                  So since I don't have a full complement of CX-3 Pro everywhere and only an unmanaged IB switch, I would be best to stick with 1.8.2.5 and SRP under ESXi 6.0.

                                                                  Also, since I need Windows storage support, I would be best to stick with SRP on 2008R2, since there is no iSER support on Windows Server (and no SRP support after 2008R2).

                                                                   

                                                                  I will stick with ESXi 6.0 for now (I probably wouldn't be moving to 6.5 yet anyway), but when I do, it looks like I will need to replace the 40G IB switch with an EN switch and fill out my storage network with CX-3 Pro to get iSER support. I would also need to hopefully find an iSER driver for Server 2012R2 (and/or 2016).

                                                                   

                                                                  Is there any way with ESXi to use the CX-2 as a 10G Ethernet adapter (with an appropriate QSFP-to-SFP adapter), or is there no support at all on ESXi 6.0 (or 6.5)?

                                                                    • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                      mpogr

                                                                      Some corrections to your list:

                                                                      * 1.8.2.4 is for 5.x, 1.8.2.5 is for 6.0

                                                                      * 1.8.3 beta supports both SRP and iSER and can be forcibly installed on 5.x and 6.0

                                                                      * All 1.8.x.x support X2, X3 and X3 Pro. Also, they're the last to support X2. Not sure about the EN mode, haven't tried it.

                                                                      * 1.9.x, 2.x and 3.x support only X3 and X3 Pro, none support X2 or older.

                                                                      * 1.9.x and 3.x support only EN. 1.9.x is the only one supporting iSER.

                                                                      * 2.x is the latest supporting the IB mode (only for X3 and X3 Pro). It may also support the EN mode, but I haven't tested it.

                                                                      * Connect-IB, X4 in the IB mode and X5 aren't supported at all - I think this one is particularly insulting, because it means even relatively new cards are left without ESXi support.

                                                                       

                                                                      I think you're absolutely right with your conclusion to stick with 1.8.2.5 and SRP under ESXi 6.0; this is exactly what I intend to do, even though I have X3 (not Pro) across the board and a managed IB/EN switch. Theoretically, I could use 1.9.x in the EN mode (still on ESXi 6.0) over iSER, but performance wouldn't be on the same level as SRP, and it wouldn't allow me to move to ESXi 6.5 anyway. I don't need any Windows support; my only storage clients are ESXi hosts.

                                                                       

                                                                      As for using X2 as 10Gb NICs, I think this is how they're recognised by the inbox ESXi 6.0 drivers (although not 100% sure). You can give it a shot.
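                                                                      If you do give it a shot, a quick way to check what the inbox driver makes of the card (a sketch only; these are the standard ESXi host-shell commands, and the output naturally depends on your hardware):

```shell
# On the ESXi host shell: confirm the HCA is visible on the PCI bus
lspci | grep -i mellanox

# If an inbox 10GbE driver claimed the card, it should show up as a vmnic
esxcli network nic list

# The driver column here shows which module actually bound to the device
esxcfg-nics -l
```

                                                                      If the card appears as a vmnic bound to an mlx4-family module, it's being treated as a plain Ethernet NIC.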

                                                                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                          jasonc

                                                                          Ok I think I got all those corrections (might help someone later looking this up)

                                                                           

                                                                          Driver Version | Storage Protocol             | Adapter Mode  | Adapter Family       | VMware Ver Supported
                                                                          1.8.2.4        | IPoIB+iSCSI, SRP             | VPI Only, ?EN | CX-2, CX-3, CX-3 Pro | ESXi5.x
                                                                          1.8.2.5        | IPoIB+iSCSI, SRP             | VPI Only, ?EN | CX-2, CX-3, CX-3 Pro | ESXi6.0
                                                                          1.8.3beta      | IPoIB+iSCSI, SRP, IPoIB+iSER | VPI Only      | CX-2, CX-3, CX-3 Pro | ESXi5.1 (ESXi5.5 and 6.0 forced)
                                                                          1.9            | iSCSI, iSER                  | EN Only       | CX-3, CX-3 Pro       | ESXi5.5, ESXi6.0
                                                                          2              | IPoIB+iSCSI                  | VPI, ?EN      | CX-3, CX-3 Pro       | ESXi5.5, ESXi6.0
                                                                          3              | iSCSI                        | EN Only       | CX-3, CX-3 Pro       | ESXi5.5, ESXi6.0, ESXi6.5
                                                                          4              | iSCSI                        | EN Only       | CX-4                 | ESXi5.5, ESXi6.0, ESXi6.5
                                                                          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                            inbusiness

                                                                            If you use vSphere OFED 1.8.2.4, 1.8.2.5 or 1.8.3.0 on ESXi 6.0 with a Solaris

                                                                            COMSTAR SRP or iSER target, you will get an ESXi PSOD.

                                                                             

                                                                            ESXi 6.0 supports Linux targets only.

                                                                             

                                                                            Jahoon Choi


                                                                              • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                mottibeck

                                                                                 

                                                                                I would like to clarify the ESXi over InfiniBand (IB) support topic and sorry for being late to the thread.

                                                                                 

                                                                                IB Para-virtualization (PV) and SR-IOV are supported only in ESXi versions that support the VMKLinux driver model, which means ESXi 6.0 or older. In those versions, the standard IPoIB protocol has been implemented. In ESXi 6.5 (which
                                                                                supports only native drivers), Mellanox plans to add IB-over-SR-IOV support in June '17.

                                                                                 

                                                                                Re. ESXi’s Storage protocols:

                                                                                • SRP runs only over IB, and the latest driver that included new features was 1.8.2.3 over ESXi 5.5, using ConnectX-2, ConnectX-3 & ConnectX-3 Pro. Since then, the SRP driver has been in maintenance mode (meaning Mellanox will only fix issues).

                                                                                 

                                                                                • iSER support comes only over RoCE. ESXi 6.5 includes inbox Mellanox TCP/IP & RoCE drivers at all speeds (10, 25, 40, 50 & 100 Gb/s), which run over ConnectX-4 Lx and ConnectX-4. ConnectX-5 support will be added later this year.

                                                                                 

                                                                                  • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                    inbusiness

                                                                                    Okay, that's the clear answer I wanted to hear.


                                                                                    So the ConnectX-3 family is being dropped from RoCE iSER support on ESXi 6.5 and above... :(


                                                                                    My friend also tested with an MCX354A-FCBT on ESXi 6.5 via the esxcli rdma iser add command,


                                                                                    but it failed to create the iSER initiator.
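                                                                                    For anyone trying the same experiment, the sequence was roughly this (a sketch only; it assumes ESXi 6.5 host-shell access, and whether the iser namespace is even present depends on the driver installed):

```shell
# List the RDMA-capable uplinks the native driver has registered
esxcli rdma device list

# Try to instantiate the software iSER initiator on top of them
esxcli rdma iser add

# If it worked, a new iSER vmhba should appear alongside the iSCSI adapters
esxcli iscsi adapter list
```

                                                                                    With the MCX354A-FCBT (ConnectX-3), the iser add step is where it failed.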

                                                                                    • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                      jasonc

                                                                                      Thanks for the reply.

                                                                                       

                                                                                      So just clarifying a few things:

                                                                                      • So is "IB Para-virtualization (PV)" actually "Paravirtualization of IP over InfiniBand", otherwise known as IPoIB? This is the mechanism that allows us to run IP-based applications on an IB fabric.
                                                                                      • The SR-IOV features allow a physical NIC to be shared in a virtual environment.

                                                                                       

                                                                                      So if you look at my previous table/chart, there was support up to ESXi 6.0 for most protocols and features, because these used the VMKLinux device driver model.

                                                                                       

                                                                                      For those who don't know: VMKLinux was a carry-over from the old ESX days (which required Linux) to allow Linux device drivers to essentially still be used with ESXi (with a few mods), even though in ESXi Linux doesn't technically exist. In ESXi 5.5 VMware released a new "Native Device Driver Model", where drivers interface directly with the VMkernel rather than going through the VMKLinux shim compatibility layer. (VMware released the new device driver model so drivers can be more efficient, flexible and performant; in addition there are more debugging and troubleshooting features, along with support for new capabilities such as hot plugging - see more info here: https://blogs.vmware.com/tap/2014/02/vmware-native-driver-architecture-enables-partners-deliver-simplicity-robustness-performance.html)

                                                                                       

                                                                                      So you mentioned that SRP protocol support has been retired and won't be carried forward to the new driver model, so I am assuming we won't be able to use it on ESXi 6.5 and later versions.

                                                                                       

                                                                                      Also stated was that ESXi 6.5 includes drivers for the CX-4 inbox (and CX-5 drivers are coming later this year), and that these drivers support RoCE and iSER when the adapters are run in Ethernet mode on an Ethernet switch only.

                                                                                       

                                                                                      Question still remaining:

                                                                                      • Because of the new native drivers included inbox with ESXi 6.5, the older VMKLinux drivers will no longer work. Can we disable or remove the native drivers and continue to use the older VMKLinux drivers, similar to what is described here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147565 ?
                                                                                      • Will there be any support for CX-3 and CX-3 Pro adapters under the new native device driver model for ESXi 6.5 and later (and therefore support for iSER with CX-3 and CX-3 Pro, like you provide for CX-4 now and for CX-5 in the future)?
                                                                                      • Will there be any support for IPoIB under the new native device driver model, or is this not being carried forward either?

                                                                                       

                                                                                      Because the way it looks at the moment, if we want to run RDMA-accelerated storage on ESXi 6.5 we will need to purchase CX-4 or later adapters (currently only CX-4) and run them on an Ethernet switch; otherwise we will be stuck at ESXi 6.0, due to the lack of CX-3 drivers under the new device driver model and the inability to use the old ones.
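                                                                                      The KB article I linked boils down to toggling driver modules; a sketch of what I have in mind (the module name nmlx4_core is an example from the Mellanox mlx4 family - I haven't verified that any of this actually lets a vmklinux driver bind on 6.5, which is exactly my question):

```shell
# Disable the native driver module so it cannot claim the device
esxcli system module set --enabled=false --module=nmlx4_core

# Confirm which Mellanox modules are present and enabled
esxcli system module list | grep mlx

# Re-enable it later if needed
esxcli system module set --enabled=true --module=nmlx4_core
```

                                                                                      A reboot is needed after toggling before a different driver could bind to the card.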

                                                                                        • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                          galbitz@albitz.biz

                                                                                          What does it take to enable RoCE in ESXi 6.5 with a ConnectX-4 card? I am guessing iSCSI is the default (vs iSER) and there may be additional config needed.

                                                                                          • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                            erezscop

                                                                                            Hi Jason,

                                                                                            Sorry for not answering sooner, here are answers to your questions, let me know if you have any further questions.

                                                                                             

                                                                                            Jason wrote:

                                                                                            Also stated was that ESXi v6.5 includes drivers for CX-4 inbox (and CX-5 drivers are coming later this year) and that these drivers support ROCE and iSer when adapters are run in ethernet mode on an ethernet switch only.

                                                                                            [ES] RoCE, yes. iSER is not part of the Mellanox native driver. iSER is under evaluation with VMware for future ESXi releases, either as an async driver or as an inbox native driver.

                                                                                             

                                                                                            Jason wrote:

                                                                                            Because of the new native drivers included inbox for ESXi v6.5 the older VMKLinux drivers will not work any longer.  Can we disable or remove these drivers and continue to use the older VMKLinux drivers similar to what is described here https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2147565?

                                                                                            [ES] VMware no longer supports vmklinux with ESXi 6.5, hence Mellanox has not verified the use of vmklinux drivers with ESXi 6.5.

                                                                                             

                                                                                            Jason wrote:

                                                                                            Will there be any support for CX-3 and CX-3Pro adapters under the new Native device driver model for ESXi v6.5 and later? (and therefore support for iSer with CX-3 and CX-3Pro like you provide for CX-4 now (and for CX-5 in the future)?

                                                                                            [ES] ConnectX-3 and ConnectX-3 Pro have been added to the inbox native driver support with ESXi 6.5.

                                                                                             

                                                                                            Jason wrote:

                                                                                            Will there be any support for IPoIB on the new Native device driver model or is this not being carried forward either?

                                                                                            [ES] InfiniBand (IB) is not supported by VMware. Previously, IB was supported through the use of Mellanox's vmklinux driver. With the new native drivers we cannot add IB support; it is up to VMware to add such support. That said, Mellanox will release, in the upcoming June-July version, support for InfiniBand over SR-IOV for ConnectX-4 and ConnectX-5 (note that management of ESXi is still via Ethernet, so the server will require both Ethernet and IB connectivity). With IB over SR-IOV you may use SRP, iSER or IPoIB in the guest OS.
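                                                                                            For reference, the SR-IOV route sketched above would look roughly like this on the host side (a sketch only; nmlx5_core and its max_vfs parameter are the ConnectX-4/5 native driver's names as I understand them - check the driver release notes for your version):

```shell
# Ask the ConnectX-4/5 native driver to expose virtual functions
esxcli system module parameters set -m nmlx5_core -p "max_vfs=4"

# Reboot, then verify the VFs appeared on the PCI bus
lspci | grep -i "virtual function"
```

                                                                                            Each VF is then handed to a VM via PCI passthrough, and the guest OS runs its own SRP/iSER/IPoIB stack over the VF.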

                                                                                             

                                                                                            Erez

                                                                                              • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                                inbusiness

                                                                                                Understood.

                                                                                                Thank you, Erez, for the precise answer.

                                                                                                 

                                                                                                Conclusion:

                                                                                                1st) vNICs must use conventional Ethernet, or RoCE with a supported Ethernet switch.

                                                                                                2nd) SRP, iSER and IPoIB are supported only via SR-IOV in the guest OS.

                                                                                                 

                                                                                                Therefore a VMware admin must use a minimum of 2 separate HCAs per ESXi host: one for Ethernet and one for SR-IOV-based IB protocols.

                                                                                                Is that right?

                                                                                                 

                                                                                                Jaehoon Choi

                                                                                                • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                                  mpogr

                                                                                                  Hi Erez,

                                                                                                   

                                                                                                  If I understand your replies correctly, you're basically saying this: we (Mellanox) used to provide ESXi drivers using certain driver model (vmklinux) that VMware no longer support, hence we're not going to support any of our hardware functionality other than plain Ethernet.

                                                                                                  The problem here is that VMware have been promoting their new driver architecture for quite some time and allowed coexistence of the new and old driver models for a while, so vendors like Mellanox would be able to develop new drivers using the new architecture. However, it looks like Mellanox chose not to bother with implementing the full driver capabilities into the new model, which kind of shows lack of commitment to the ESXi platform. For some reason, this hasn't prompted a massive revision of Mellanox marketing and promotional materials that still clearly state full ESXi support, failing to mention that some of the very significant features of its product lines (e.g. IB support altogether) are no longer supported on ESXi. I call this misleading marketing/borderline false advertising!

                                                                                                   

                                                                                                  Please, don't try to hide behind SR-IOV! We bought Mellanox products to provide fast storage for ESXi datastores, so SR-IOV doesn't help us at all!

                                                                                                   

                                                                                                  Any further clarifications?

                                                                                                    • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                                      inbusiness

                                                                                                      I think so, too.

                                                                                                      I can't understand Mellanox's ESXi driver support over the past 6 years.

                                                                                                      Whenever a new driver is released, some feature disappears.


                                                                                                      I'll raise this situation with the VMware community, too!


                                                                                                      • Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
                                                                                                        erezscop

                                                                                                        Hi,

                                                                                                        I'm sorry you see it this way; Mellanox is actually very much dedicated to ESXi, and works closely with VMware on a day-to-day basis. The new native driver model has limitations compared to the vmklinux driver: it simply does not allow us to support infrastructure that we could before, such as IB and iSER. VMware are aware of the drawbacks of the move from vmklinux, and we are working within the limitations at hand. As stated above, we are trying to bypass the current limitations by offering SR-IOV solutions, while discussing future support for IB and iSER with VMware. I hope to bring news in the near future.

                                                                                                         

                                                                                                        Erez.

• Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
inbusiness

To Erez

SR-IOV was designed to give guest OSes a low-latency network path: an SR-IOV vNIC bypasses the hypervisor kernel so the VM talks directly to the physical switch. It was never originally intended for storage protocols.

So why do you think SR-IOV is the solution to these long-standing ESXi driver support problems?

On vSphere 4.x I had multiple IPoIB vNICs, an SRP initiator, and Ethernet support all on a single HCA port. Those features eliminated so much complexity in cabling, power consumption, and so on.

AND...

One of virtualization's biggest features is that it gives guest OSes and administrators a hardware-driver-independent platform!

Do you want to strip that feature out of the virtualization world?

IPoIB, iSER, SRP inside the guest OS? Do you really want to build and maintain those drivers and functions for every guest OS? How?

You would have to build a new subnet manager that handles both physical and virtual QPs, GUIDs, etc., ship it in new managed switches and HCAs, and only then could you call it a solution!

And we would have to buy new ConnectX-7 switches and HCAs and then wait several thousand years for a new ESXi driver to arrive!!!

• Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
mpogr

Hi Erez,

I understand you're new to Mellanox (you joined in Nov 2016, after I wrote my original post). I'd like to invite you to re-read it and then try to honestly answer a very simple question: has Mellanox actually been dedicated to providing adequate solutions for the VMware platform, as you stated in your latest post? You don't have to share the answer with the rest of us here; just, please, be honest with yourself.

The way I see things, with 6.5 the situation is going from bad to worse (מבכי אל דכי, as they say in broken biblical Hebrew). At least with ESXi 6.0 and below we could use IB switches; with ESXi 6.5 we can't. That means tons of hardware (some of it quite recent and with decent specs) turns into money thrown away.

If you do have any interesting developments in the pipeline, we'd like to hear about them.

Thanks!

• Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
bmac2099

It's now CLEAR Mellanox is doing nothing about SRP.

WE JUST DUMPED our 56Gb/s VPI switches and replaced them with 2 x Arista 7050QX and 2 x Dell S4810 switches.

We dumped SCST/SRP in favor of vSAN on ESXi 6.5, with all the new features we couldn't use before because of the missing driver support in ESXi 6.5. We chucked in an NVMe 860 Pro 500GB cache drive and 8 x 1TB Samsung enterprise 3D SSDs per node (8TB of SSD each) plus Intel 40Gb NICs across 3 vSAN nodes, and it's better than SRP to a 56Gb/s SCST target... YES, BETTER: faster, easier to manage, and so much easier to deploy. SRP InfiniBand IS DEAD!

SRP IS DEAD. THROW OUT YOUR VPI SWITCHES AND GO 40Gb+ ETHERNET...

• Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
galbitz@albitz.biz

Erez, can you clarify what you mean by ESXi 6.5 having RoCE support but not iSER? I thought iSER was just iSCSI over RDMA, and RoCE was RDMA over Converged Ethernet. If we cannot use iSER with RoCE, which storage initiator driver is supported with RoCE to connect back to a storage device?

• Re: Which ESXi driver to use for SRP/iSER over IB (not Eth!)?
inbusiness

Hi!

I think all of these problems originate from VMware's native driver model.

The new native AHCI driver is buggy, so most users disable the native driver, reboot the ESXi host, and fall back to the legacy vmklinux driver.

So I decided to do the same for the Mellanox HCA: disable all of its native drivers, uninstall the inbox driver, and install Mellanox OFED 1.8.2.5 on my ESXi 6.0 host.

                                                                                                   

I'm testing the Mellanox SRP driver 1.8.2.5 on ESXi 6.0 as described below.

                                                                                                   

A. Configuration

All HCAs are MHQH29B-XTR with firmware 2.9.1200
- If the test succeeds, I'll switch all HCAs to ConnectX-3 MCX354A-FCBT

2 x SX6036G FDR14 gateway switches
- Each switch is configured with a different SM prefix for SRP path optimization

OmniOS (latest update) as the ZFS COMSTAR SRP target

                                                                                                   

B. Installation

* Disable the native vRDMA drivers - these are very buggy

esxcli system module set --enabled=false -m=nrdma
esxcli system module set --enabled=false -m=nrdma_vmkapi_shim
esxcli system module set --enabled=false -m=nmlx4_rdma
esxcli system module set --enabled=false -m=vmkapi_v2_3_0_0_rdma_shim
esxcli system module set --enabled=false -m=vrdma
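For convenience, the five disable commands can also be generated with a loop. This is a dry-run sketch added for illustration, not part of the original procedure: it only prints the commands, so you can review the list before piping the output to `sh` on the ESXi host.

```shell
# Dry run: print one disable command per native RDMA module listed above.
# Pipe the output to `sh` on the ESXi host to actually apply it.
for mod in nrdma nrdma_vmkapi_shim nmlx4_rdma vmkapi_v2_3_0_0_rdma_shim vrdma; do
  echo "esxcli system module set --enabled=false -m=$mod"
done
```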

                                                                                                   

* Uninstall the inbox driver - it also can't support Ethernet iSER properly

esxcli software vib remove -n net-mlx4-en
esxcli software vib remove -n net-mlx4-core
esxcli software vib remove -n nmlx4-rdma
esxcli software vib remove -n nmlx4-en
esxcli software vib remove -n nmlx4-core
esxcli software vib remove -n nmlx5-core
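The same dry-run idea works for the VIB removals (again just an illustrative sketch): print the six removal commands in the order listed above, so that dependent VIBs (e.g. net-mlx4-en before net-mlx4-core) come out first.

```shell
# Dry run: print one removal command per inbox Mellanox VIB, keeping the
# order above so dependent VIBs are removed before the ones they depend on.
for vib in net-mlx4-en net-mlx4-core nmlx4-rdma nmlx4-en nmlx4-core nmlx5-core; do
  echo "esxcli software vib remove -n $vib"
done
```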

                                                                                                   

* Install Mellanox OFED 1.8.2.5 for ESXi 6.x

esxcli software vib install -d /var/log/vmware/MLNX-OFED-ESX-1.8.2.5-10EM-600.0.0.2494585.zip

                                                                                                   

* Enable the SRP RDM filter

* Register a VMware NMP path optimization rule
- This step is optional. I'm now using ESOS, a Linux-based SRP target that supports all of the VAAI features...:)

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "SUN" -M "COMSTAR" -P "VMW_PSP_RR" -O "iops=1" -e "Sun ZFS COMSTAR" -c "tpgs_on"

                                                                                                   

esxcli storage core claimrule load
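For reference, here is the same COMSTAR rule with the NMP options pulled out into named variables so it's clearer what each one does. It's a dry-run sketch (it only echoes the command; drop the leading `echo` to apply it on the host).

```shell
SATP="VMW_SATP_ALUA"   # ALUA-capable storage array type plugin
PSP="VMW_PSP_RR"       # round-robin path selection policy
PSP_OPTS="iops=1"      # switch paths after every single I/O
# Dry run: echo the full claim rule instead of executing it.
echo esxcli storage nmp satp rule add -s "$SATP" -V "SUN" -M "COMSTAR" \
  -P "$PSP" -O "$PSP_OPTS" -e "Sun ZFS COMSTAR" -c "tpgs_on"
```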

                                                                                                   

                                                                                                   

Then reboot the ESXi 6.0 host.

All of the ESXi 6.0 hosts have been working properly for 3 days.

I'll report back in this thread in a few weeks..:)

P.S.

If all of the ESXi 6.0 hosts work properly for 1 week, I'll switch every host to ESXi 6.5 for another test...:)

                                                                                                   

PS2 (updated 20 June, 2017)

All of the ESXi 6.0 and 6.5 tests worked perfectly...:)

But ESXi 6.5 P01 has a problem with in-guest SCSI UNMAP that causes VMDK performance problems; the SRP target is not at fault.

I'm now running the latest ESXi 6.0 build with an ESOS SCST SRP target, which gives me perfect performance and is rock solid!

The whole problem was VMware's inbox native vRDMA driver, which conflicts with SRP driver 1.8.2.5 and causes an ESXi 6.x PSOD.

Hang in there!

I think VMware will launch ESXi 6.5 Update 1 in July 2017.

I'm waiting for a native IPoIB InfiniBand iSER driver...:)

But I'm afraid there is a possibility that VMware & Mellanox will discontinue support for the ConnectX-3 HCA...:(

The ESXi 6.5 inbox driver currently supports Ethernet iSER on ConnectX-4 only.

If ConnectX-3 iSER support (IPoIB or Ethernet) is discontinued, I'll change the CX-3 operation mode from FDR InfiniBand to 40Gb Ethernet and switch my lab storage from ESOS to ScaleIO.next...:)
