I'm not sure Mellanox is "favoring" iSER over SRP as a matter of principle; it's just that iSER is more scalable and more robust than SRP with the ConnectX-3 product.
Correct. Currently only the mlnx-NATIVE-ESXi6.0-ConnectX-4 ETH Driver is available, and no VPI/IB support is scheduled for the near future.
I suggest checking our website from time to time to stay updated on product releases.
You said no VPI/IB support is scheduled for the near future.
Then I think you should remove some specifications from your VPI product spec sheet,
i.e. SRP, iSER, and vSphere (it can only support VXLAN offloading on vSphere)... :(
I have to agree with Jae-Hoon above: these cards are Ethernet-only on ESXi, yet the documentation definitely states otherwise. I also show them as released in 2014, so it's really a shame we still do not have that capability. I'd rather see you release SRP support than take it out of your marketing material, though =). Is it that difficult to port your existing code from Linux to ESXi? That is the worst part of this, in my opinion: you have already done it with ConnectX-4 cards under Linux, and you have already done it with ConnectX-3 cards under Linux/VMware. Mellanox seems OK with letting the supported feature set shrink with each product generation, and that is alarming.
I agree with you...:)
I think most of the problems are caused by the changed environment.
First of all, VMware moved to ESXi.
That's a compact hypervisor kernel, not a general-purpose Linux kernel. Back in the vSphere ESX 4.x days, I saw some differences between ESX and ESXi.
Some IB commands weren't supported on ESXi hosts.
The vSphere VMCI feature was also deprecated. I expect that will be a key point: I think RDMA support in the VM environment will come via VMCI on the ESXi hypervisor.
You can find some information on Google with the keyword vRDMA.
Sure! There is some overhead, but this is not an HPC environment, and it is still faster than a general-purpose Ethernet protocol.
The same goes for VPI+EN multi-function support.
SR-IOV in virtualized environments will be a major interface for high-performance VM networking.
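In case it helps anyone reading along, here is a minimal sketch of how SR-IOV virtual functions are usually enabled for the Mellanox native driver on an ESXi host. The module name (nmlx5_core for ConnectX-4) and the max_vfs parameter are assumptions from my own setup, so please verify them against your driver's user manual.

  # Check which parameters the native ConnectX-4 driver module accepts (assuming the module is nmlx5_core)
  esxcli system module parameters list -m nmlx5_core

  # Request 8 virtual functions; the exact parameter name can differ between driver releases
  esxcli system module parameters set -m nmlx5_core -p "max_vfs=8"

  # Reboot the host, then confirm the SR-IOV capable NICs show up
  esxcli network sriovnic list

After that, the VFs can be assigned to VMs as SR-IOV passthrough adapters in the vSphere Web Client.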