I see ( http://www.mellanox.com/page/osv_support_ib ) that Infiniband MLNX_OFED 2.0 is supported on XenServer 6.1.0.
Will MLNX_OFED 2.0 be supported on 6.2.0?
No due date yet, but it's on the roadmap.
Can I use InfiniBand in Dom0 (the host) with remote disk storage (e.g. iSCSI/iSER or SRP)?
Can I use InfiniBand in Dom0 (the host) for networking (e.g. IPoIB)?
Can I use InfiniBand in Dom0 (the host) for live migration (XenMotion) of DomU (e.g. accelerating IP over InfiniBand with SDP)?
This sounds like it should be feasible from a stack point of view, however I am not aware of a current working implementation.
Can an InfiniBand card (e.g. its QPs and CQs) be virtualized and exposed to DomU (the guest)? (See http://xen.xensource.com/files/summit_3/xs0106_virtualizing_infiniband.pdf and http://www.mellanox.com/pdf/presentations/xs0106_infiniband.pdf .) Has there been any progress since 2006?
I don't believe we have a product or solution similar to this, other than SR-IOV, which going forward I believe will be the preferred implementation for this purpose.
Is SR-IOV for DomU (guest) supported (with MLNX_OFED 2.0) in XenServer for all ConnectX cards?
I am not sure it will be all ConnectX cards. The original ConnectX is EOL, but I am confident the ConnectX-2 and ConnectX-3 generations are supported.
I just need to clear a few things up.
We have been using InfiniBand with XenServer (XS) since v6.0; below are our findings:
OFED 1.5.4 compiles and runs on XS v6.0. IPoIB is the only functioning transport mechanism; SDP, SRP and iSER DO NOT WORK. There are naming conflicts between the OFED stack and the Xen kernel which prevent the SRP modules from loading, and whilst the iSER module loads, the system crashes when you attempt to create a connection using iSER as the transport.
This is due to a bug in the kernel tree that was introduced in v6 and never rectified (as InfiniBand is outside vendor support for XenServer); I note that iSER reportedly was working correctly in XS 5.6.
In XS 6.1, OFED fails to compile due to duplicated names in the network stack; we managed to hack 1.5.4 into compiling. I note this is when Mellanox introduced OFED 2.0; however, that only supports the later-model cards, so if you run 10Gb InfiniBand hardware, it won't work. Once again, IPoIB is the only transport that works.
In XS 6.2, our hacked version of OFED 1.5.4 compiles fine; however, OFED 2.0 breaks. I believe there is a method within the 2.0 packaging to compile against a new kernel, but we did not try this.
We have been mulling over running dom0 passthrough and setting up storage on a virtual machine; however, this is currently a hack and is not past alpha level yet.
In summary, it would be really nice for Mellanox to work with the new (and improved) Xen open-source project to get InfiniBand drivers into the kernel. It makes sense for the best storage transport to function in the most deployed hypervisor.
Following your advice, I have tried to compile OFED v1.5.4 in the XenServer environment, but:
- ./mlnx_add_kernel_support.sh stops with the error "kernel-ib was not created", yet the kernel-ib and kernel-ib-devel RPMs are created; I found them in the subdirectory /root-of-mlx-tmp-work/MLNX_OFED_SRC-1.5.3-4.0.42/RPMS/centos-release-5-7.el5.centos/i686/
At the end of my test I had the compiled packages "kernel-ib" and "ofed-scripts", and I installed them into XenServer.
During boot some modules aren't loaded; the errors displayed on openibd startup are:
Loading Mellanox MLX4 HCA driver: [FAILED]
Loading Mellanox MLX4_EN HCA driver: [FAILED]
Loading HCA driver and Access Layer: [FAILED]
BUT, if I run "service network restart", the system brings up my ib0 device.
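A quick way to see which pieces of the stack actually came up is to check /proc/modules directly; the module names below are assumptions based on the standard OFED 1.5.x stack:

```shell
# Check whether the usual OFED/IPoIB modules are loaded.
# Module names (mlx4_core, mlx4_ib, ib_ipoib) are assumed from the
# standard OFED 1.5.x stack; adjust for your hardware.
for m in mlx4_core mlx4_ib ib_ipoib; do
    if grep -q "^$m " /proc/modules; then
        echo "$m loaded"
    else
        echo "$m missing"
    fi
done
```

If ib0 only appears after `service network restart`, comparing this output before and after the restart shows which module the restart is pulling in.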
Now I have tried with OFED 1.5.4 for CentOS 5.7.
But is this correct?
Which version of OFED should be used to get it to "compile fine"?
It looks like you are trying to install OFED 1.5.3, not 1.5.4.
If you are using ConnectX-2/3 cards, you can simply install OFED 2.0.
If you are using the older 10Gb cards, you'll need to install 1.5.4 (1.5.3 may work, but I have not tried to get it working). There are some conflicts you will need to resolve first in the ofa_kernel tree.
Remove line 9 (definition of 'IS_ERR_OR_NULL') from:
Remove 'netif_set_real_num_tx_queues' function definition from:
Simply run ./install.pl --all and the RPMs will be built.
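The edits above can be scripted. The exact file paths inside the ofa_kernel tree vary by release (and are not given above), so the sketch below works on a throwaway copy to stay runnable; in the real source tree you would grep for the clashing symbols and delete the duplicate definitions the same way before running the install:

```shell
# Demonstration on a throwaway file. In the real OFED 1.5.4 tree, grep -rn
# across the ofa_kernel sources for the symbols that clash with the
# XenServer kernel headers, then delete the duplicate lines as shown here.
mkdir -p /tmp/ofa_demo
printf '%s\n' '#define IS_ERR_OR_NULL(p) ((p) == NULL)' > /tmp/ofa_demo/compat.h
# Locate the duplicate definition:
grep -n 'IS_ERR_OR_NULL' /tmp/ofa_demo/compat.h
# Remove it, then confirm it is gone:
sed -i '/IS_ERR_OR_NULL/d' /tmp/ofa_demo/compat.h
grep -q 'IS_ERR_OR_NULL' /tmp/ofa_demo/compat.h || echo 'duplicate removed'
# With the tree clean, build the RPMs from the OFED source root:
# ./install.pl --all
```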
Dave, many thanks for your answer!
Sorry, but I'm very confused about the OFED versions.
I have downloaded the latest available 1.5 release from the Mellanox website, and it is 1.5.3;
1.5.4 is unavailable there.
Version 2.0 is only available on the Mellanox website, while on openfabrics.org there are only 1.5 and 3.5!
Now, how can I get the right versions?
- 1.5.4 from openfabrics.org
- 2.0 from mellanox.com
Yes, 2.0 is from the Mellanox site -> http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
1.5.4 is a little trickier to find; it is on the OFED site. You can find the entire version tree here -> http://www.openfabrics.org/downloads/OFED/
I have been working on getting this up and running. We're using XenServer 6.2.
I made the changes that you suggested, and when I attempt to install the OFED package (modified 1.5.4) I get this line:
/lib/modules/<dom0 kernel version>xen/build/scripts is required to build kernel-ib RPM.
Can someone please help me understand why this is coming up? I assume the file or directory is missing. Why would this be?
Are you building this in the DDK VM, or on a production installation of XS?
Production XS doesn't have the dev libs.
This is in the prod version. When I first attempted the install it listed a long list of packages that were missing. I installed them by modifying the repo list and using yum. Among these were several devel sets. Any idea which package I'm missing?
Thank you for the reply.
Here is a better question: when the install is complete on the DDK VM, does the process create an RPM that I can install on the production version?
This is my first foray into the InfiniBand world. We bought the 10Gb cards and are looking to use InfiniBand as the back-end transport for iSCSI storage. From what I've read, these cards only support IPoIB. Even so, we should see a performance gain over 10GbE, yes?
Our storage system recognized and installed drivers for the card with no issue (DSS V6).
We're not looking to use InfiniBand to connect the XenServers, only to access the iSCSI SAN.
It's unfortunate that Mellanox isn't supporting these cards in OFED 2.0. A lot of home users and small businesses would benefit from it. We can't afford to spend $15,000 on a back-end transport system.
Thank you for taking the time to help.
When you compile OFED 1.5.4 on the DDK, it will build all the required RPMs for you to install on your production systems.
To be honest, if you can turn back now, I would not use InfiniBand with XenServer. Performance with 10Gb InfiniBand and IPoIB will be equivalent to 4Gb Fibre Channel. If you use ConnectX-2 cards and switches, performance is similar to 10Gb Ethernet (IPoIB).
If you use a hypervisor with native InfiniBand support (KVM, Hyper-V), you'll be able to use your infrastructure to its full potential.
If you can wait, XenServer will be released as a toolstack sometime this year. That means it will work with the latest vanilla kernel, which has native InfiniBand support built in.
Thank you for the advice!
The cards and switches we picked up are all DDR, but it seems we would still be taking a step backwards until InfiniBand is natively supported in the XenServer kernel.
Thank you again for your help.
In looking a bit further, there is no build directory in that tree. Why would the install.pl script reference it unless it should be there?
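One way to confirm what install.pl is complaining about, assuming the path layout from the error message, is to check for the kernel build tree directly. On a production XenServer install it is normally absent (no dev libs), while the DDK VM ships it:

```shell
# Check for the kernel build tree that the kernel-ib RPM build needs.
# The path is taken from the install.pl error message; on production XS
# it is usually a dangling or missing symlink.
KDIR="/lib/modules/$(uname -r)/build"
if [ -d "$KDIR/scripts" ]; then
    echo "build tree present: $KDIR"
else
    echo "build tree missing: build inside the DDK VM, or install the matching kernel-devel package"
fi
```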