HowTo Install Mirantis OpenStack 8.0 with Mellanox ConnectX-3 Pro Adapters Support (InfiniBand Network)


    This post describes how to set up and configure Mirantis Fuel version 8.0 (OpenStack Liberty based on Ubuntu 14.04 LTS) over an InfiniBand 56Gb/s network, with SR-IOV mode enabled for the VMs on the compute nodes and iSER (iSCSI over RDMA) transport mode enabled on the storage nodes.

     

    To access the Ethernet 56Gb/s How-to manual, refer to the HowTo Install Mirantis Fuel 8.0 OpenStack with Mellanox Adapters Support (Ethernet Network).

    Related References

     

    Before reading this post, make sure you are familiar with Mirantis Fuel 8.0 installation procedures.

     

    Setup Diagram

     

     

     

    Note: All nodes, including the Deployment Node, should be connected to all five networks.
    Note: The server’s IPMI and the switch management interface wiring and configuration are out of the scope of this post.

     

    Ensure that there is management access (SSH) to the Mellanox InfiniBand switch to perform the configuration.

     

    Setup Hardware Requirements

    Component: Deployment node
    Quantity: 1
    Description: DELL PowerEdge R620
      • CPU: 2 x E5-2650 @ 2.00GHz
      • MEM: 128 GB
      • HD: 2 x 900GB SAS 10k in RAID-1
      • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

    Component: Cloud Controller and Compute servers (3 x Controllers, 3 x Computes)
    Quantity: 6
    Description: HP DL360p G8
      • CPU: 2 x E5-2660 v2 @ 2.20GHz
      • MEM: 128 GB
      • HD: 2 x 450GB 10k SAS (RAID-1)
      • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

    Component: Cloud Storage server
    Quantity: 1
    Description: Supermicro X9DR3-F
      • CPU: 2 x E5-2650 @ 2.00GHz
      • MEM: 128 GB
      • HD: 24 x 6Gb/s SATA Intel SSD DC S3500 Series 480GB (SSDSC2BB480G4)
      • RAID Ctrl: LSI Logic MegaRAID SAS 2208 with battery
      • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

    Component: Admin (PXE) and Public switch
    Quantity: 1
    Description: 1Gb switch with VLANs configured to support both networks

    Component: InfiniBand switch
    Quantity: 1
    Description: Mellanox SX1710 VPI 36-port 56Gb/s switch configured in InfiniBand mode

    Component: Cables
    Description:
      • 16 x 1Gb CAT-6e cables for the Admin (PXE) and Public networks
      • 8 x 56Gb/s copper cables up to 2m (MC2207130-XXX)

     

    Note: You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.

    Note: Please make sure that the Mellanox switch is specified as an InfiniBand switch.

    Storage Server RAID Setup

    • Two SSD drives in bays 0-1, configured in RAID-1 (mirror): the OS will be installed on this array.
    • Twenty-two SSD drives in bays 3-24, configured in RAID-10: the Cinder volume will be created on this RAID array.


    Physical Network Setup

    1. Connect all nodes to the Admin (PXE) 1GbE switch, preferably through the on-board eth0 interface.
      We recommend recording the MAC addresses of the Controller and Storage servers to make the Cloud installation easier (see the Controller Node section under the Nodes tab below).

      Note: All cloud servers should be configured to run PXE boot over the Admin (PXE) network.

    2. Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
    3. Connect Port #1 (ib0) of ConnectX-3 Pro to an InfiniBand switch (Private, Management, Storage networks).

      Note: The interface names (eth0, eth1, p2p1, etc.) can vary between servers from different vendors.

      Note: In this article, the Subnet Manager (OpenSM) runs on the Deployment node, but it can run on any server that is connected to the InfiniBand (SwitchX) switch.

    Rack Setup Example


    Deployment, Compute and Controller Nodes


    Storage Node
    The wiring is the same as for the Compute and Controller nodes.

    Note: Refer to the MLNX-OS User Manual to become familiar with switch software (located at support.mellanox.com).
    Note: Before starting to use the Mellanox switch, we recommend that you upgrade the switch to the latest MLNX-OS version.

     

    Networks Allocation (Example)

    The example in this post is based on the network allocation defined in this table:

    Network: Admin (PXE)
    Subnet/Mask: 10.20.0.0/24
    Gateway: N/A
    Notes: This network is used to provision and manage Cloud nodes by the Fuel Master. The network is enclosed within a 1Gb switch and has no routing outside. 10.20.0.0/24 is the default Fuel subnet and we use it with no changes.

    Network: Management
    Subnet/Mask: 192.168.0.0/24
    Gateway: N/A
    Notes: This is the Cloud Management network. The network uses VLAN 2 in the SX1710 over the 40/56Gb interconnect. 192.168.0.0/24 is the default Fuel subnet and we use it with no changes.

    Network: Storage
    Subnet/Mask: 192.168.1.0/24
    Gateway: N/A
    Notes: This network is used to provide storage services. The network uses VLAN 3 in the SX1710 over the 40/56Gb interconnect. 192.168.1.0/24 is the default Fuel subnet and we use it with no changes.

    Network: Public and Neutron L3
    Subnet/Mask: 10.7.208.0/24
    Gateway: 10.7.208.1
    Notes: The Public network is used to connect Cloud nodes to an external network. Neutron L3 is used to provide Floating IPs for tenant VMs. Both networks are represented by IP ranges within the same subnet, with routing to external networks.


    All Cloud nodes will have a Public IP address. In addition, you should allocate two more Public IP addresses:

    • One IP is required for HA functionality.
    • The virtual router requires an additional Public IP address.

    We do not use a virtual router in our deployment, but we still need to reserve a Public IP address for it. So the Public network range is the number of cloud nodes + 2. For our example with 7 Cloud nodes, we need 9 IPs in the Public network range.

    Note: Consider a larger range if you are planning to add more servers to the cloud later.

    In our build, we will use the 10.7.208.53 >> 10.7.208.76 IP range for both Public and Neutron L3.

    IP allocation will be as follows:

    • Deployment node: 10.7.208.53
    • Fuel Master IP: 10.7.208.54
    • Public Range: 10.7.208.55 >> 10.7.208.63 (7 used for physical servers, 1 reserved for HA and 1 reserved for virtual router)
    • Neutron L3 Range: 10.7.208.64 >> 10.7.208.76 (used for the Floating IP pool)

     

    The scheme below illustrates our setup IP allocation.

     

    Install Deployment Server

    In our setup we installed the CentOS 7.2 64-bit distribution. We used the CentOS-7-x86_64-Minimal-1511.iso image with the Minimal configuration and will install any missing packages later.

    The two 1Gb interfaces are connected to Admin (PXE) and Public networks:

      • em1 (first interface) is connected to Admin (PXE) and configured statically.
        The configuration will not actually be used, but will save time on bridge creation later.
        • IP: 10.20.0.1
        • Netmask: 255.255.255.0
        • Gateway: N/A
      • em2 (second interface) is connected to Public and configured statically:
        • IP: 10.7.208.53
        • Netmask: 255.255.255.0
        • Gateway: 10.7.208.1
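
    For reference, a minimal sketch of the matching CentOS 7 ifcfg files is shown below. The interface names (em1/em2) follow this example setup and may differ on your hardware.

    /etc/sysconfig/network-scripts/ifcfg-em1 (Admin/PXE):
      DEVICE=em1
      TYPE=Ethernet
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=10.20.0.1
      NETMASK=255.255.255.0

    /etc/sysconfig/network-scripts/ifcfg-em2 (Public):
      DEVICE=em2
      TYPE=Ethernet
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=10.7.208.53
      NETMASK=255.255.255.0
      GATEWAY=10.7.208.1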

     

    InfiniBand Configuration

    Since the InfiniBand-compliant Subnet Manager and Administration (OpenSM) runs on the Deployment node, Mellanox OFED should be installed on this server and the PKeys should be configured in the partitions.conf file. No extra configuration is required on the switch side.

    Note: All the PKeys should be predefined in the partitions.conf file (/etc/opensm/partitions.conf).

    All PKeys are open on all server ports (as specified in the SM configuration). You can choose which VLAN you want for which network; the mapping between VLANs and PKeys is done in the background.

    1. Download the Mellanox OFED Driver.

      Download the MLNX_OFED version 3.2 ISO file from the MLNX_OFED Download Center to the /tmp directory on the Deployment node.
    2. Install MLNX_OFED version 3.2.

    a. Install dependencies for MLNX_OFED.

    # yum install gtk2 tcl gcc-gfortran tk

     

    b. Create a new folder on which to mount the downloaded ISO image.

    # mkdir /tmp/ofed

     

    c. Go to /tmp folder and mount the downloaded ISO image file.

    # mount -o loop /tmp/MLNX_OFED_LINUX-3.2-2.0.0.0-rhel7.2-x86_64.iso /tmp/ofed/

     

    d. Go to the folder where the MLNX_OFED ISO image is mounted.

    # cd /tmp/ofed

     

    e. Install MLNX_OFED as follows:

    # ./mlnxofedinstall --all

     

    f. Reboot the Deployment node or use the following command:

    # /etc/init.d/openibd restart

    For more information, please refer to the official how-to.
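
    To sanity-check the installation (an optional step, not part of the original procedure), you can print the installed OFED version and the HCA port state:

    # ofed_info -s
    # ibstat

    The ibstat output should show the ConnectX-3 Pro ports; the port connected to the InfiniBand switch becomes Active once the Subnet Manager is running (next step).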

     

    3. Complete the InfiniBand OpenSM Configuration.
        In this step we are going to create two files on the Deployment node (where OpenSM runs): /etc/opensm/partitions.conf and /etc/opensm/opensm.conf.

      1. Create a new opensm.conf file.
        # opensm -c /etc/opensm/opensm.conf
      2. Enable virtualization by editing /etc/opensm/opensm.conf and changing the allow_both_pkeys value to TRUE.
        allow_both_pkeys TRUE
      3. Define the partition keys (PKeys), which are the InfiniBand analog of Ethernet VLANs. Each VLAN will be mapped to one PKey.
        Add or change them by editing the partitions file:
        # vi /etc/opensm/partitions.conf
        Example:
        management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
        vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
        vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
        vlan3=0x3, ipoib, sl=0, defmember=full : ALL;
        vlan4=0x4, ipoib, sl=0, defmember=full : ALL;
        vlan5=0x5, ipoib, sl=0, defmember=full : ALL;
        vlan6=0x6, ipoib, sl=0, defmember=full : ALL;
        vlan7=0x7, ipoib, sl=0, defmember=full : ALL;
        vlan8=0x8, ipoib, sl=0, defmember=full : ALL;
        vlan9=0x9, ipoib, sl=0, defmember=full : ALL;
        vlan10=0xa, ipoib, sl=0, defmember=full : ALL;
        . . .
        vlan100=0x64, ipoib, sl=0, defmember=full : ALL;

        Note: In this example:

        VLAN 2 is assigned to PKey 0x2 and will be used for the OpenStack Management network.

        VLAN 3 is assigned to PKey 0x3 and will be used for the OpenStack Storage network.

        VLANs 4 through 100 are assigned to PKeys 0x4 to 0x64 and will be used for the Tenant networks.

      4. Restart the OpenSM.
        # /etc/init.d/opensmd restart
        Note: VLAN 1 is defined but not used; it is kept for consistency with the Ethernet setup installation.
        Note: The maximum number of VLANs is 128.
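        As an optional check (not part of the original procedure), verify that OpenSM is running and that the PKeys were pushed to the HCA port. The mlx4_0 device name below is an assumption and may differ on your system.
        # /etc/init.d/opensmd status
        # sminfo
        # cat /sys/class/infiniband/mlx4_0/ports/1/pkeys/*
        The last command should list the PKeys defined in partitions.conf.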

     

    Configure Deployment Node for Running Fuel VM

    Log in to the Deployment node via SSH or locally and perform the actions listed below:

    1. Disable the Network Manager.

      # sudo systemctl stop NetworkManager.service
      # sudo systemctl disable NetworkManager.service

    2. Install the packages required for virtualization.
      # sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager
    3. Install the packages required for the X server.
      # sudo yum install xorg-x11-xauth xorg-x11-server-utils xclock
    4. Reboot the Deployment server.
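      After the reboot, you can optionally confirm that the libvirt daemon is enabled and running before continuing (a quick check, not part of the original procedure):
      # sudo systemctl enable libvirtd
      # sudo systemctl start libvirtd
      # sudo systemctl status libvirtd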

     

    Create and Configure a New VM To Run the Fuel Master Installation

    1. Start virt-manager.

      # virt-manager

    2. Create a new VM using the virt-manager wizard.
    3. During the creation process, provide the VM with four cores, 8GB of RAM, and a 200GB disk.
      Note: For details see Fuel Master node hardware requirements section in Mirantis OpenStack Planning Guide — Mirantis OpenStack v8.0 | Documentation.
    4. Mount the Fuel installation disk to VM virtual CD-ROM device.
    5. Configure networking so that the Fuel VM has two NICs connected to the Admin (PXE) and Public networks (a command-line alternative to the wizard is sketched after this list):
      1. Use virt-manager to create the bridges:
        1. br-em1 - Admin (PXE) network bridge.
          It is used to connect the Fuel VM's eth0 to the Admin (PXE) network.
        2. br-em2 - Public network bridge.
          It is used to connect the Fuel VM's eth1 to the Public network.
      2. Connect the Fuel VM's eth0 interface to br-em1.
      3. Add an eth1 network interface to the Fuel VM and connect it to br-em2.

        Note: You can use any other names for the bridges. In this example, the names were chosen according to the names of the connected physical interfaces of the Deployment node.
    6. Save settings and start VM.
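
    If you prefer the command line over the virt-manager wizard, a roughly equivalent virt-install invocation is sketched below. The VM name, disk path, and ISO file name are examples only and should be adapted to your environment; the br-em1/br-em2 bridges must already exist.

      # virt-install --name fuel-master \
          --vcpus 4 --ram 8192 \
          --disk path=/var/lib/libvirt/images/fuel-master.qcow2,size=200 \
          --cdrom /tmp/MirantisOpenStack-8.0.iso \
          --network bridge=br-em1 \
          --network bridge=br-em2 \
          --graphics vnc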

     

    Install the Fuel Master from ISO Image:

    Note: Do not start the other nodes (besides the Fuel Master) until the Mellanox plugin is installed.
    1. Boot Fuel Master Server from the ISO image as a virtual DVD (click here to download ISO image).
    2. Choose option 1 and press the TAB key to edit the default options:

      Remove the default gateway (10.20.0.1).
      Change the DNS to 10.20.0.2 (the Fuel VM IP).
      Add "showmenu=yes" to the end of the line.
      The tuned boot parameters should look like this:

      Note: Do not try to change eth0 to another interface or deployment may fail.
    3. The Fuel VM will reboot itself after the initial installation completes, and the Fuel Menu will appear.
      Note: Ensure that the VM boots from the local disk and not from the CD-ROM. Otherwise, the installation will start again from the beginning.
    4. Network setup:
      1. Configure eth0 - PXE (Admin) network interface.
        Ensure the default Gateway entry is empty for the interface – the network is enclosed within the switch and has no routing outside.
        Select Apply.
      2. Configure eth1 – Public network interface.
        The interface is routable to LAN/internet and will be used to access the server. Configure static IP address, netmask and default gateway on the public network interface.
        Select Apply.
    5. Set the PXE Setup.
      The PXE network is enclosed within the switch. Do not make any changes, proceed with defaults.

      Press the Check button to ensure no errors are found.
    6. Set the Time Sync.
      Choose Time Sync tab on the left menu
      Configure NTP server entries suitable for your infrastructure.
      Press Check to verify settings.
    7. Proceed with the installation.
      Navigate to Quit Setup and select Save and Quit.

      Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
    8. Configure the Fuel Master VM SSH server to allow connections from the Public network.
      By default, Fuel accepts SSH connections from the Admin (PXE) network only.
      Follow the steps below to allow connections from the Public network:
      1. Use virt-manager to access Fuel Master VM console
      2. Edit sshd_config:
        # vi /etc/ssh/sshd_config
      3. Find and comment out the line:
        ListenAddress 10.20.0.2
      4. Restart sshd:
        # service sshd restart
    9. Access Fuel:
      • Web UI: http://10.7.208.54:8000 (username/password: admin/admin)
      • SSH: connect to 10.7.208.54 (username/password: root/r00tme)

     

     

    Install Mellanox Plugin

    The Mellanox plugin configures support for Mellanox ConnectX-3 Pro network adapters, enabling high-performance SR-IOV networking for compute traffic and iSER (iSCSI over RDMA) block storage networking, which reduces CPU overhead, boosts throughput, reduces latency, and allows network traffic to bypass the software switch layer.

     

    Follow the steps below to install the plugin. For the complete instructions, please refer to: HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0.

    1. Log in to the Fuel Master, download the Mellanox plugin RPM from http://plugins.mirantis.com/repository/m/e/mellanox-plugin/mellanox-plugin-3.0-3.0.0-1.noarch.rpm, and store it on your Fuel Master server:
      [root@fuel ~]# wget http://bgate.mellanox.com/openstack/openstack_plugins/fuel_plugins/8.0/plugins/mellanox-plugin-3.0-3.0.0-1.noarch.rpm
    2. Install the plugin from the download directory:
      # fuel plugins --install=mellanox-plugin-3.0-3.0.0-1.noarch.rpm
      Note: The Mellanox plugin replaces the current bootstrap image; the original image is backed up in /opt/old_bootstrap_image/.
    3. Verify that the plugin was successfully installed. It should be displayed when running the fuel plugins command (the exact commands are shown after this list).
    4. Create a new bootstrap image to enable Mellanox HW detection by running the create_mellanox_vpi_bootstrap command:
      # create_mellanox_vpi_bootstrap
    5. Reboot all nodes, including nodes that have already been discovered.
      To reboot already discovered nodes, run:
      # reboot_bootstrap_nodes -a
    6. Run the fuel nodes command to check if there are any discovered nodes already.
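
    For reference, the verification commands mentioned in steps 3 and 6 are run on the Fuel Master as follows:

      [root@fuel ~]# fuel plugins
      [root@fuel ~]# fuel nodes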

     

     

    Creating a new OpenStack Environment

    Open the Fuel UI in a web browser (for example: http://10.7.208.54:8000) and log in using admin/admin as the username and password.

     

    1. Create a new environment in the Fuel dashboard. A configuration wizard will start.
    2. Configure the new environment wizard as follows:
        • Name and Release
          • Name: SR-IOV IB
          • Release: Liberty on Ubuntu 14.04
        • Compute
          • QEMU-KVM
        • Network
          • Neutron with VLAN segmentation
        • Storage Backend
          • Block storage: LVM
        • Additional Services
          • None
        • Finish
          • Click Create button
    3. Click on the new environment created and proceed with environment configuration.

     

    Configuring the OpenStack Environment

    Settings Tab

    1. Enable the KVM hypervisor type.
      KVM is required to enable the Mellanox OpenStack features.
      Open the Settings tab, select the Compute section, and then choose the KVM hypervisor type.

    2. Enable the desired Mellanox OpenStack features.
      1. Open the Other section.
      2. Select the relevant Mellanox plugin version if you have multiple versions installed.

      3. Enable SR-IOV support:
        1. Check SR-IOV direct port creation in private VLAN networks (Neutron).
        2. Set the desired number of virtual NICs.

          Note: Relevant for VLAN segmentation only.
          Note: The number of virtual NICs is the number of virtual functions (VFs) that will be available on each Compute node.
          Note: One VF will be utilized for the iSER storage transport if you choose to use iSER. In that case, one less VF will be available for Virtual Machines.
        3. Support quality of service over VLAN networks and ports (Neutron).
          If selected, Neutron "Quality of Service" (QoS) will be enabled for VLAN networks and ports over Mellanox HCAs.
          Note: This feature is supported only in Ethernet setups. It is NOT supported in InfiniBand mode.
        4. iSER protocol for volumes (Cinder).
          By enabling this feature, you will use the iSER block storage transport instead of iSCSI.
          iSER stands for iSCSI Extensions for RDMA; it improves latency and bandwidth and reduces CPU overhead.
          Note: A dedicated Virtual Function will be reserved for the storage endpoint, and priority flow control has to be enabled on the switch-side port.

     

    Nodes Tab

    Servers Discovery by Fuel

    This section assigns Cloud roles to servers. The servers must be discovered by Fuel first, so make sure they are configured for PXE boot over the Admin (PXE) network. When done, reboot the servers and wait for them to be discovered. Discovered nodes are counted in the top right corner of the Fuel dashboard.

     

    Now you may add the UNALLOCATED NODES to the setup.
    Add the Controller and Storage nodes first, and then the Compute nodes.

     

    1. Access the Nodes tab in your environment.
    2. Click on the cog wheel on the right of a node.
    3. Verify that the driver is eth_ipoib and the bus info is an IB interface.
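
    If you want to cross-check this from a node's console, ethtool reports the driver bound to an interface (replace <interface> with the InfiniBand-backed interface name shown in the Fuel UI):

      # ethtool -i <interface>

    The output should include driver: eth_ipoib for the InfiniBand-backed interface.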

     

    Add Controller Nodes

    1. Click Add Node.
    2. Identify the three controller nodes. Use the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign these nodes the Controller role.
    3. Click the Apply Changes button.

    Add Storage Node

    1. Click Add Node.
    2. Identify your storage node. Use the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. In our example this is the only Supermicro server, so identification by vendor is easy. Select this node to be a Storage - Cinder node.
    3. Click the Apply Changes button.

    Add Compute Nodes

    1. Click Add Node.
    2. Select all the nodes that are left and assign them the Compute role.
    3. Click Apply Changes.

     

    Configure Interfaces

    In this step, each network must be mapped to a physical interface for each node. You can choose and configure multiple nodes in parallel.

    If there are hardware differences between the selected nodes (such as the number of network ports), bulk configuration is not allowed. If you attempt a bulk configuration, the Configure Interfaces button will show an error icon.

     

    Select all Controller and Compute nodes (6 nodes in this example) in parallel. The 7th node (the Supermicro storage node) will be configured separately.

     

    In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.

    The Storage, Private, and Management networks should run on the ConnectX-3 adapter's 40/56Gb port. For example:

    Note: In some cases, the speed of the Mellanox interface may be shown as 10Gb/s.

    Click Back To Node List and configure the networks of the Storage node in the same way.

     

    Configure disks

    Note: There is no need to change the defaults for the Controller and Compute nodes unless changes are required. For the Storage node, it is recommended to allocate only the high-performing RAID array to Cinder storage; the small disk should be allocated to the Base System.
    1. Select the Storage node.
    2. Press the Disk Configuration button.
    3. Click on the sda disk bar, set the Cinder allowed space to 0 MB, and make the Base System occupy the entire drive.
    4. Click Apply.

    Networks Tab

     

    Section Node Network Group - default

    Public:

    In our example, the nodes' Public IP range is 10.7.208.55-10.7.208.63 (7 IPs used for the physical servers, 1 reserved for HA, and 1 reserved for the virtual router).

    See Network Allocation section at the beginning of this doc for details.

    The rest of the Public IP range will be used for Floating IPs in the Neutron L3 section below.
    In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.

     

     

     

    Storage:

    In this example, we select VLAN 3 for the storage network. The CIDR is left untouched.

     

    Management:

    In this example, we select VLAN 2 for the management network. The CIDR is left untouched.

    Section Settings

    Neutron L2:

    In this example, we set the VLAN range to 4-100. It should be aligned with the VLAN-to-PKey configuration defined in partitions.conf (above).
    The base MAC is left untouched.

     

    Neutron L3:

    Floating Network Parameters: the floating IP range is a continuation of our Public IP range. In our deployment we use the 10.7.208.64 - 10.7.208.76 IP range.

    Internal Network: Leave CIDR and Gateway with no changes.

    Name servers: Leave DNS servers with no changes.

    Other:

    We assign a Public IP to all nodes. Make sure Assign public network to all nodes is checked.

    We use the Neutron L3 HA option.

    For the deployment we will also use the 8.8.8.8 and 8.8.4.4 DNS servers.

    Save Configuration

    Click Save Settings at the bottom of the page.

     

    Verify Networks

    Click Verify Networks.
    You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log file for troubleshooting.

    Note: If your public network runs a DHCP server, you may experience a verification failure. If the range selected for the cloud above does not overlap with the DHCP pool, you can ignore this message. If an overlap exists, please fix it.


     

    Deployment

    Click the Deploy Changes button and follow the installation progress on the Nodes tab and in the logs.

    OS installation will start.

     

    When the OS installation is finished, OpenStack installation will start on the first controller.

    OpenStack will then be installed on the rest of the controllers, and afterwards on the Compute and Storage nodes.

     

    Installation completed.

    Health Test

    1. Click on the Health Test tab.
    2. Check the Select All checkbox.
    3. Click Run Tests.

        All tests should pass. Otherwise, check the log file for troubleshooting. You can now safely use the cloud. Click the Horizon dashboard link at the top of the page.

    Start the SR-IOV VM

    In Liberty, each VM can start with either a standard para-virtual or an SR-IOV network port. By default, a para-virtual, OVS-connected port is used. You need to request vnic_type direct explicitly in order to assign an SR-IOV NIC to the VM: first create an SR-IOV Neutron port, and then spawn the VM with the port attached. Since this feature is not available in the UI, we describe below how to start the VM with an SR-IOV network port using the CLI.

    Start SR-IOV test VM.

    We provide scripts to spawn the SR-IOV test VM.

      1. Login to cloud Controller Node and source openrc.
        [root@fuel ~]# ssh node-1
        root@node-1:~# source openrc
      2. Upload test SR-IOV VM.

        root@node-1:~# upload_sriov_cirros

      3. Login to OpenStack Horizon, go to Images section, and check to see if the image you downloaded (cirros-testvm-mellanox) is present.
      4. Start the SR-IOV test VM.

        root@node-1:~# start_sriov_vm

      5. Make sure that SR-IOV works by logging in to the VM's console and running the lspci command.

        $ lspci -k | grep mlx

        00:04.0 Class 0280: 15b3:1004 mlx4_core

     

    Start a custom image VM.

    To start your own images, use the following procedure.

    Note: The VM must have the Mellanox NIC driver installed in order to have a working network.

    Please ensure the VM image <image_name> has the Mellanox driver installed.
    We recommend using the most recent MLNX_OFED. See Mellanox OpenFabrics Enterprise Distribution for Linux (MLNX_OFED) for more information.

      1. Login to the cloud Controller Node and source openrc.
      2. Create SR-IOV enabled neutron port first.

        # port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
        where $net_id is the ID or name of the network you want to use (see the lookup example after this procedure)

      3. Then start the new VM and bind it to the port you just created.

        # nova boot --flavor <flavor_name> --image <image_name> --nic port-id=$port_id <vm_name>

        where $port_id is the ID of the SR-IOV port we just created
      4. To make sure that SR-IOV works, log in to the VM's console and run the lspci command.
        You should see the Mellanox VF in the output if SR-IOV works.

        # lspci | grep -i mellanox

        00:04.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]
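
      If you only know the network name, one way to look up $net_id follows the same grep/awk pattern as the port-create command above (private_net is just an example name):

        # net_id=`neutron net-list | grep private_net | awk '{ print $2 }'`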

     

    Prepare the Linux VM Image for CloudX

    In order to have network and RoCE support in the VM, MLNX_OFED (version 2.2-1 or later) should be installed in the VM image.

    MLNX_OFED can be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers.

    (For CentOS/RHEL images, you can use virt-manager to open the existing VM image and perform the MLNX_OFED installation; a sketch of the in-guest commands follows.)
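
    The in-guest installation mirrors the Deployment node procedure above. A minimal sketch, assuming the MLNX_OFED ISO has already been copied into /tmp inside the VM (the ISO file name is an example):

    # mkdir /mnt/ofed
    # mount -o loop /tmp/MLNX_OFED_LINUX-<version>-x86_64.iso /mnt/ofed
    # /mnt/ofed/mlnxofedinstall --all
    # reboot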

     

     

    Known Issues:

    Issue #1
    Description: Configuring more than 16 VFs is not supported by default.
    Workaround: To have more vNICs available, please contact Mellanox Support.

    Issue #2
    Description: Verify Networks can report an error if the public network has a running DHCP server.
    Workaround: If the range selected for the cloud above does not overlap with the DHCP pool, please ignore this message. If an overlap exists, please fix it.

    Issue #3
    Description: Verify Networks before installation does not support untagged VLAN verification or more than 60 PKeys (should not affect installation).
    Workaround: N/A

    Issue #4
    Description: Switch-based SM is not supported.
    Workaround: An external, server-based SM node must be configured prior to the installation.

    Issue #5
    Description: Launching more than 5 VMs at once may cause eswitch allocation problems (eswitchd issue).
    Workaround: Delete all VMs and restart eswitchd.

    Issue #6
    Description: If partitions.conf is changed on the SM machine after a failed "Verify Networks" run, bootstrap nodes are not updated with the latest PKeys for another network verification.
    Workaround: Discovered nodes should be restarted for another network verification after the PKeys change (should not affect installation).

    Issue #7
    Description: Migration of an SR-IOV based VM does not work with the Nova Migrate command.
    Workaround: Use snapshots.

    Issue #8
    Description: In some cases the speed of the Mellanox interface can be shown as 10Gb/s.
    Workaround: N/A

    Issue #9
    Description: The number of VLANs is limited to 128.
    Workaround: N/A