HowTo Install Mirantis Fuel 6.0 OpenStack with Mellanox Adapters Support (Ethernet Network)

Version 6
    This post shows how to set up and configure Mirantis Fuel 6.0 (OpenStack Juno, based on CentOS 6.5) to support Mellanox ConnectX-3 adapters, running in SR-IOV mode for the VMs on the compute nodes and in iSER (iSCSI over RDMA) transport mode on the storage nodes. This post is based on HowTo Install Mirantis Fuel 5.1/5.1.1 OpenStack with Mellanox Adapters Support (Ethernet Network), with minor changes.

    Before reading this post, make sure you are familiar with Mirantis Fuel 6.0 installation procedures.

    It is also recommended to watch the video HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters.


    Setup Diagram:

    Note: Besides the Fuel Master node, all nodes should be connected to all five networks.


    Note: Wiring and configuration of the servers' IPMI interfaces and the switch management interfaces are out of scope for this post.

    Ensure that you have management access (SSH) to the Mellanox SX1036 Ethernet switch in order to perform the switch configuration.





    Setup BOM:

    • Fuel Master server (1x): DELL PowerEdge R620
      • CPU: 2 x E5-2650 @ 2.00GHz
      • MEM: 128 GB
      • HD: 2 x 900GB SAS 10k in RAID-1

    • Cloud Controller and Compute servers (3 x Controllers, 3 x Computes): DELL PowerEdge R620
      • CPU: 2 x E5-2650 @ 2.00GHz
      • MEM: 128 GB
      • HD: 2 x 900GB SAS 10k in RAID-1
      • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

    • Cloud Storage server (1x): Supermicro X9DR3-F
      • CPU: 2 x E5-2650 @ 2.00GHz
      • MEM: 128 GB
      • HD:
        • 2 x 900GB SAS, configured in RAID-1
        • 22 x Samsung 830-series SSD drives for the Cinder volume
      • RAID Ctrl: LSI Logic MegaRAID SAS 2208 with battery
      • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

    • Admin (PXE) and Public switch (1x): 1Gb switch with VLANs configured to support both networks

    • Cloud Ethernet switch (1x): Mellanox SX1036 40/56Gb 36-port Ethernet switch

    • Cables:
      • 16 x 1Gb CAT-6e for the Admin (PXE) and Public networks
      • 7 x 56GbE copper cables up to 2m (MC2207130-XXX)
    Note: You can use Mellanox ConnectX-3 PRO EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
    Note: SR-IOV should be enabled in the BIOS for all cloud compute servers.
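    After deployment, you can confirm on a compute node that SR-IOV virtual functions are actually exposed: each enabled VF appears as a separate "Virtual Function" PCI device next to the physical function. This quick check is our addition, not part of the original flow:

    # lspci | grep -i mellanox

    One line is the ConnectX-3 physical function; the remaining lines (one per VF) indicate that SR-IOV was enabled correctly in the BIOS and firmware.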

    Server-Role Mapping:

    To make smoother progress, you can map roles to physical servers in advance.
    It is useful to write down the MAC address of each server's Admin (PXE) interface; the last 4 hexadecimal digits will be used as a unique identifier for the server.
    For example, this is the map created for this setup:

    Server name    Role           MAC address
    Server-1       Fuel Master    BC:30:5B:F3:26:68
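    The identification rule above is simple, but worth pinning down. A minimal sketch (the function name is ours, not part of Fuel):

    ```python
    def node_id_from_mac(mac: str) -> str:
        """Return the last 4 hex digits of a MAC address, as shown in the
        Fuel node list and used here to identify a physical server."""
        hex_digits = mac.replace(":", "").replace("-", "").upper()
        return hex_digits[-4:]

    # The Fuel Master MAC from the mapping table above:
    print(node_id_from_mac("BC:30:5B:F3:26:68"))  # -> 2668
    ```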

    Storage server RAID Setup:

    • 2 drives in bays 0-1, configured in RAID-1 (mirror): the OS will be installed on this array.
    • 22 SSD drives in bays 3-24, configured in RAID-10: the Cinder volume will be configured on this RAID drive.

    Network Physical Setup:

    1. Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
      It is recommended to write down the MAC addresses of the Controller and Storage servers to make the cloud installation easier (see the Controller Node section below in the Nodes tab).

      Note: all cloud servers should be configured to run PXE boot over the Admin (PXE) network.

    2. Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
    3. Connect port #1 (eth2) of ConnectX-3 Pro to SX1036 Ethernet switch (Private, Management, Storage networks).
      Note: The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
      Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 adapter family.
    Rack Setup Example:

    Fuel Node:

    Compute and Controller Nodes:

       4. Configure the required VLANs and enable flow control on the Ethernet switch ports.

    All related VLANs should be enabled on the 40/56GbE switch (Private, Management, and Storage networks).
    On Mellanox switches, use the command flow below to enable VLANs (e.g. VLANs 1-100 on all ports).
    Note: Refer to the MLNX-OS User Manual to get familiar with the switch software.
    Note: Before starting to use the Mellanox switch, it is recommended to upgrade the switch to the latest MLNX-OS version.

    switch > enable
    switch # configure terminal
    switch (config) # vlan 1-100
    switch (config vlan 1-100) # exit
    switch (config) # interface ethernet 1/1 switchport mode hybrid
    switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
    switch (config) # interface ethernet 1/2 switchport mode hybrid
    switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
    ...
    switch (config) # interface ethernet 1/36 switchport mode hybrid
    switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all

    Flow control is required when running iSER (RDMA over Converged Ethernet, RoCE). On Mellanox switches, run the following commands to enable flow control (on all ports in this example):

    switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force

    switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force

    To save the configuration (permanently), run:
    switch (config) # configuration write
    Note: Flow control (global pause) is normally enabled by default on the servers. If it is disabled, run:
    # ethtool -A <interface-name> rx on tx on
       5. If you are running 56GbE, set the link speed between the servers and the switch accordingly.
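    On MLNX-OS, the port speed can be forced per interface. A hedged sketch (the port number is an example, and availability of the 56000 speed value depends on your switch model and cables, so verify against the MLNX-OS User Manual):

    switch (config) # interface ethernet 1/1
    switch (config interface ethernet 1/1) # speed 56000
    switch (config interface ethernet 1/1) # exit

    Repeat for every switch port connected to a ConnectX-3 Pro 56GbE port.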

    Networks Allocation (Example):

    The example in this post is based on the network allocation defined in this table:

    Admin (PXE)
    • The network is used to provision and manage the cloud nodes via the Fuel Master.
    • The network is enclosed within a 1Gb switch and has no routing outside.
    • This is the default Fuel network.

    Management
    • This is the Cloud Management network.
    • The network uses VLAN 2 on the SX1036 over the 40/56Gb interconnect.
    • This is the default Fuel network.

    Storage
    • This network is used to provide storage services.
    • The network uses VLAN 3 on the SX1036 over the 40/56Gb interconnect.
    • This is the default Fuel network.

    Public and Neutron L3
    • The Public network is used to connect the cloud nodes to an external network.
    • Neutron L3 is used to provide Floating IPs for tenant VMs.
    • Both networks are represented by IP ranges within the same subnet, with routing to external networks.
    • Each cloud node gets a Public IP, and the HA functionality requires one additional Virtual IP; for our example with 7 cloud nodes we therefore need 8 IPs in the Public network range.
    • Consider a larger range if you are planning to add more servers to the cloud later.

    In our build we will use IP range >> for both Public and Neutron L3. The IP allocation will be as follows:

    • Fuel Master IP:
    • Public Range: >> (used for physical servers)
    • Neutron L3 Range: >> (used for Floating IP pool)
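    The sizing rule above (one Public IP per cloud node, plus one Virtual IP for HA) can be sketched as follows. This is a small illustration of the arithmetic, not part of Fuel:

    ```python
    def required_public_ips(num_cloud_nodes: int, ha: bool = True) -> int:
        """Each cloud node needs one Public IP; an HA deployment needs
        one extra Virtual IP shared by the controllers."""
        return num_cloud_nodes + (1 if ha else 0)

    # 7 cloud nodes (3 controllers + 3 computes + 1 storage) in HA mode:
    print(required_public_ips(7))  # -> 8
    ```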

    Install the Fuel Master via ISO Image:

    1. Boot the Fuel Master server from the ISO as a virtual CD.
    2. On the very first installation screen, which says "Welcome to Fuel Installer", press the <TAB> key, change the kernel option showmenu=no to showmenu=yes, and hit Enter. Fuel will then be installed and the server will reboot.
    3. After the reboot, boot from the local disk. The Fuel menu window will start.
    4. Network setup:
      1. Configure eth0 - PXE (Admin) network interface.
        Ensure the default Gateway entry is empty for the interface – the network is enclosed within the switch and has no routing outside.
        Select Apply.
      2. Configure eth1 – Public network interface.
        The interface is routable to LAN/internet and will be used to access the server.
        Configure static IP address, netmask and default gateway on the public network interface.
        Select Apply.
    5. PXE Setup
      The PXE network is enclosed within the switch.
      Do not make changes – proceed with defaults.
      Press the Check button to ensure no errors are found.
    6. Time Sync
      Open the Time Sync tab on the left and check NTP availability.
      Configure NTP server entries suitable for your infrastructure.
      Press Check to verify the settings.
    7. Navigate to Quit Setup and select Save and Quit to proceed with the installation.
    8. Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
      Access the Fuel Web UI and use "admin" for both the login and password.

    OpenStack Environment:


    Log into Fuel:
    1. Open the Fuel Web UI in a web browser.
    2. Log into Fuel using "admin" for both login and password.
    3. Complete the Welcome to Mirantis OpenStack form and click the Start Using Fuel button.
    Creating a new OpenStack environment:
    1. Create a new environment in the Fuel dashboard. A configuration wizard will start.
    2. Configure the new environment wizard as follows:
      • Name and Release
        • Name: TEST
        • Release: Juno on CentOS 6.5 (2014.2-6.0)
      • Deployment Mode
        • Multi-node with HA
      • Compute
        • KVM
      • Network
        • Neutron with VLAN segmentation (default)
      • Storage Backend
        • Cinder: Default
        • Glance: Default
      • Additional Services
        • None
      • Finish
        • Click Create button
    3. When done, a new TEST environment will be created. Click on it and proceed with environment configuration.

    Configuring the OpenStack Environment:

    Settings Tab
    Mellanox Neutron component
    To work with SR-IOV mode, select Install Mellanox drivers and Mellanox SR-IOV plugin.
    Note: The default number of supported virtual functions (VF) is 16. If you want to have more vNICs available, please contact Mellanox Support.
    Configure Storage
    1. To use high-performance block storage, check the ISER protocol for volumes (Cinder) option in the Storage section.
    2. Save the settings.
    Public Network Assignment
    1. Make sure Assign public network to all nodes is checked.
    Nodes Tab
    Servers Discovery by Fuel
    This section will assign Cloud roles to servers.
    First of all, servers should be discovered by Fuel. For this to happen, make sure the servers are configured for PXE boot over Admin (PXE) network.
    When done, reboot the servers and wait for them to be discovered.
    Discovered nodes are counted in the top-right corner of the Fuel dashboard.
    You may now add the UNALLOCATED NODES to the setup:
    first the Controller and Storage nodes, and then the Compute nodes.
    Add Controller Nodes
    1. Click Add Node.
    2. Identify the 3 controller nodes. Use the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network.
      Assign these nodes the Controller role.
    3. Click the Apply Changes button.
    Add Storage Node
    1. Click Add Node.
    2. Identify your storage node. Use the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network.
      In our example this is the only Supermicro server, so identification by vendor is easy.
      Select this node to be a Storage - Cinder node.
    3. Click the Apply Changes button.
    Add Compute Nodes
    1. Click Add Node.
    2. Select all the nodes that are left and assign them the Compute role.
    3. Click Apply Changes.
    Configure Interfaces
    In this step, we will map each network to a physical interface for each node.
    You can choose and configure multiple nodes in parallel.
    Fuel will not let you proceed with bulk configuration if hardware differences between the selected nodes (such as the number of network ports) are detected.
    In this case the Configure Interfaces button will show an error icon.
    The example below configures 6 nodes in parallel; the 7th node (the Supermicro storage node) will be configured separately.
    1. In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.
    2. The Storage, Private, and Management networks should run on the ConnectX-3 adapter's 40/56GbE port.
    3. Click Back To Node List and perform the network configuration for the Storage node.
    Note: Port bonding is not supported when using SR-IOV over ConnectX-3 Pro adapter family.
    Configure Disks
    There is no need to change the defaults for Controller and Compute nodes unless you are sure changes are required.
    For the Storage node, it is recommended to allocate only the high-performing RAID-10 array to Cinder storage; the small mirrored drive should be allocated to the Base System.
    1. Select Storage node
    2. Press the Configure Disks button.
    3. Click the sda disk bar, set the Cinder allowed space to 0 MB, and make the Base System occupy the entire drive by pressing USE ALL ALLOWED SPACE.
    4. Press Apply.
    Networks Tab
    Note: In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.

    In this example, we select VLAN 2 for the Management network. The CIDR is left untouched.
    In this example, we select VLAN 3 for the storage network. The CIDR is left untouched.
    Neutron L2 Configuration
    In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above).
    The base MAC is left untouched.
    Neutron L3 Configuration:
    Floating IP range: Configure it to be part of your Public network range (see the Networks Allocation section above).
    Internal Network: Leave CIDR and Gateway with no changes.
    Name servers: Leave DNS servers with no changes.

    Save Configuration
    Click Save Settings at the bottom of the page.
    Verify Networks
    Click Verify Networks.
    You should see the message "Verification succeeded. Your network is configured correctly." Otherwise, check the log files for troubleshooting.


    Click the Deploy Changes button and follow the installation progress on the Nodes tab and in the logs.

    Health Test

    1. Click the Health Test tab.
    2. Check the Select All checkbox.
    3. Un-check Platform services functional tests (image with special packages is required).

    4. Un-check Launch instance, create snapshot, launch instance from snapshot in Functional tests group.

      (There is a known issue where snapshot creation of a running SR-IOV instance fails. See Known Issues below for more details.)
    5. Click Run Tests.
    All tests should pass. Otherwise, check the log file for troubleshooting.
    You can now safely use the cloud.
    Click the dashboard link at the top of the page.

    Usernames and Passwords:
    • Fuel server Dashboard user / password: admin / admin
    • Fuel server SSH user / password: root / r00tme
    • TestVM SSH user / password: cirros / cubswin:)
    • To get controller node CLI permissions run:  # source /root/openrc



    Prepare Linux VM Image for CloudX:

    To have network and RoCE support in the VM, MLNX_OFED (2.2-1 or later) should be installed in the VM image.

    MLNX_OFED can be downloaded from the Mellanox website.

    (For CentOS/RHEL guests, you can use virt-manager to open an existing VM image and perform the MLNX_OFED installation.)
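    A minimal sketch of the installation inside the VM, assuming the MLNX_OFED ISO has already been copied into the guest (the ISO filename is a placeholder for whatever version you download):

    # mount -o ro,loop MLNX_OFED_LINUX-<version>-rhel6.5-x86_64.iso /mnt
    # /mnt/mlnxofedinstall
    # /etc/init.d/openibd restart

    The mlnxofedinstall script installs the drivers and user-space libraries; restarting the openibd service loads the new kernel modules.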




    Known Issues:

    You can also find the limitations of the Mellanox SR-IOV plug-in in the plug-in documentation.

    Issue #1: The default number of supported virtual functions (VFs), 16, is not sufficient.
    Workaround: To have more vNICs available, contact Mellanox Support.

    Issue #2: Hypervisor crash on instance (VM) termination.
    Workaround: Please contact Mellanox Support.

    Issue #3: 56Gb links are discovered by Fuel as 10Gb.
    Workaround: No action is required. The actual port speed is 56Gb; after deployment the ports are re-discovered as 56Gb.

    Issue #4: Snapshot creation of a running instance fails (Launchpad bug LP1398986).
    Workaround: Shut down the instance before taking a snapshot.

    Issue #5: 3rd-party adapters based on the Mellanox chipset may not have SR-IOV enabled by default.
    Workaround: Apply to the device manufacturer for configuration instructions and the required firmware.
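    The snapshot workaround above can be applied from the controller CLI (after running source /root/openrc). The instance and snapshot names below are placeholders:

    # nova stop <instance-name>
    # nova image-create <instance-name> <snapshot-name> --poll
    # nova start <instance-name>

    nova image-create uploads the snapshot to Glance; --poll simply blocks until the upload completes before the instance is started again.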