This post shows how to set up and configure Mirantis Fuel 6.1 (OpenStack Juno based on CentOS 6.5) to support Mellanox ConnectX-3/ConnectX-3 Pro adapters. This procedure enables SR-IOV mode for the VMs on the compute nodes and iSER transport mode for the storage nodes. It is based on HowTo Install Mirantis Fuel 6.0 OpenStack with Mellanox Adapters Support (Ethernet Network), adapted for Fuel 6.1.
For an InfiniBand 56Gb/s setup, please refer to HowTo Install Mirantis Fuel 6.1 OpenStack with Mellanox Adapters Support (InfiniBand Network).
- Mellanox CloudX for OpenStack page
- Mellanox CloudX, Mirantis Fuel Solution Guide
- Reference Architectures — Mirantis OpenStack v6.1 | Documentation
- Planning Guide — Mirantis OpenStack v6.1 | Documentation
- Mirantis Fuel ISO Download page
- MLNX-OS User Manual - (located at support.mellanox.com )
- HowTo Configure 56GbE Link on Mellanox Adapters and Switches
- HowTo upgrade MLNX-OS Software on Mellanox switches
- HowTo Configure iSER Block Storage for OpenStack Cloud with Mellanox ConnectX-3 Adapters
Before reading this post, make sure you are familiar with Mirantis Fuel 6.1 installation procedures. It is also recommended to watch the HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters video, which covers a very similar procedure.
Note: Besides the Fuel Master node, all nodes should be connected to all five networks.
Note: Wiring and configuration of the servers' IPMI interfaces and the switches' management interfaces are out of scope. You need to ensure that there is management access (SSH) to the Mellanox SX1710 switch to perform the configuration.
Setup BOM (Example)
|Component|Quantity|Model / Notes|
|---|---|---|
|Fuel Master server|1|DELL PowerEdge R620|
|Cloud Controller and Compute servers|6|DELL PowerEdge R620|
|Cloud Storage server|1|Supermicro X9DR3-F|
|Admin (PXE) and Public switch|1|1Gb switch with VLANs configured to support both networks|
|Ethernet switch|1|Mellanox SX1710 36-port switch configured in Ethernet mode|
16 x 1Gb CAT-6e for Admin (PXE) and Public networks
7 x 56GbE copper cables up to 2m (MC2207130-XXX)
Note: You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
Note: Make sure the Mellanox switch is set to Ethernet mode.
Storage server RAID Setup
- 2 SSD drives in bays 0-1, configured in RAID-1 (mirror), are used for the OS.
- 22 SSD drives in bays 3-24, configured in RAID-10, are used for the Cinder volume, which will be configured on the RAID drive.
Network Physical Setup
1. Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
It is recommended to record the MAC addresses of the Controller and Storage servers to make cloud installation easier (see the Controller Node section below in the Nodes tab).
Note: All cloud servers should be configured to run PXE boot over the Admin (PXE) network.
2. Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
3. Connect port #1 (eth0) of ConnectX-3 Pro to SX1710 Ethernet switch (Private, Management, Storage networks).
Note: The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 adapter family.
Rack Setup Example:
Compute and Controller Nodes:
4. Configure the required VLANs and enable flow control on the Ethernet switch ports.
All related VLANs should be enabled on the 40/56GbE switch (Private, Management, Storage networks).
On Mellanox switches, use the command flow below to enable VLANs (e.g. VLAN 1-100 on all ports).
Note: Refer to the MLNX-OS User Manual to get familiar with switch software (located at support.mellanox.com).
Note: Before starting use of the Mellanox switch, it is recommended to upgrade the switch to the latest MLNX-OS version.
switch > enable
switch # configure terminal
switch (config) # vlan 1-100
switch (config vlan 1-100) # exit
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/36 switchport mode hybrid
switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
Flow control is required when running iSER (iSCSI over RDMA, using RoCE on Ethernet). On Mellanox switches, run the following commands to enable flow control (on all ports in this example):
switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
To save the configuration (permanently), run:
switch (config) # configuration write
Note: Flow control (global pause) is normally enabled by default on the servers. If it is disabled, run:
# ethtool -A <interface-name> rx on tx on
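To check whether flow control is already enabled on a server interface, you can query the pause settings with ethtool before forcing them on. This is a sketch; `eth2` is a placeholder for your ConnectX-3 40/56GbE interface name:

```shell
# Query the current flow-control (pause) settings of the interface;
# replace 'eth2' with the ConnectX-3 interface name on your server.
ethtool -a eth2

# If RX or TX pause reports "off", enable both directions:
ethtool -A eth2 rx on tx on
```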
Networks Allocation (Example)
The example in this post is based on the network allocation defined in this table:
|Network|Subnet|Gateway|Comments|
|---|---|---|---|
|Admin (PXE)|10.20.0.0/24|N/A|Used by the Fuel Master to provision and manage cloud nodes. The network is enclosed within a 1Gb switch and has no routing outside. 10.20.0.0/24 is the default Fuel subnet and we use it with no changes.|
|Management|192.168.0.0/24|N/A|The cloud management network. Uses VLAN 2 on the SX1710 over the 40/56Gb interconnect. 192.168.0.0/24 is the default Fuel subnet and we use it with no changes.|
|Storage|192.168.1.0/24|N/A|Used to provide storage services. Uses VLAN 3 on the SX1710 over the 40/56Gb interconnect. 192.168.1.0/24 is the default Fuel subnet and we use it with no changes.|
|Public and Neutron L3|10.7.208.0/24|10.7.208.1|The Public network connects cloud nodes to the external network; Neutron L3 provides Floating IPs for tenant VMs. Both are represented by IP ranges within the same subnet, with routing to external networks.|
All cloud nodes will have a Public IP address. In addition, you should allocate two more Public IP addresses.
We do not use a virtual router in our deployment, but we still need to reserve a Public IP address for it. The Public network range is therefore the number of cloud nodes + 2. For our example with 7 cloud nodes, we need 9 IPs in the Public network range.
Note: Consider a larger range if you are planning to add more servers to the cloud later.
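The sizing rule above can be sketched as a quick shell computation (the numbers are taken from this example deployment):

```shell
# Public IP range sizing: one address per cloud node, plus two
# reserved addresses (one of them for the unused virtual router).
nodes=7       # 6 controller/compute servers + 1 storage server
reserved=2
echo $((nodes + reserved))   # prints 9
```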
In our build we will use the 10.7.208.53-10.7.208.76 IP range for both Public and Neutron L3.
IP allocation will be as follows:
The scheme below illustrates the IP allocation in our setup.
Install the Fuel Master from ISO Image:
Note: Do not start any nodes other than the Fuel Master until the Mellanox plugin is installed.
1. Boot Fuel Master Server from the ISO image as a virtual DVD (click here for the image).
2. After the reboot, boot from the local disk. The Fuel menu window will start.
3. Network setup:
a. Configure eth0 - PXE (Admin) network interface.
b. Configure eth1 – Public network interface.
4. Set the PXE Setup.
The PXE network is enclosed within the switch. Do not make any changes, proceed with defaults.
5. Set the Time Sync.
- Check NTP availability (e.g. 0.asia.pool.ntp.org) via Time Sync tab on the left.
- Configure NTP server entries suitable for your infrastructure.
- Press Check to verify settings.
6. Proceed with the installation.
- Navigate to Quit Setup and select Save and Quit.
Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
7. Access the Fuel Web UI at http://10.7.208.53:8000. Use "admin" for both the username and password.
For SSH access, use username root and password r00tme.
Install Mellanox Plugin
The Mellanox plugin configures support for Mellanox ConnectX-3 Pro network adapters, enabling high-performance SR-IOV networking for compute traffic and iSER (iSCSI Extensions for RDMA) block storage networking, which reduces CPU overhead, boosts throughput, reduces latency, and enables network traffic to bypass the software switch layer.
Follow the steps below to install the plugin. For the complete instructions, please refer to HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 6.1.
1. Download the Mellanox plugin rpm from here and store it on your Fuel Master server
Note: The Mellanox plugin replaces the current bootstrap image, the original image is backed up in /opt/old_bootstrap_image/
2. Install the plugin on the Fuel Master using the Fuel CLI.
3. Verify that the plugin was successfully installed.
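The install and verification steps can be sketched as follows. This assumes the Fuel CLI on the master node and the default Admin (PXE) address 10.20.0.2; the rpm file name is illustrative and depends on the plugin version you downloaded:

```shell
# Illustrative file name; use the rpm you actually downloaded
PLUGIN=mellanox-plugin.noarch.rpm

# Copy the plugin rpm to the Fuel Master
# (10.20.0.2 is the default Fuel Master address on the Admin/PXE network)
scp "$PLUGIN" root@10.20.0.2:/tmp/

# On the Fuel Master: install the plugin, then list installed plugins
fuel plugins --install "/tmp/$PLUGIN"
fuel plugins --list
```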
Creating a new OpenStack Environment
1. Open the Fuel UI in a web browser (for example, http://10.7.208.53:8000) and log into the Fuel environment using admin/admin as the username and password.
2. Configure the new environment wizard as follows:
- Name and Release
- Name: TEST
- Release: Juno on CentOS 6.5
- Neutron with VLAN segmentation
- Storage Backend
- Cinder: Default
- Glance : Default
- Additional Services
- Click Create button
Configuring the OpenStack Environment
Mellanox Neutron and iSER Storage Components
To work with SR-IOV mode, select Mellanox Openstack features and then Neutron SR-IOV plugin.
To work with iSER transport, select Mellanox Openstack features (if not selected before) and then iSER protocol for volumes (Cinder).
Note: By default, the number of virtual NICs is 16. Please contact Mellanox support if you want to use more.
Public Network Assignment
Servers Discovery by Fuel
This section assigns cloud roles to servers. Servers must first be discovered by Fuel, so make sure they are configured to PXE boot over the Admin (PXE) network. When done, reboot the servers and wait for them to be discovered. Discovered nodes are counted in the top right corner of the Fuel dashboard.
Now you can add the UNALLOCATED NODES to the setup.
Add the Controller and Storage nodes first, then the Compute nodes.
Add Controller Nodes
1. Click Add Node.
2. Identify the 3 controller nodes using the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. Assign these nodes the Controller role.
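As an illustration, the last four digits can be extracted from a MAC address with standard shell tools (the MAC address shown is hypothetical):

```shell
# Hypothetical MAC address of a node's Admin (PXE) interface
mac="00:25:90:ab:cd:ef"

# Drop the colons and keep the last four hex digits
last4=$(echo "$mac" | tr -d ':' | tail -c 5)
echo "$last4"    # prints cdef
```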
Add Storage Node
1. Click Add Node.
2. Identify your storage node using the last 4 hexadecimal digits of the MAC address of the interface connected to the Admin (PXE) network. In our example this is the only Supermicro server, so identification by vendor is easy. Assign this node the Storage - Cinder role.
Add Compute Nodes
1. Click Add Node.
2. Select all the nodes that are left and assign them the Compute role.
3. Click Apply Changes.
In this step, each network must be mapped to a physical interface for each node. You can choose and configure multiple nodes in parallel.
If there are hardware differences between the selected nodes (such as the number of network ports), bulk configuration is not allowed; if you attempt it, the Configure Interfaces button will show an error icon.
The example below allows configuring 6 nodes in parallel. The 7th node (Supermicro storage node) will be configured separately.
In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.
The Storage, Private and Management networks should run on the ConnectX-3 adapters 40/56GbE port.
1. Click Back To Node List and perform network configuration for Storage Node.
Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 Pro adapter family. There is no need to change the defaults for the Controller and Compute nodes unless changes are required. For the Storage node, it is recommended to allocate only the high-performing RAID-10 array to Cinder storage; the small OS disk should be allocated to the Base System.
5. Click Apply.
Note: In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.
Public IP range: Consists of nodes IP range and Floating IP range. In our case, the full range is 10.7.208.54-10.7.208.76.
IP range: Configure it to be part of your Public IP range, in this example, we select 10.7.208.54-10.7.208.62.
In this example, we select VLAN 3 for the storage network. The CIDR is left untouched.
In this example, we select VLAN 2 for the management network. The CIDR is left untouched.
Neutron L2 Configuration
In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above).
The base MAC is left untouched.
Neutron L3 Configuration:
Internal Network: Leave CIDR and Gateway with no changes.
Name servers: Leave DNS servers with no changes.
Click Save Settings at the bottom of page
Click Verify Networks.
You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log file for troubleshooting.
Note: If your public network runs a DHCP server, you may see a verification failure. If the range selected for the cloud above does not overlap with the DHCP pool, you can ignore this message; if an overlap exists, fix it.
Click the Deploy Changes button and follow the installation progress on the Nodes tab and in the logs.
1. OS installation begins.
2. OS installation is finished.
3. OpenStack installation on the first controller.
4. OpenStack installation on the remaining controllers.
5. OpenStack installation on the Compute and Storage nodes.
6. Installation completed.
1. Navigate to the Health Check tab.
2. Check the Select All checkbox.
3. Un-check Platform services functional tests (image with special packages is required).
4. Un-check Launch instance, create snapshot, launch instance from snapshot in Functional tests group.
5. Click Run Tests.
All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud.
Click the dashboard link at the top of the page.
Usernames and Passwords:
- Fuel server Dashboard user / password: admin / admin
- Fuel server SSH user / password: root / r00tme
- TestVM SSH user / password: cirros / cubswin:)
- To get controller node CLI permissions run: # source /root/openrc
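For example, after sourcing the credentials file on a controller node, the standard OpenStack clients installed there by Fuel can be used directly (a sketch; `nova service-list` is one example command):

```shell
# Load the admin credentials created by Fuel on the controller node
source /root/openrc

# Any OpenStack CLI command now works, e.g. list compute services
nova service-list
```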
Prepare Linux VM Image for CloudX:
In order to have network and RoCE support in the VM, MLNX_OFED (2.2-1 or later) should be installed in the VM image.
MLNX_OFED can be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
(For CentOS/RHEL images, you can use virt-manager to open an existing VM image and perform the MLNX_OFED installation.)
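A sketch of the in-VM installation, assuming the MLNX_OFED ISO has been attached to the VM as a CD-ROM (the device name and mount point are illustrative):

```shell
# Mount the MLNX_OFED ISO attached to the VM (device name may differ)
mount -o ro /dev/cdrom /mnt

# Run the installer shipped on the ISO
cd /mnt && ./mlnxofedinstall

# Restart the driver stack so the new drivers are loaded
/etc/init.d/openibd restart
```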
Known Issues
- The default number of supported virtual functions (VFs), 16, is not sufficient for some deployments. To have more vNICs available, contact Mellanox Support.
- Snapshot creation of a running instance fails. To work around this issue, shut down the instance before taking the snapshot.
- Third-party adapters based on the Mellanox chipset may not have SR-IOV enabled by default. Contact the device manufacturer for configuration instructions and the required firmware.
- Bonding is not supported for SR-IOV over the Mellanox ConnectX-3 network adapter family.
- Fuel does not start the deployment and reports that the Public IP range is too small. Increase the Public IP range in the network settings.