Mellanox CloudX Reference Deployment Guide with RedHat Openstack Platform 9 over Mellanox Ethernet Solution (1 rack, VXLAN + iSER)

Version 31

    Before reading this post, make sure you are familiar with the Red Hat OpenStack Platform 9 installation procedures.

     

    Before starting to use the Mellanox switches, we recommend that you upgrade the switches to the latest MLNX-OS version.

    Purpose

    This document helps you evaluate the RedHat OpenStack Platform 9. It focuses on the benefits of using Mellanox components to increase the performance and scalability of the solution.

    Definitions/Abbreviation

    Table 1: Abbreviations

    Definitions/Abbreviation | Description
    RDMA                     | A communications protocol that provides transmission of data from the memory of one computer to the memory of another without involving the CPU
    iSER                     | iSCSI over RDMA

     

    Related links

    Table 2: Related Documentation

     

    Introduction

    Solution Overview

    OpenStack continues to be one of the most attractive platforms in the industry. This document describes the Mellanox CloudX reference deployment for Red Hat Enterprise Linux OpenStack Platform version 9.

    Red Hat Enterprise Linux OpenStack Platform is an innovative, cost-effective cloud management solution. It provides the foundation to build a private or public cloud on top of Red Hat Enterprise Linux and offers a scalable, highly available platform for the management of cloud-enabled workloads.

    Red Hat Enterprise Linux OpenStack Platform version 9 is based on the OpenStack Mitaka release and includes other features, such as the Red Hat Enterprise Linux OpenStack Platform installer and support for high availability (HA).

    High-performance Cinder storage is boosted with iSER (iSCSI over RDMA) to obtain maximum performance from the disk subsystem and to reduce CPU overhead during data transfers.

    The intended audience for this document is IT professionals, technical architects, sales engineers, and consultants. Readers are expected to have a basic knowledge of Red Hat Enterprise Linux and OpenStack.

     

    Mellanox components overview

    In this solution, Mellanox provides both hardware and software components.

      • Mellanox ConnectX-4 Lx brings VXLAN offload, resulting in improved performance.
      • The Mellanox plugin simplifies the deployment process and brings the most recent stable version of the NIC driver and firmware.
      • Mellanox SN2410 switches provide a reliable, high-performance Ethernet solution.

     

    Main highlights of this example

    • Mellanox SN2410 switches (48 x 25Gb ports + 8 x 100Gb ports)
    • Mellanox ConnectX-4 Lx dual-port NICs for 25Gb uplinks from the hosts
    • Tenant and storage networks are placed on separate physical networks to improve security and performance
    • 3 cloud controller nodes to provide HA mode
    • iSER storage brings maximum performance to the disk subsystem
    • All cloud nodes are connected to the Local, External (Public), Tenant, and Storage networks
    • The Undercloud runs in a RedHat VM on the Deployment server

     

     

    Setup diagram

    The following illustration shows the example configuration.

    Figure 1: Setup diagram

     

    Solution configuration

    Setup hardware requirements (example)

    Table 3: Setup hardware requirements

    Component: Deployment server
    Quantity: 1 x
    Requirements: Select a non-high-performing server to run the RedHat Director VM:
      • CPU: Intel E5-26xx or later model
      • HD: 100 GB or larger
      • RAM: 32 GB or more
      • NICs: 2 x 1Gb
      Note: Intel Virtualization technologies must be enabled in BIOS.

    Component: Cloud Controller and Compute servers (3 x Controllers, 3 x Computes)
    Quantity: 6 x
    Requirements: Strong servers to run the Cloud control plane and the Tenant VM workloads that meet the following:
      • CPU: Intel E5-26xx or later model
      • HD: 450 GB or larger
      • RAM: 128 GB or more
      • NICs: 2 x 1Gb and a Mellanox ConnectX-4 Lx dual-port (MCX4121A-ACAT) NIC
      Note: Intel Virtualization technologies must be enabled in BIOS.

    Component: Cloud Storage server (Cinder Volume)
    Quantity: 1 x
    Requirements: A strong server with high IO performance to act as the Cinder backend, featuring:
      • CPU: Intel E5-26xx or later model
      • HD (OS): 100 GB or larger
      • HD (Cinder Volume): SSD drives configured in RAID-10 for best performance
      • RAM: 64 GB or more
      • NICs: 2 x 1Gb and a Mellanox ConnectX-4 Lx dual-port (MCX4121A-ACAT) NIC

    Component: Private, Storage, and Management networks switch
    Quantity: 2 x
    Requirements: Mellanox SN2410 25/100Gb/s switch

    Component: Local network switch
    Quantity: 1 x
    Requirements: 1Gb L2 switch

    Component: External network switch
    Quantity: 1 x
    Requirements: 1Gb L2 switch

    Component: Cables
    Quantity: 14 x
    Requirements: 1Gb CAT-6e cables for the Local and External networks; 25Gb SFP28 copper cables up to 2m (MCP2M00-AXXX)

    Note: This solution should also work with the Mellanox ConnectX-4 VPI dual-port 100Gb/s NIC (MCX456A-ECAT) and SN2700 or SN2100 switches.

     

    Server configuration

    There are several prerequisites for the cloud hardware to work.

    Enter the BIOS of each node and make sure that:

    • CPU virtualization is enabled on all nodes, including the Deployment server (Intel VT or AMD-V, depending on your CPU type); a quick check is shown after this list
    • All nodes except the Deployment server are configured to boot from the NIC connected to the Local network
    • The path between the Local and IPMI networks is routable. This is required to allow the Undercloud to access the cloud servers' IPMI interfaces.
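
    As a quick sanity check (a generic Linux command, not specific to this guide), you can verify from a running OS that the virtualization extensions are exposed once they are enabled in BIOS:

      # A non-zero count means Intel VT (vmx) or AMD-V (svm) is visible to the OS
      $ egrep -c '(vmx|svm)' /proc/cpuinfo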

     

    Physical network setup

    1. Connect all nodes to the Local 1GbE switch, preferably through the on-board eth0 interface.
    2. Connect all nodes to the External (Public) 1GbE switch, preferably through the on-board eth1 interface.
    3. Connect Port #1 of the ConnectX-4 Lx of all nodes (except the deployment server) to the first Mellanox SN2410 switch (Tenant network).
    4. Connect Port #2 of the ConnectX-4 Lx of all nodes (except the deployment server) to the second Mellanox SN2410 switch (Storage network).
      Warning: The interface names (eth0, eth1, p2p1, etc.) must be identical on all cloud nodes.
      We recommend using servers from the same vendor, because interface names can vary between servers from different vendors; a quick way to list them is shown below.
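
      To compare interface names across nodes, you can list the network devices on each server (standard Linux commands, shown here only as a convenience):

      # List all network interfaces known to the kernel on this node
      $ ip link show
      # Or, to print only the interface names
      $ ls /sys/class/net/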

     

    Rack Setup Example

    Figure 2: Rack setup wiring example

     

    Deployment server

    Figure 3: Deployment server wiring example

     

    Compute and Controller nodes

    Figure 4: Compute / Controller node wiring example

     

    Storage node

    The wiring is similar to that of the Compute and Controller nodes.

    Figure 5: Storage node wiring example

     

    Network Switch Configuration

    Note: Refer to the MLNX-OS User Manual to become familiar with the switch software (located at support.mellanox.com).
    Note: Before starting to use the Mellanox switch, we recommend that you upgrade the switch to the latest MLNX-OS version.

    As this example uses VXLAN networking, no VLANs need to be configured.

    To get maximum performance and stability, perform the following steps:

    1. Adjusting the MTU on switch ports
      By default, the MTU on all ports is 1500. When such a packet is encapsulated by VXLAN, additional headers are added and the final size is 1556 bytes.
      Such an encapsulated packet cannot fit the default MTU size. As a result, the packet is split into two packets, and the second packet carries only a minimal payload.
      Increasing the MTU on the switch ports resolves this problem and slightly increases performance.
      Log in to the switch over SSH using its IP address:
      # ssh admin@switch_ip
      Enable configuration mode
      switch > enable
      switch # configure terminal
      switch (config) #
      Update MTU to the value of 1556
      switch (config) # interface ethernet 1/1-1/48 mtu 1556 force
    2. Enabling Flow Control
      Flow control is required when running iSER (RDMA over Converged Ethernet, RoCE).

      On Mellanox switches, run the following command to enable flow control on the switches (on all ports in this example):

      switch (config) # interface ethernet 1/1-1/48 flowcontrol receive on force
      switch (config) # interface ethernet 1/1-1/48 flowcontrol send on force

      To save settings, run:
      switch (config) # write memory

     

    Networks Allocation (Example)

    The example in this post is based on the network allocation defined in this table:

    Table 4: Network allocation example

    Network           | Subnet/Mask   | Gateway    | Notes
    Local network     | 172.21.1.0/24 | N/A        | The network is used to provision and manage Cloud nodes by the Undercloud. The network is enclosed within a 1Gb switch and should have routing to the IPMI network.
    Tenant            | 172.17.0.0/24 | N/A        | The network is used to service all VXLAN networks provisioned by OpenStack.
    Storage           | 172.18.0.0/24 | N/A        | This network is used to provide storage services.
    External (Public) | 10.7.208.0/24 | 10.7.208.1 | This network is used to connect Cloud nodes to an external network. Neutron L3 is used to provide Floating IPs for tenant VMs. Both the Public and Floating ranges are IP ranges within the same subnet, with routing to external networks.
    All Cloud nodes will have a Public IP address. In addition, you must allocate two more Public IP addresses:
    • One IP is required for HA functionality
    • The virtual router requires an additional Public IP address
    We do not use a virtual router in our deployment, but we still need to reserve a Public IP address for it. So the Public network range is the number of cloud nodes + 2. For our example with seven Cloud nodes, we need nine IPs in the Public network range.

    Note: Consider a larger range if you plan to add more servers to the cloud later.

    In our build we use the 10.7.208.125 >> 10.7.208.148 IP range for both the Public and Floating ranges.

    IP allocation will be as follows:

    • Deployment server: 10.7.208.125
    • RedHat Director VM: 10.7.208.126
    • Public Range: 10.7.208.127 >> 10.7.208.135 (7 used for physical servers, one reserved for HA and one reserved for virtual router)
    • Floating Range: 10.7.208.136 >> 10.7.208.148 (used for the Floating IP pool)

     

    The scheme below illustrates our setup Public IP allocation.

    Figure 6: Public range IP allocation

     

    Environment preparation

    Install the Deployment server

    In our setup we installed the CentOS release 7.3 64-bit distribution, using the CentOS-7-x86_64-Minimal-1511.iso image in the minimal configuration. We will install all missing packages later.

    Two 1Gb interfaces are connected to the Local and External networks (an ifcfg sketch of this static configuration is shown after the list):

    • em1 (first interface of the integrated NIC) is connected to Local and configured statically. The configuration will not actually be used, but it will save time on bridge creation later.
        • IP: 172.21.1.1
        • Netmask: 255.255.255.0
        • Gateway: N/A
    • em2 (second interface of integrated NIC) is connected to External and configured statically:
        • IP: 10.7.208.125
        • Netmask: 255.255.255.0
        • Gateway: 10.7.208.1
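
    If you prefer to set these static addresses by editing the ifcfg files directly, the configuration above might look like the following sketch (assuming the on-board interfaces are named em1 and em2, as in this example; adjust the device names to your hardware):

      # /etc/sysconfig/network-scripts/ifcfg-em1  (Local network, no gateway)
      DEVICE=em1
      ONBOOT=yes
      BOOTPROTO=none
      IPADDR=172.21.1.1
      NETMASK=255.255.255.0

      # /etc/sysconfig/network-scripts/ifcfg-em2  (External network)
      DEVICE=em2
      ONBOOT=yes
      BOOTPROTO=none
      IPADDR=10.7.208.125
      NETMASK=255.255.255.0
      GATEWAY=10.7.208.1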

     

    Configure the Deployment server to run the RedHat Director VM

    Log in to the Deployment server via SSH or locally and perform the actions listed below:

    1. Disable the Network Manager.
      # sudo systemctl stop NetworkManager.service
      # sudo systemctl disable NetworkManager.service
    2. Install packages required for virtualization.
      # sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install virt-manager
    3. Install packages required for x-server.
      # sudo yum install xorg-x11-xauth xorg-x11-server-utils xclock
    4. Reboot the Deployment server.

     

    Create and configure the new VM for RedHat Director

    1. Start the virt-manager.
      # virt-manager
    2. Create a new RedHat VM using the virt-manager wizard.
    3. During the creation process, provide the VM with at least 8 cores, 16GB of RAM and a 40GB disk.
    4. Mount the RedHat installation disk to the VM's virtual CD-ROM device.
    5. Configure the network so that the RedHat VM has two NICs connected to the Local and External networks.
      1. Use virt-manager to create the bridges:
        1. br-em1, the Local network bridge, used to connect the RedHat VM's eth0 to the Local network.
        2. br-em2, the External network bridge, used to connect the RedHat VM's eth1 to the External network.
      2. Connect the RedHat VM eth0 interface to br-em1.
      3. Add an eth1 network interface to the RedHat VM and connect it to br-em2.

        Figure 7: Bridges configuration schema

        Note: You can define any other names for the bridges. In this example, the names were chosen to match the names of the physical interfaces connected on the Deployment server.
    6. Save the settings. (A command-line alternative to the wizard is sketched after this list.)
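
    If you prefer the command line to the virt-manager wizard, the same VM can also be created with virt-install. The sketch below assumes the br-em1 and br-em2 bridges already exist and that the RHEL installation ISO was saved to /tmp (an example path); adjust the VM name, paths and sizes to your environment:

      # Create a VM with 8 vCPUs, 16GB RAM and a 40GB disk, attached to both bridges
      # (eth0 -> br-em1/Local, eth1 -> br-em2/External)
      $ sudo virt-install \
          --name rhel-director \
          --vcpus 8 \
          --memory 16384 \
          --disk size=40 \
          --cdrom /tmp/rhel-server-7.3-x86_64-dvd.iso \
          --network bridge=br-em1 \
          --network bridge=br-em2 \
          --os-variant rhel7 \
          --graphics vnc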

    RedHat VM installation

    1. Download the RHEL 7.3 installation ISO.
    2. Install the OS using the installation wizard in the MINIMAL configuration.
    3. During OS installation, assign the External (Public) and Local IPs to the VM interfaces:
      • eth0 assigned with Local IP 172.21.1.2 (mask 255.255.255.0)
      • eth1 assigned with External IP 10.7.208.126 (mask 255.255.255.0)

     

    Prepare VM for installing Undercloud

    1. Creating a stack user
      # useradd stack
      # passwd stack
      # echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
      # chmod 0440 /etc/sudoers.d/stack
      # su - stack
    2. Adding a workaround for Director node boot failure after undercloud installation:
      1. Creating new home directory
        $ sudo mkdir /deploy_directory
        $ sudo chown stack:stack /deploy_directory
      2. Changing the home user directory from /home/stack to new path:
        $ sudo sed -i 's/\/home\/stack/\/deploy_directory/' /etc/passwd
      3. Copy bash scripts from home.
        $ cp /home/stack/.bash* /deploy_directory/
    3. Setting hostname for the system
      $ sudo hostnamectl set-hostname <hostname.hostdomain>
      $ sudo hostnamectl set-hostname --transient <hostname.hostdomain>
    4. Edit the “/etc/hosts” file:
      127.0.0.1   hostname.hostdomain hostname localhost localhost.localdomain localhost4 localhost4.localdomain4
    5. Registering the system at RedHat using valid RedHat account
      $ subscription-manager register --username RHEL_USER --password RHEL_PASS --auto-attach
    6. Enabling required packages
      $ sudo subscription-manager repos --disable=*
      $ sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-openstack-9-director-rpms --enable=rhel-7-server-rh-common-rpms
      $ sudo yum update -y
    7. Reboot the system

     

    Attention: After the reboot, all subsequent steps should be performed as the stack user.

     

    Cloud deployment

    Installing the Undercloud

    1. Log in to the RedHat VM as root and become the stack user
      # su - stack
    2. Creating directories for the images and templates
      $ mkdir ~/images
      $ mkdir ~/templates
    3. Download Mellanox OFED 4.0-1.0.1.0 ISO image and save it to /tmp folder on your RedHat VM
      $ sudo yum install wget
      $ cd /tmp
      $ wget http://www.mellanox.com/downloads/ofed/MLNX_OFED-4.0-1.0.1.0/MLNX_OFED_LINUX-4.0-1.0.1.0-rhel7.3-x86_64.iso
      Alternatively, you can use a web browser to download Mellanox OFED from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers

      Figure 8: Mellanox OFED version to be downloaded

    4. Installing the Director packages
      $ sudo yum install -y python-tripleoclient
    5. Create Undercloud configuration file from template

      $ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
    6. Edit the parameters according to example below
      $ vi ~/undercloud.conf
      For the minimal configuration of undercloud we can leave all the commented lines as is and just change the “local_interface” parameter.
      See example below:

      [DEFAULT]
      local_ip                   = 172.21.1.254/24
      undercloud_public_vip      = 172.21.1.253
      undercloud_admin_vip       = 172.21.1.252
      local_interface            = eth0
      masquerade_network         = 172.21.1.0/24
      dhcp_start                 = 172.21.1.150
      dhcp_end                   = 172.21.1.199
      network_cidr               = 172.21.1.0/24
      network_gateway            = 172.21.1.254
      discovery_interface        = br-ctlplane
      discovery_iprange          = 172.21.1.100,172.21.1.149
      undercloud_debug           = true


      The basic template contains the following parameters:
      1. local_ip
        The IP address defined for the director’s Provisioning NIC. This is also the IP address the director uses for its DHCP and PXE boot services. Leave this value as the default 192.0.2.1/24 unless you are using a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.
      2. network_gateway
        The gateway for the Overcloud instances. This is the Undercloud host, which forwards traffic to the External network. Leave this as the default 192.0.2.1 unless you are either using a different IP address for the director or want to directly use an external gateway.
      3. undercloud_public_vip
        The IP address defined for the director’s Public API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges. For example, 192.0.2.2. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
      4. undercloud_admin_vip
        The IP address defined for the director’s Admin API. Use an IP address on the Provisioning network that does not conflict with any other IP addresses or address ranges. For example, 192.0.2.3. The director configuration attaches this IP address to its software bridge as a routed IP address, which uses the /32 netmask.
      5. undercloud_service_certificate
        The location and filename of the certificate for OpenStack SSL communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise generate your own self-signed certificate using the guidelines in Appendix A, SSL/TLS Certificate Configuration. These guidelines also contain instructions on setting the SELinux context for your certificate, whether self-signed or from an authority.
      6. local_interface
        The chosen interface for the director’s Provisioning NIC. This is also the device the director uses for its DHCP and PXE boot services. Change this value to your chosen device.
      7. network_cidr
        The network that the director uses to manage Overcloud instances. This is the Provisioning network. Leave this as the default 192.0.2.0/24 unless you are using a different subnet for the Provisioning network.
      8. masquerade_network
        Defines the network that will masquerade for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that it has external access through the director. Leave this as the default (192.0.2.0/24) unless you are using a different subnet for the Provisioning network.
      9. dhcp_start; dhcp_end
        The start and end of the DHCP allocation range for Overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.
      10. inspection_interface
        The bridge the director uses for node introspection. This is custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
      11. inspection_iprange
        A range of IP address that the director’s introspection service uses during the PXE boot and provisioning process. Use comma-separated values to define the start and end of this range. For example, 192.0.2.100,192.0.2.120. Make sure this range contains enough IP addresses for your nodes and does not conflict with the range for dhcp_start and dhcp_end.
    7. Installing the undercloud
      $ openstack undercloud install
      Import the parameters and variables needed to use the undercloud command line tools
      $ source ~/stackrc

      Preparing and configuring the Overcloud installation.

      1. Obtaining the overcloud images and uploading them
        $ sudo yum install rhosp-director-images rhosp-director-images-ipa
        $ cd ~/images
        $ tar -xvf /usr/share/rhosp-director-images/overcloud-full-latest-9.0.tar
        $ tar -xvf /usr/share/rhosp-director-images/ironic-python-agent-latest-9.0.tar
        $ openstack overcloud image upload --image-path ~/images/
      2. Setting a nameserver on the undercloud neutron subnet
        $ neutron subnet-update `neutron subnet-list | grep '|'| tail -1 |  cut -d'|' -f2` --dns-nameserver 8.8.8.8
      3. Obtaining  the basic TripleO heat template
        $ cp -r /usr/share/openstack-tripleo-heat-templates ~/templates
      4. Registering ironic nodes
        Prepare the json file ~/instackenv.json to be like example below and include all cloud nodes:

        {
            "nodes":[
                {
                    "mac":[
                        "bb:bb:bb:bb:bb:bb"
                    ],
                    "cpu":"4",
                    "memory":"6144",
                    "disk":"40",
                    "arch":"x86_64",
                    "pm_type":"pxe_ipmitool",
                    "pm_user":"ADMIN",
                    "pm_password":"ADMIN",
                    "pm_addr":"192.0.2.205"
                },
                {
                    "mac":[
                        "cc:cc:cc:cc:cc:cc"
                    ],
                    "cpu":"4",
                    "memory":"6144",
                    "disk":"40",
                    "arch":"x86_64",
                    "pm_type":"pxe_ipmitool",
                    "pm_user":"ADMIN",
                    "pm_password":"ADMIN",
                    "pm_addr":"192.0.2.206"
                }
            ]
        }

      5. Import the json file to the director and start the introspection process
        This process turns the servers on, inspects the hardware, and turns them off.
        It can take 10-30 minutes depending on the number of nodes and their performance.

        $ openstack baremetal import --json ~/instackenv.json

        $ openstack baremetal configure boot

        $ openstack baremetal introspection bulk start

      6. To see the list of provisioned nodes, use the command:

        $ ironic node-list


        Figure 9: List of nodes, ready for deployment
        If the state of the nodes is the same as in the screenshot, they are ready to proceed:

        • Power state should be Power off

        • Provisioning state should be available
        • Maintenance should be false
      7. Inspect nodes' hardware:
        1. Identify nodes one by one using command

          $ ironic node-show <UUID>

          Figure 10: Detailed node's info
          Nodes can be identified by IPMI IP address

           

        2. Assign roles to the nodes
          First, identify the UUID of the desired server by its IPMI IP address
          $ ironic node-list --fields uuid  driver_info | grep 10.7.214.205 |awk '{print $2}'

          7dcf575e-b18c-4bab-b259-fb936b3fcfae

          Then assign the required role profile to the node using its UUID

          $ ironic node-update 7dcf575e-b18c-4bab-b259-fb936b3fcfae add properties/capabilities='profile:PROFILE,boot_option:local'

          Profiles used in our deployment :

          • compute

          • control

          • block-storage

        3. Validate that the assignment completed normally
          $ openstack overcloud profiles list
        4. Define the root disk for nodes
          $ mkdir ~/swift-data
          $ cd ~/swift-data
          $ export IRONIC_DISCOVERD_PASSWORD=`sudo grep admin_password /etc/ironic-inspector/inspector.conf | egrep -v '^#'  | awk '{print $NF}' | cut -d'=' -f2 | cut -d'=' -f2`
          $ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do swift -U service:ironic -K $IRONIC_DISCOVERD_PASSWORD download ironic-inspector inspector_data-$node; done
          $ for node in $(ironic node-list | awk '!/UUID/ {print $2}'); do echo $node; serial=`cat inspector_data-$node | jq '.inventory.disks' | awk '/"serial": /{print $2}' | head -1`; ironic node-update $node add properties/root_device='{"serial": '$serial'}'; done
      8. Installation of the Mellanox Plugin
        Download the plugin from the Mellanox OpenStack plugins repository and save it to the home folder:
        $ cd ~
        $ wget http://bgate.mellanox.com/openstack/openstack_plugins/red_hat_director_plugins/9.0/plugins/mellanox-rhel-osp-1-1.1.0/mellanox-rhel-osp-1-1.1.0.tar.gz
        Unpack the downloaded package into the stack user's home directory:
        $ tar -zxvf mellanox-rhel-osp-1-1.1.0.tar.gz
        The package will be extracted into the mellanox-rhel-osp folder under the home directory.
      9. Check the available interfaces and their drivers on all registered ironic nodes
        As a result, the ~/mellanox-rhel-osp/deployment_sources/environment_conf/interfaces_drivers.yaml file will be created with the network interface details.
        $ cd ~/mellanox-rhel-osp/deployment_sources/
        $ ./summarize_interfaces.py --path_to_swift_data ~/swift-data/
        This step is needed to summarize the interface names and their drivers. It is also required for interface renaming in case the nodes are not homogeneous.
      10. Edit the config.yaml file with the parameters and flavor required to create the env template file (~/mellanox-rhel-osp/deployment_sources/environment_conf/config.yaml). This is the main file to configure before deploying TripleO over Mellanox NICs. After configuring this file, the basic TripleO configuration file will be generated in this directory.
        You can use this file or further customize it before starting the deployment.
        Note: Make sure to use the correct interface name for the iSER configuration. Consult the ~/mellanox-rhel-osp/deployment_sources/environment_conf/interfaces_drivers.yaml file.
        $ vi ~/mellanox-rhel-osp/deployment_sources/environment_conf/config.yaml
        Sriov:
            sriov_enable: false
            number_of_vfs:
            sriov_interface: 'ens2f0'

        Neutron:
            physnet_vlan: 'default_physnet'
            physnet_vlan_range: '1:1'             # VXLAN will run inside VLAN1
            network_type: 'vxlan'

        Iser:
            iser_enable: true
            storage_network_port: 'ens2f1'

        Neo:
            neo_enable: false
            neo_ip:
            neo_username: 'admin'
            neo_password: 123456

        InterfacesValidation: true
      11. Generate the node's network topology for os-net-config based on the configuration chosen in config.yaml (run it inside the deployment_sources/ folder).
        This script generates the <role>.yaml files under the network directory.
        These files are referenced by network_env.yaml, based on the neutron network type in config.yaml, and are placed in <stack homedir>/mellanox-rhel-osp/deployment_sources/network.
        $ python ./create_network_conf.py
      12. Generate environment yaml file to be used in the overcloud deployment:
        $ python ~/mellanox-rhel-osp/deployment_sources/create_conf_template.py
      13. Modify ~/mellanox-rhel-osp/deployment_sources/network/network_env.yaml to match your environment's network settings and any new interface names from the interfaces_drivers yaml file, if added.
        It contains the locations of all configuration files required to proceed with the deployment.

        In the example below, adjust the values to match your environment. You may need to change other parameters as well.

        $ vi ~/mellanox-rhel-osp/deployment_sources/network/network_env.yaml
        . . .

        parameter_defaults:

          # Required parameters
          ProvisioningInterface: eno1
          ExternalInterface: eno2
          StorageInterface: ens2f1
          TenantInterface: ens2f0
          NeutronExternalNetworkBridge: "br-ex"
          # Just provide the CIDR for the required networks (here: TenantNetCidr, StorageNetCidr, ExternalNetCidr)
          TenantNetCidr: 172.17.0.0/24
          StorageNetCidr: 172.18.0.0/24
          ExternalNetCidr: 10.7.208.0/24

          # Just provide the allocation pools for the required networks
          TenantAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
          StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
          ExternalAllocationPools: [{'start': '10.7.208.127', 'end': '10.7.208.135'}]

          ExternalInterfaceDefaultRoute: 10.7.208.1 # The gateway of the external network
          ControlPlaneDefaultRoute: 172.21.1.254  # The internal IP of the undercloud
          EC2MetadataIp: 172.21.1.254  # The IP of the undercloud
          DnsServers: ["8.8.8.8","8.8.4.4"]
          NtpServer: time.nist.gov
      14. Prepare the overcloud image from the folder ~/mellanox-rhel-osp/deployment_sources
        [stack@vm deployment_sources]$ python prepare_overcloud_image.py
        --iso_ofed_path ISO_OFED_PATH
        --overcloud_images_path OVERCLOUD_IMAGES_PATH
        --mlnx_package_path MLNX_PACKAGE_PATH
        --root_password ROOT_PASSWORD
        --redhat_account_username REDHAT_ACCOUNT_USERNAME
        --redhat_account_password REDHAT_ACCOUNT_PASSWORD

        Example of the command:

        $ ./prepare_overcloud_image.py --iso_ofed_path /tmp/MLNX_OFED_LINUX-4.0-1.0.1.0-rhel7.3-x86_64.iso --overcloud_images_path ~/images/ --root_password <pass> --redhat_account_username <name> --redhat_account_password <pass>
      15. After building the overcloud image, the introspection should be performed again.
        Execute the commands listed below:
        $ openstack baremetal configure boot
        $ openstack baremetal introspection bulk start

      Deploying the Overcloud

      Find and edit the deploy.sh file in the folder ~/mellanox-rhel-osp/deployment_sources:

      $ cd ~/mellanox-rhel-osp/deployment_sources
      $ vi deploy.sh

      Adjust the control-scale, compute-scale and block-storage-scale parameters to match your number of nodes.
      Change the "-e ./environment_conf/env-template.yaml \" line to "-e ./environment_conf/env.yaml \".

      #!/bin/bash
      . ~/stackrc
      rm -fv ~/.ssh/known_hosts*

      # Please run this script inside the package folder
      openstack \
        overcloud deploy \
        --stack ovcloud \
        --control-scale         3 \
        --compute-scale         3 \
        --block-storage-scale    1 \
        --control-flavor control \
        --compute-flavor compute \
        --block-storage-flavor block-storage \
        --ntp-server time.nist.gov \
        --templates    \
        -e ./environment_conf/env.yaml \
        -e ./network/network_env.yaml \
        -e ./environment_conf/multiple_backends.yaml \
      $@

      Run the script:

      $ ./deploy.sh
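
      The deployment can take a long time. To monitor its progress from another terminal on the Director VM (a sketch assuming the undercloud credentials in ~/stackrc and the stack name ovcloud used in deploy.sh), you can watch the Heat stack and the bare-metal nodes:

      $ source ~/stackrc
      # Overall deployment status (CREATE_IN_PROGRESS / CREATE_COMPLETE / CREATE_FAILED)
      $ heat stack-list
      # Per-node provisioning state
      $ ironic node-list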

       

      Post-deployment network configuration (optional)

      OpenStack is now deployed and fully operational, but no virtual networks have been created yet.

      You can create networks manually from Horizon or use the CLI steps below:

       

      For the Horizon option, log in to Horizon as the admin user with the password stored as the OS_PASSWORD variable in the /deploy_directory/mellanox-rhel-osp/deployment_sources/ovcloudrc file on the Director VM:

      $ grep OS_PASSWORD  /deploy_directory/mellanox-rhel-osp/deployment_sources/ovcloudrc  | cut -d"=" -f 2

       

      For the CLI option, log in to the Director VM as the stack user and perform the steps below:

      1. Defining environment

        Note: Do not forget to update these variables with values appropriate for your network.

        $ source ~/mellanox-rhel-osp/deployment_sources/ovcloudrc
        $ TENANT="admin"
        $ TENANT_ID=$(openstack project list | awk "/\ $TENANT\ / { print \$2 }")
        $ TENANT_NET_CIDR="192.168.1.0/24"
        $ TENANT_NET_GW="192.168.1.1"
        $ TENANT_NET_CIDR_EXT="10.7.208.0/24"
        $ IP_EXT_FIRST=10.7.208.136
        $ IP_EXT_LAST=10.7.208.148
      2. Create the networks with VXLAN

        1. Create internal tenant network

          $ neutron net-create --tenant-id $TENANT_ID --provider:network_type vxlan "$TENANT-net"

          $ TENANT_NET_ID=$(neutron net-list|grep "$TENANT"|grep -v -i dpdk|awk '{print $2}')

        2. Create tenant network for Floating IPs
          $ neutron net-create --tenant-id $TENANT_ID --provider:network_type vxlan --router:external "$TENANT-net-ext"
          $ TENANT_NET_EXT_ID=$(neutron net-list|grep "$TENANT"|grep -i ext|awk '{print $2}')
      3. Create the subnet and get the ID

        1. Create internal tenant subnet

          $ neutron subnet-create --name "$TENANT-subnet" --tenant-id $TENANT_ID "$TENANT-net" $TENANT_NET_CIDR
          $ TENANT_SUBNET_ID=$(neutron subnet-list|grep "$TENANT" |awk '{print $2}')

        2. Create tenant subnet for Floating IPs

          $ neutron subnet-create --name "$TENANT-subnet-ext" --tenant-id $TENANT_ID "$TENANT-net-ext" --allocation_pool start=$IP_EXT_FIRST,end=$IP_EXT_LAST $TENANT_NET_CIDR_EXT

          $ TENANT_SUBNET_EXT_ID=$(neutron subnet-list|grep "$TENANT" |awk '{print $2}')
      4. Create an HA Router and get the ID

        $ neutron router-create --ha True --tenant-id $TENANT_ID "$TENANT-net-to-online_live-extnet"

        $ ROUTER_ID=$(neutron router-list  -f csv -F id -F name | grep "$TENANT-net-to-online_live-extnet" | cut -f1 -d',' | tr -d '"')

      5. Set the gw for the new router
        $ neutron router-gateway-set "$TENANT-net-to-online_live-extnet" $TENANT-net-ext
      6. Add a new interface in the main router
        $ neutron router-interface-add $ROUTER_ID $TENANT_SUBNET_ID

       

      To automate the process, you can create a script, make it executable, and run it:

      1. Create a script
        $ vi ~/create_networks.sh
      2. Copy and paste the commands below into that script

        #!/bin/bash
        source ~/mellanox-rhel-osp/deployment_sources/ovcloudrc

         

        TENANT="admin"
        TENANT_ID=$(openstack project list | awk "/\ $TENANT\ / { print \$2 }")
        TENANT_NET_CIDR="192.168.1.0/24"
        TENANT_NET_GW="192.168.1.1"
        TENANT_NET_CIDR_EXT="10.7.208.0/24"
        IP_EXT_FIRST=10.7.208.136
        IP_EXT_LAST=10.7.208.148

         

        # Create the networks with VXLAN

         

        # Create internal tenant network

        neutron net-create --tenant-id $TENANT_ID --provider:network_type vxlan "$TENANT-net"

        TENANT_NET_ID=$(neutron net-list|grep "$TENANT"|grep -v -i dpdk|awk '{print $2}')

         

        # Create tenant network for Floating IPs

        neutron net-create --tenant-id $TENANT_ID --provider:network_type vxlan --router:external "$TENANT-net-ext"

        TENANT_NET_EXT_ID=$(neutron net-list|grep "$TENANT"|grep -i ext|awk '{print $2}')

         

        # Create the subnet and get the ID

        # Create internal tenant subnet

        neutron subnet-create --name "$TENANT-subnet" --tenant-id $TENANT_ID "$TENANT-net" $TENANT_NET_CIDR

        TENANT_SUBNET_ID=$(neutron subnet-list|grep "$TENANT" |awk '{print $2}')

         

        # Create tenant subnet for Floating IPs

        neutron subnet-create --name "$TENANT-subnet-ext" --tenant-id $TENANT_ID "$TENANT-net-ext" --allocation_pool start=$IP_EXT_FIRST,end=$IP_EXT_LAST $TENANT_NET_CIDR_EXT

        TENANT_SUBNET_EXT_ID=$(neutron subnet-list|grep "$TENANT" |awk '{print $2}')

         

        # Create an HA Router and get the ID

        neutron router-create --ha True --tenant-id $TENANT_ID "$TENANT-net-to-online_live-extnet"

        ROUTER_ID=$(neutron router-list  -f csv -F id -F name | grep "$TENANT-net-to-online_live-extnet" | cut -f1 -d',' | tr -d '"')

         

        # Set the gw for the new router

        neutron router-gateway-set "$TENANT-net-to-online_live-extnet" $TENANT-net-ext

         

        # Add a new interface in the main router

        neutron router-interface-add $ROUTER_ID $TENANT_SUBNET_ID

      3. Make it executable

        $ sudo chmod +x ~/create_networks.sh
      4. Run it
        $ sh ~/create_networks.sh

       

      Now your deployment is fully complete and ready.

      For testing purposes, you can download a CirrOS image and boot a test VM from it, as sketched below.
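
      A minimal smoke test could look like the sketch below. It assumes the overcloud credentials in ovcloudrc, the admin-net network created by the script above, Internet access to download the image, and an existing m1.tiny flavor (create one if it does not exist); the CirrOS version is only an example:

      $ source ~/mellanox-rhel-osp/deployment_sources/ovcloudrc
      # Download a small test image
      $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
      # Upload it to the image service
      $ glance image-create --name cirros --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img
      # Boot a test VM on the internal tenant network created earlier
      $ NET_ID=$(neutron net-list | grep admin-net | grep -v ext | awk '{print $2}')
      $ nova boot --flavor m1.tiny --image cirros --nic net-id=$NET_ID test-vm
      $ nova list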

       

      Known issues

      The following are the known limitations of the RHEL-OSP 9 plugin:

      Table 5: Known issues

      # | Issue                                                                              | Workaround
      1 | The deploy.sh example script generates the password files in the deploy directory | Use your own deployment command or the generated example from the deploy directory