NEO Plugin for Nutanix CE - Installation and Usage

Version 22

    This post discusses Mellanox NEO support for automatic switch provisioning for Nutanix Community Edition (CE).

     


    Definitions, Acronyms and Abbreviations

    • Nutanix Cluster: a cluster of nodes that have Nutanix OS installed.
    • Nutanix Node: a hypervisor server that has Nutanix OS installed.
    • CVM: Controller Virtual Machine. Each cluster node has a CVM; commands executed in the CVM take effect on that node.
    • Prism: the web interface of the Nutanix cluster, used for network configuration, VM creation, etc. It can be accessed using any CVM IP address: https://<CVM_IP>:9440/console.
    • Nutanix CE: Nutanix Community Edition.

     

    Nutanix-NEO Plugin

    The Nutanix-NEO plugin's purpose is to synchronize a Nutanix cluster with Mellanox switches using the Mellanox NEO platform. Using this plugin, a user can start a service that listens for Nutanix cluster events and has the infrastructure VLANs provisioned transparently. The plugin can be installed and run on any RHEL/CentOS server (version 6 or above) that has connectivity to both Nutanix Prism and the NEO API (including the NEO server itself).

     

    Nutanix Supported Topology on PRISM

     

    topology.PNG

     

    Key Features for the Plugin

    The plugin enables:

    1. Registering and listening to Nutanix cluster events
    2. Supporting auto sync and switch provisioning:
      1. In the following events:
        1. VM creation
        2. VM deletion
        3. VM (live / non-live) migration
        4. Periodically
        5. Upon service start
      2. For the following switch port modes:
        1. Eth
        2. LAG
        3. MLAG (single / multi switch)
      3. For the following switch systems:
        1. Spectrum (x86)
        2. SwitchX (PPC)
    3. Supporting a resiliency mechanism for handling NEO requests

     

    Requirements

    • A Nutanix cluster installed over a minimum of three physical servers, with the following aspects:
      • High-speed NICs (10GbE each) connected to Mellanox switches.
      • A minimum of 32GB RAM, in order to enable the V3 API in the Nutanix cluster.
      • Nutanix Asterix official edition or higher.
    • Nutanix OS supports the ConnectX-3 adapter family drivers. If another adapter is used, please install the appropriate drivers (Mellanox OFED or inbox).
    • NEO version 1.8 or above, installed in the network.
    • LLDP installed and enabled between the Nutanix cluster's nodes and the Mellanox switches.
    • SNMP and LLDP enabled on the Mellanox switches (see the example after this list).
    • Nutanix-NEO plugin installed, configured and running.
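
    For reference, LLDP and an SNMP read-only community can typically be enabled on a Mellanox Onyx/MLNX-OS switch from the CLI roughly as follows. This is a sketch only; the community string "public" is an assumption, and the exact commands may differ per switch OS version, so consult the switch documentation:

    switch > enable

    switch # configure terminal

    switch (config) # lldp

    switch (config) # snmp-server community public ro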

     

    Typical Configuration

     

    Prerequisites

    A Nutanix cluster should be available, with its nodes connected together via a Mellanox switch that has management connectivity. A private network on the Mellanox switch should hold all of the cluster nodes' ports, and Nutanix-created VMs should be booted over that network.

     

    Cluster Nodes

    1. Install MLNX_OFED on cluster nodes (Optional).

    Nutanix OS includes drivers for the ConnectX-3 adapter family. For cluster nodes that use Mellanox NICs from a newer family than ConnectX-3, please install the relevant MLNX_OFED drivers.

     

    2. Identify the private network.

    Identify a new network range and netmask, for example 10.0.0.0/24.

     

    3. Create a new bridge br1 with a bond bond1 in all cluster host nodes, bonding both NICs connected to the Mellanox switch (eth1, eth2):

    # ovs-vsctl add-br br1

    # ovs-vsctl add-bond br1 bond1 eth1 eth2

    # ifconfig br1 10.0.0.100 netmask 255.255.255.0 up
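
    To confirm the bridge and bond were created as expected, the resulting Open vSwitch configuration can be listed with:

    # ovs-vsctl show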

     

    4. To validate the Mellanox switch network connections, ping between the cluster's nodes over the private network, using the IPs assigned to their bridges.
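
    For example, from the node whose bridge was assigned 10.0.0.100, ping another node's bridge IP (10.0.0.101 here is a placeholder for a peer node's address):

    # ping -c 3 10.0.0.101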

     

    5. Install LLDP on the hosts. To make SNMP/LLDP work in Nutanix CE, follow these instructions:

    • Enable LLDP on the switches of the environment.
    • Add the switches to the Nutanix web UI (Prism) [wrench symbol on the right -> Network switch ].

    add switch.PNG

    • Make sure LLDP is installed and running on all hosts. Check whether the lldpctl command is available; if it is not, install the LLDP package using the instructions described in the following link:

    # yum install $link
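
    Once installed, a quick check that the LLDP daemon is running and sees the switch as a neighbor (assuming the lldpd package, which provides the lldpctl tool):

    # service lldpd status

    # lldpctl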

     

    6. Create the Nutanix cluster's network. On any CVM, execute the following network creation command:

    # acli net.create br1_vlan99 vswitch_name=br1 vlan=99
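
    To confirm that the network was created, list the cluster networks from the same CVM (net.list is a standard acli subcommand):

    # acli net.list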

     

    Using Prism [wrench symbol on the right -> Network Configuration ], please edit the new network and identify the IP ranges.

     

    add network.PNG

     

    Note:

    For MLAG configurations, i.e. when more than one switch is configured with MLAG, add the following:

    • Configure MLAG on the switches, with LACP configuration.
    • At the bond creation stage, add the LACP configuration as follows:

    # ovs-vsctl add-bond br1 bond1 eth1 eth2 lacp=active bond_mode=balance-tcp
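
    To verify on the host that the LACP bond negotiated correctly, the bond state can be inspected with ovs-appctl:

    # ovs-appctl bond/show bond1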

    Nutanix-NEO VM

    Make sure the python-requests package is installed on your machine before installing the plugin.
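
    For example, on a RHEL/CentOS server the package can typically be installed with yum and verified from Python (the package name python-requests is the usual one, but it may vary by repository):

    # yum install python-requests

    # python -c "import requests; print(requests.__version__)"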

     

    Installation

    Download the plugin from here and install the Nutanix-NEO rpm:

     

    # rpm -ivh nutanix-neo-1-1.0.0.x86_64.rpm
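
    To confirm the package was installed, query the RPM database (the package name is assumed from the RPM file name above):

    # rpm -qa | grep -i nutanix-neo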

    Usage

    1. Fill in the required details in the plugin configuration file /opt/nutanix-neo/config/nutanix-neo-plugin.cfg:

    # Section: NEO server info

    # username:         (required) NEO username

    # ip:               (required) NEO server ip

    # password:         (required) NEO password

    # session_timeout:  (required) timeout of user session (default 86400)

    # timeout:          (required) timeout of connection (default 10)

    [NEO]

    username = admin

    ip = 10.209.39.237

    password = 123456

    session_timeout = 86400

    timeout = 10

     

    # Section: Nutanix cluster info

    # username: (required) Nutanix cluster username

    # cvm_ip:   (required) one of the cluster's CVMs ip

    # password: (required) Nutanix cluster user password

    [NUTANIX]

    username = admin

    cvm_ip = 10.209.25.50

    password = nutanixadmin

     

    # Section: Server where the plugin is installed

    # ip:   (optional) IP of the server, from the same subnet as the Nutanix cluster.

    #       If the ip is left empty, the plugin will use the IP of the server's

    #       interface that is connected to the same network as the Nutanix cluster.

    # port: (required) An unused TCP port on the server, used to receive events

    #       from the Nutanix cluster.

    #       If the port is in use, the plugin will kill the process that uses

    #       the port and reclaim it.

    [SERVER]

    ip =

    port = 8080

     

    2. Start the service after installing it:

     

    # service nutanix-neo start
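
    To confirm the service started successfully, check its status (assuming the init script supports the status action) and watch the console log (the log files are listed under Plugin Debug Files below):

    # service nutanix-neo status

    # tail -f /opt/nutanix-neo/console.log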

     

    3. Create VMs, either using the Nutanix CLI as explained here, or using the Prism UI at https://<CVM_IP>:9440/console/#login, as follows (a sketch of the equivalent acli flow is shown after the screenshot below):

    1. Make sure that you remove the CD-ROM from the disks section to allow VM migration.
    2. Add a disk with an image (the Mellanox image is the default).
    3. Add a NIC with an IP in the relevant range of the Nutanix cluster network.

    new vm.PNG
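
    The following is a hedged sketch of the same flow using acli from a CVM; the VM name test_vm, the image name mlnx_image and the memory/CPU sizes are placeholders, and exact parameter names may vary by AOS version:

    # acli vm.create test_vm memory=2G num_vcpus=2

    # acli vm.disk_create test_vm clone_from_image=mlnx_image

    # acli vm.nic_create test_vm network=br1_vlan99

    # acli vm.on test_vm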

     

    4. The Nutanix-NEO plugin is responsible for applying the required VLAN changes and interface-to-VLAN assignments through the NEO API. If two VMs are created over the same network, ping must work between the two VMs, regardless of which Nutanix node in the cluster hosts each of them.

     

    5. If a VM is being migrated, the plugin will apply the changes required on the switch to maintain connectivity.

     

    6. The service can detect only running VMs.

     

    7. In order for VMs to access the public network, the VMs should be created over a VLAN that is already configured on the switch side to have public connectivity. The following command can be run on any CVM:

    $ acli net.create net_name vlan=99 ip_config=10.0.0.1/24 vswitch_name=br1

     

    Plugin Debug Files

    • /opt/nutanix-neo/console.log
    • /opt/nutanix-neo/nutanix_neo_server.log

     

    Known Issues (for Nutanix Community Edition Only)

    1. "Link encap" is missing from the output of the ifconfig command on the Nutanix cluster hosts. To correct this problem, replace the host's ifconfig binary with the CVM's copy, which produces the desired output:

    # scp nutanix@192.168.5.2:/sbin/ifconfig /usr/sbin/

     

    2. On all hosts, rename all interfaces to follow the eth%D naming convention, replacing %D with a single digit.
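
    For example, a non-persistent rename of one interface with ip link (the original name ens2f0 is a placeholder; making the rename persistent typically requires a udev rule or an updated ifcfg file):

    # ip link set ens2f0 down

    # ip link set ens2f0 name eth1

    # ip link set eth1 up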