This post discusses Mellanox NEO support for automatic switch provisioning for Nutanix Appliance.
The Mellanox NEO/Nutanix Prism plug-in is a software add-on that offers enhanced functionality to Mellanox and Nutanix customers.
Nutanix Prism offers enhanced network capabilities, including a set of APIs to use Prism's accumulated VM data. Mellanox uses these new APIs to develop an integrated solution between Nutanix Prism and Mellanox NEO, which adds network automation for Nutanix Virtual Machine life-cycle management.
This integration addresses the most common use case of the Nutanix hyperconverged cloud: VLAN auto-provisioning on Mellanox switches upon Nutanix VM creation, migration and deletion.
The purpose of the Mellanox NEO/Nutanix Prism plug-in is to keep a Nutanix cluster synchronized with its Mellanox switches through the Mellanox NEO platform. Using this plug-in, users can start a service that listens to the Nutanix cluster's events and has the infrastructure VLANs provisioned transparently. The plug-in can be installed and run on any RHEL/CentOS server (v6 or above) that has connectivity to both Nutanix Prism and the NEO API (including the NEO server itself).
Nutanix Prism Network Topology Example
Definitions, Acronyms and Abbreviations
- Nutanix Cluster: A group of nodes that have Nutanix AOS installed.
- Nutanix Node: A hypervisor server that has Nutanix AOS installed.
- CVM: Controller Virtual Machine. Each cluster node has a CVM; commands executed in the CVM take effect on that node.
- Prism: The web interface of the Nutanix cluster, used for network configuration, VM creation, etc. It can be accessed via any CVM IP address: https://<CVM_IP>:9440/console.
- Nutanix AOS: Nutanix Acropolis Operating System.
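Prism exposes the same VM data over a REST API, which is what the NEO plug-in consumes. A hedged sketch of querying the VM inventory with curl; `<CVM_IP>` and the credentials are placeholders, and the v2.0 path assumes a recent AOS release:

```shell
# List VMs via the Prism REST API (self-signed certificate, hence -k).
# <CVM_IP>, admin and password are placeholders for your cluster details.
curl -k -u admin:password \
    "https://<CVM_IP>:9440/PrismGateway/services/rest/v2.0/vms"
```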
| Feature | Plug-in Version | NEO Version |
|---|---|---|
| Subscribe to Nutanix Prism Event Notifications API | 1-1.0.0 | 1.8.0 |
| NEO HA support | 1-1.2.0 | 1.8.0 |
| Auto-discovery of Nutanix hosts and Mellanox switches | 1-1.2.0 | 1.8.0 |
| Auto-sync and switch auto-provisioning for: | | |
| - VM (live / non-live) migration | 1-1.0.0 | 1.8.0 |
| - Service start | 1-1.0.0 | 1.8.0 |
| Auto-sync and switch auto-provisioning for the following port modes: | | |
| Auto-sync and switch auto-provisioning for the following Mellanox switch systems: | | |
| - Spectrum-based switch systems | 1-1.0.0 | 1.8.0 |
| - SwitchX-2-based switch systems | 1-1.0.0 | 1.8.0 |
| - Cumulus OS (v3.2.0+) | 1-1.0.0 | 1.9.0 |
Make sure the following requirements are fulfilled for the Nutanix Prism plug-in to work properly:
- Nutanix Appliance v220.127.116.11 or above installed on the cluster nodes, as documented on the Nutanix website
- NEO v1.8 or above installed in the network
- SNMP and LLDP enabled on Mellanox switches
- Mellanox NEO/Nutanix Prism plug-in installed, configured and running
- All Nutanix nodes connected to the Mellanox switch
- IP connectivity between Prism and the NEO VM where the plug-in is installed
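The IP-connectivity prerequisites above can be spot-checked from the server that will host the plug-in before installation. A minimal sketch, assuming bash (for its /dev/tcp redirection) and the coreutils timeout command; the addresses in the commented examples are placeholders:

```shell
#!/usr/bin/env bash
# Probe a TCP endpoint and report whether it is reachable.
check_tcp() {
    local host="$1" port="$2"
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
    else
        echo "${host}:${port} unreachable"
    fi
}

# Examples (placeholders -- replace with your own addresses):
# check_tcp <Prism_virtual_IP> 9440   # Nutanix Prism web UI / API
# check_tcp <NEO_server_IP> 443       # Mellanox NEO API
```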
Cluster Nodes Configuration
1. Enable LLDP on the switches in the environment:
switch (config) # lldp
2. Add the switches to the Nutanix Prism web UI. Click the wrench symbol on the right -> Network switch.
The switch will be discovered and will appear in the switch table as follows:
3. Create the Nutanix cluster network using Prism. Click the wrench symbol on the right -> Network Configuration. Make sure to edit the new network and define the IP ranges if needed.
4. Assign a virtual IP to the Nutanix cluster. Click the wrench symbol on the right -> Cluster Details -> CLUSTER VIRTUAL IP ADDRESS.
For further information about Nutanix cluster network configuration, please refer to Nutanix Connect Blog and documentation.
Note: When there is more than one connection to the same switch, configure the LAG as follows:
1. Configure Link Aggregation Control Protocol (LACP) on the switch. See HowTo Configure LACP on Mellanox Switches.
2. Add the LACP configuration as follows:
i. For LACP bond type:
# ovs-vsctl set port br0-up lacp=active
ii. For static bond type (NO LACP):
# ovs-vsctl set port br0-up lacp=off
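After applying either bond type, the negotiated state can be checked on the host with standard Open vSwitch tools (a sketch; `br0-up` is the bond name used above):

```shell
# Show bond member state and, for LACP bonds, negotiation status
ovs-appctl bond/show br0-up
ovs-appctl lacp/show br0-up
```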
NEO Virtual Machine Configuration
Add the Nutanix cluster connected switches to the NEO devices using NEO UI. See HowTo Add Network Element in Mellanox NEO.
Installing Nutanix Prism Plug-in
Download the plug-in from the MyMellanox portal, and install the Mellanox NEO/Nutanix Prism plug-in RPM. Run:
# yum install nutanix-neo-1-1.2.0.x86_64.rpm
Nutanix Prism Plug-in Usage
1. Fill in the required details in the plug-in configuration file: /opt/nutanix-neo/config/nutanix-neo-plugin.cfg:
# Section: NEO server info
# username: (required) NEO username
# ip: (required) NEO server IP or NEO virtual IP in case
# of NEO HA
# password: (required) NEO password
# session_timeout: (required) timeout of user session (default 86400)
# timeout: (required) timeout of connection (default 10)
# auto_discovery: (required) auto add switches that are discovered in
# Nutanix cluster to NEO. Should be boolean
# add_host_credentials: (optional) set Nutanix host's username and password
# in NEO
# host_ssh_username: (optional) SSH login username for Nutanix hosts, required if
# add_host_credentials is True
# host_ssh_password: (optional) SSH login password for Nutanix hosts, required if
# add_host_credentials is True
username = admin
password = 123456
session_timeout = 86400
timeout = 10
auto_discovery = true
add_host_credentials = true
host_ssh_username = root
host_ssh_password = nutanix/4u
# Section: Nutanix cluster info
# username: (required) Nutanix cluster username
# ip: (required) CVM IP or Virtual IP of Nutanix cluster
# password: (required) Nutanix cluster user password
# requests_retries: (required) maximum API requests retries
username = admin
requests_retries = 40
# Section: Server where plugin installed
# ip: (optional) IP of the server from the same subnet as the Nutanix cluster.
#     if the ip is left empty, the plugin will use the IP of the server's
#     interface that is connected to the same network as the Nutanix cluster.
# port: (required) unused TCP port on the server, used to receive events
#     from the Nutanix cluster.
#     if the port is in use, the plugin will kill the process that uses
#     the port and reclaim it.
port = 8080
# Section: plugin common configuration
# logging_level: (optional) the severity level of logging to the log file,
# can be one of [DEBUG, INFO, WARNING, ERROR], default value is 'INFO'
logging_level = DEBUG
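Before starting the service, it can help to verify that the required keys were actually filled in. A minimal sketch, assuming the configuration path above; the key list mirrors the fields marked (required) in the file's comments:

```shell
#!/usr/bin/env bash
# Report any required plug-in config key that is missing or empty.
check_cfg() {
    local cfg="$1" key missing=0
    for key in username ip password session_timeout timeout \
               auto_discovery requests_retries port; do
        if ! grep -Eq "^${key}[[:space:]]*=[[:space:]]*[^[:space:]]" "$cfg"; then
            echo "missing or empty: ${key}"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "config OK"
    fi
}

# Usage against the default install path:
# check_cfg /opt/nutanix-neo/config/nutanix-neo-plugin.cfg
```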
2. Start the service after installing the plug-in:
# service nutanix-neo start
3. The service will now apply the required VLAN changes, assigning interfaces to VLANs through the NEO APIs.
If two VMs are created on the same network, ping between them must work regardless of which Nutanix node in the cluster hosts each VM. In case of VM migration, the plug-in applies the changes required on the switch to maintain connectivity.
For example, the user creates a VM on the Nutanix cluster over a new VLAN ID 99 that did not previously exist on the switch. The Prism web UI shows the following output:
The nutanix-neo service sends API calls to NEO to trigger a job that updates the switch side, adding VLAN ID 99 to the relevant Nutanix host ports, as follows:
The VM can then be migrated to an automatically chosen host through the Nutanix Prism web UI, as follows:
The nutanix-neo service sends API calls to NEO to trigger jobs that update the switch side with the new changes (adding the VLAN ID to the new host's ports / removing the VLAN ID from the old host's ports), as follows:
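The result can also be verified on the switch itself; on Mellanox Onyx-based systems, the VLAN table shows the new membership (a sketch; VLAN 99 per the example above):

```shell
switch # show vlan
```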
Note: The nutanix-neo service detects running VMs only.
For troubleshooting, check the following log files: