L3 network design with Mellanox NEO (DRAFT)

Version 2

    This post describes how to configure a Leaf-Spine topology with Mellanox NEO for small enterprise data centers.


    As applications constantly evolve, data centers must be flexible and future-proof to meet the demand for higher throughput and higher scalability.

    Traditionally, aggregation switching in data centers of Small Business (SB) enterprises has been based on modular switches.

    The Mellanox Virtual Modular Switch® solution (VMS) provides an ideal, optimized approach for aggregating racks.

    Configuration of the VMS infrastructure is performed by the VMS Wizard via Mellanox NEO UI.


    Mellanox components overview and benefits

    The Mellanox Spectrum switch family provides the most efficient network solution for the ever-increasing performance demands of data center applications.

    The Mellanox ConnectX network adapter family delivers industry-leading connectivity for performance-driven server and storage applications. These ConnectX adapter cards enable high bandwidth coupled with ultra-low latency for diverse applications and systems, resulting in faster access and real-time responses.

    Mellanox NEO™ is a powerful platform for managing computing networks. It enables data center operators to efficiently provision, monitor and operate the modern data center fabric.

    Mellanox NEO-Host is a powerful solution for orchestration and management of host networking. NEO-Host is integrated with the Mellanox NEO™ and can be deployed on Linux hosts managed by NEO.



    SB Data Center Overview

    SB data center networks have traditionally been built on a Leaf-Spine architecture.

    This architecture, a Leaf-Spine network shown in Figure 2 below, is designed to minimize the number of hops between hosts.

         Figure 2. Leaf-Spine network


    This option assumes that the ToR (leaf) switches are configured with MLAG (L2), while the two spine switches are configured with MLAG+MAGP (L3).

    Every leaf switch connects to every spine switch, ensuring that all leaf switches are only one hop away from one another in order to minimize latency and chances for bottlenecks in the network.
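    As a rough CLI-level sketch of this split, the spine pair would run MLAG plus MAGP along the following lines. All port numbers, VLAN IDs, IP addresses and the virtual MAC below are illustrative placeholders, not values from this guide:

```
## Hypothetical Mellanox Onyx sketch for one spine switch; the second spine
## mirrors it with the peer addresses swapped. All values are placeholders.

## Inter-Peer Link (IPL) for MLAG between the two spines
switch (config) # protocol mlag
switch (config) # interface port-channel 1
switch (config) # interface ethernet 1/31-1/32 channel-group 1 mode active
switch (config) # vlan 4094
switch (config) # interface vlan 4094 ip address 10.10.10.1 255.255.255.252
switch (config) # interface port-channel 1 ipl 1
switch (config) # interface vlan 4094 ipl 1 peer-address 10.10.10.2
switch (config) # mlag-vip MY-MLAG-VIP ip 192.168.10.100 /24 force
switch (config) # no mlag shutdown

## MAGP: both spines present one shared virtual gateway (L3) to the hosts
switch (config) # protocol magp
switch (config) # interface vlan 10 magp 1
switch (config interface vlan 10 magp 1) # ip virtual-router address 10.0.10.1
switch (config interface vlan 10 magp 1) # ip virtual-router mac-address 00:00:5E:00:01:01
```

    With MAGP, both spines answer for the same gateway IP and MAC address, so the failure of one spine does not force hosts to re-learn their gateway.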

    Solution Logical diagram

    The solution logical diagram is shown below.

    We used NEO management software to provision, configure and monitor our network fabric.


    Solution description

    Leaf-Spine topology: SN2700 as Spine switches and SN2410 as Leaf switches.

    Scales up to 672 nodes: 14 racks with 48 servers per rack at a 3:1 blocking ratio (48 x 25GbE = 1200Gb/s of server-facing bandwidth per rack versus 4 x 100GbE = 400Gb/s of uplink bandwidth).

    4 x 100GbE connections from each Leaf to the Spine switches (2 x 100GbE per Spine switch), using QSFP28 100GbE passive copper cables.

    Dedicated management port in each Mellanox switch connected to Switch Management Network.

    Single 10/25GbE connection from each server to the Leaf switch, using the appropriate cable. In our case, SFP28 25GbE passive copper cables are used.



    Mellanox NEO must have access to the switches via the management network in order to provision, operate and orchestrate the Ethernet fabric.


    In this document we do not cover connectivity to the corporate network.

    We strongly recommend using out-of-band management for Mellanox switches: use the dedicated management port on each switch.
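    On the switch side, the dedicated management port is typically all NEO needs to reach. A minimal Onyx sketch, assuming the out-of-band network assigns addresses via DHCP, would be:

```
## Assumption: the OOB management network hands out addresses via DHCP.
switch (config) # interface mgmt0
switch (config interface mgmt0) # dhcp
switch (config interface mgmt0) # no shutdown
switch (config interface mgmt0) # exit
switch (config) # show interfaces mgmt0    ## verify the acquired address
```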


    Network Configuration

    NEO Virtual Appliance

    NEO software is available for download as a CentOS/Red Hat installation package as well as virtual appliances for various virtualization platforms.

    NEO Virtual Appliance is available in various file formats compatible with leading virtualization platforms including VMware ESXi, Microsoft Hyper-V, Nutanix AHV, Red Hat Virtualization, IBM PowerKVM, and more.


    NEO Logical Schema

    The logical connectivity schema between all Mellanox software and hardware components is shown below.

    MOFED and NEO-Host are optional Mellanox software components for host installation.




    Downloading Mellanox NEO

    Mellanox NEO is available for download from the Mellanox NEO™ product page.



    You'll be asked to fill out a short form, and download instructions will be sent to your email.


    Installing Virtual Appliance

    You are welcome to read the Mellanox NEO Quick Start Guide for detailed installation instructions.

    This Quick Start Guide provides step-by-step instructions for the Mellanox NEO™ software installation and Virtual Appliance deployment.

    In our example we use a NEO Virtual Appliance installed on the Microsoft Hyper-V platform.

    Once the NEO VM is deployed, you can connect to the appliance console and use the following default credentials to log in to your VM:

    • Username: root
    • Password: 123456

    After login, the appliance information screen appears, as shown below.

    The MAC address assigned to the VM must have a DHCP record in order to get an IP address.



    Switch OS installation / configuration

    If you are not familiar with Mellanox switch software, please start with the HowTo Get Started with Mellanox switches guide.

    For more information please refer to the Mellanox Onyx User Manual located at support.mellanox.com or www.mellanox.com -> Switches

    Before starting to use the Mellanox switches, we recommend that you upgrade the switches to the latest Mellanox Onyx™ version.

    You can download it from myMellanox, the Mellanox support site. Please note that you need an active support subscription.
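    If you prefer the CLI over NEO's software-upgrade flow, an Onyx image upgrade looks roughly like this. The server, path and image file name are placeholders; substitute the image you downloaded from myMellanox:

```
## Placeholder server/path/image name - substitute your downloaded image.
switch (config) # image fetch scp://admin@fileserver/images/<onyx-image>.img
switch (config) # image install <onyx-image>.img
switch (config) # image boot next
switch (config) # configuration write
switch (config) # reload
```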


    Fabric configuration

    In this guide, the Ethernet switch fabric is configured as Layer 2 toward the servers, with Layer 3 (OSPF) routing between the Leaf and Spine switches.

    There are two ways to configure switches:

    • CLI-based configuration, done one by one on all switches
    • Wizard-based configuration with Mellanox NEO

           If you aren't familiar with Mellanox NEO, please refer to Mellanox NEO Solutions.


    Example configuration

    Our example shows a multi-rack configuration connecting two Leaf switches to both Spine switches.

    In our example, each Leaf switch is configured with a single VLAN.

    Each Leaf switch is connected by 4 cables to the Spine switches: 2 cables per Spine switch.
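    On the leaf side this amounts to one access VLAN for the 48 server-facing ports plus the four uplinks. A hedged Onyx sketch, where the VLAN ID is an illustrative placeholder:

```
## Hypothetical leaf configuration: servers on ports 1-48 in one access VLAN.
switch (config) # vlan 10
switch (config) # interface ethernet 1/1-1/48 switchport access vlan 10
## The uplinks toward the spines (ports 49-52 in this example) are configured
## separately, per the connectivity table below.
```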

    The cross-switch port connectivity table for our example is shown below.


    Interface type    Spine-1 switch    Spine-2 switch    Leaf-1 switch    Leaf-2 switch
    OSPF              Ports 1-2         -                 Ports 49-50      -
    OSPF              -                 Ports 1-2         Ports 51-52      -
    OSPF              Ports 3-4         -                 -                Ports 49-50
    OSPF              -                 Ports 3-4         -                Ports 51-52
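    For reference, turning one of these uplink ports into a routed OSPF interface in Onyx looks roughly as follows. The IP address and area number are placeholder assumptions, not values from this guide:

```
## Hypothetical routed-port OSPF configuration for one leaf uplink.
switch (config) # protocol ospf
switch (config) # router ospf 1
switch (config) # interface ethernet 1/49
switch (config interface ethernet 1/49) # no switchport force
switch (config interface ethernet 1/49) # ip address 10.1.1.1 255.255.255.252
switch (config interface ethernet 1/49) # ip ospf area 0.0.0.0
switch (config interface ethernet 1/49) # no shutdown
```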


    You can easily extend the fabric by adding additional Leaf switches to the OSPF area and connecting the corresponding interfaces between the Spine and Leaf switches.


    The following steps describe how to configure the Ethernet switch fabric using Mellanox NEO.

    1. Log in to the Mellanox NEO Web UI using the following default credentials:

    • Username: admin
    • Password: 123456

    The Mellanox NEO URL can be found on the appliance console information screen.

    2. Register devices.

         Register all switches via the "Add Devices" wizard in the Managed Elements tab.


    3. Configure the Mellanox Onyx switches for LLDP discovery.

         Run the "Enable Link Layer Discovery..." provisioning task from the Tasks tab on all switches.
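    Under the hood, this provisioning task boils down to enabling LLDP on each switch; the manual Onyx equivalent is approximately:

```
## Enable LLDP globally so NEO can discover the fabric topology.
switch (config) # lldp
## Verify that neighbors are seen on the inter-switch links.
switch (config) # show lldp remote
```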


    4. Create an OSPF area with the "L3 Network Provisioning" setup:

         Add the "L3 Network Provisioning" service in the Virtual Modular Switch section of the Services tab, and fill in the required fields to complete the wizard.


    5. After finishing the configuration, review the port status on each switch. All configured and connected ports should show a green light.