Single Rack HA Layer 2 MAGP network deployment with Ansible (DRAFT)

Version 27

    This is a configuration guide for a single-rack, high-availability Layer 2 (no LACP) network deployment with Ansible for small enterprise data centers.

    Introduction

    As applications constantly evolve, data centers must be flexible to meet the demand for higher throughput and higher scalability.

    Traditionally, aggregation switching in data centers of small business enterprises has been based on modular switches.

    Mellanox Virtual Modular Switch® (VMS) provides an ideal and optimized approach for aggregating racks.

     

    In this document, Ansible is used to configure a Layer 2 Ethernet network on two Top of Rack (TOR) switches, also known as Leaf switches.

    There are additional methods to perform the above configuration; see the Related References section.

     

    Related References

     

    Ansible and Mellanox

    The Mellanox network operating system (Mellanox ONYX, successor of MLNX-OS Ethernet) empowers you to shape and deploy your network according to your needs. Mellanox ONYX enables utilizing the full power of the silicon by running Docker containers and grants full access to the SDK API, adding flexibility to the network and the ability to continuously adapt to evolving markets and technical requirements.

    The integration of Mellanox ONYX with Ansible and Red Hat® Ansible® Tower, along with support for Zero Touch Provisioning, enables automation of provisioning, configuration and maintenance of switches in a reliable manner—allowing you to scale out automated deployments to thousands of switches and more. The Ansible framework separates Ansible Playbooks and modules from the Ansible engine, enabling you to implement Ansible automation across your entire facility, including network switches, servers, appliances, and more.

    The benefits of Ansible include:

    • Reduced operational expenses via automated provisioning and network maintenance
    • Reduced time to market
    • Minimal manual operation – eliminates configuration & provisioning errors
    • Frees up valuable IT & support resources to focus on mission-critical activities

     

    Mellanox components in the Architecture

    The Mellanox Spectrum switch family provides the most efficient network solution for the ever-increasing performance demands of data center applications.

    The Mellanox ConnectX network adapter family delivers industry-leading connectivity for performance-driven server and storage applications.

    These ConnectX adapter cards enable high bandwidth coupled with ultra-low latency for diverse applications and systems, resulting in faster access and real-time responses.

    The Mellanox LinkX cables and transceivers product family provides the industry’s most complete line of 10, 25, 40, 50, 100, 200, and 400Gb interconnect products for Cloud, Web 2.0, Enterprise, telco, and storage data center applications. They are often used to link top-of-rack switches downwards to servers, storage & appliances and upwards in switch-to-switch applications.

    In the following design, we demonstrate how to achieve a topology with two top-of-rack switches, utilizing Mellanox SN2xxx series switches.

    Multi-Active Gateway Protocol (MAGP) is designed to resolve the default gateway problem when a host is connected to a set of switch routers (SRs) via MLAG with no LACP control.

    MAGP is a Mellanox proprietary protocol that implements active-active VRRP.

    Mellanox Ansible Solutions: https://www.ansible.com/integrations/networks/mellanox

    Solution Benefits

    The combination of the Mellanox ONYX and Ansible solution provides:

    • Best ROI and faster time to market: with full automation, you can quickly deploy processes in your data center and provision new systems, network infrastructure and applications - all in a reliable manner that eliminates manual configuration errors.
    • Simplicity: with a standard, simple automation infrastructure, there’s no need to learn a new programming language nor undergo a learning curve.
    • Unified solution: Ansible is a standard automation tool for both servers and the network, enabling a single point of control.
    • Agentless solution: No new software to install.

     

    Setup Overview

    In the following setup, we only have an Access network layer.

    The Access layer is where the servers are connected (often located at the top of a rack).

    The architecture shown below is Layer 2 (L2) and is designed to minimize the number of hops between hosts.

     

    Setup Design

     

     

    Terminology

    • Multichassis Link Aggregation (MLAG) is the ability of two and sometimes more switches to act like a single switch when forming link bundles. This allows a host to uplink to two switches for physical diversity, while still only having a single bundle interface to manage.
    • IPL (Inter-Peer Link): This is the link between the two switches. The IPL is required; it is used for control traffic and may carry data traffic in case of port failures. Its most important role is carrying keepalives between the switches, so that each switch knows the other is still present. In addition, all MAC-sync messages, IGMP group syncs and other DB-sync messages are sent across this link. It is therefore critical to enable flow control on this link, so that even under heavy congestion the control traffic still gets through.
    • Multi-Active Gateway Protocol (MAGP) is designed to resolve the default gateway problem when a host is connected to a set of switch routers (SRs) via MLAG with no LACP control. The network functionality in that case requires that each SR is an active default gateway router for the host, reducing hops between the SRs and forwarding IP traffic directly to the L3 cloud regardless of which SR the traffic comes through.
    • What is the difference between MAGP and VRRP?
      MAGP is a Mellanox proprietary protocol that implements active-active VRRP.

     

    Bill of Materials - BOM

    Physical Network Connections

    See the physical connection diagrams below for each solution.

    Solution 1 (up to 18 servers)

    Solution description

    A Mellanox SN2010 is used as the TOR switch.

    This allows scaling up to 18 nodes in a rack, with a total of 4 x 100GbE uplink ports for the WAN/LAN connection.

    2 x 100GbE connections between the TOR switches using QSFP28 100GbE passive copper cables.

    A dedicated management port on each Mellanox switch connected to the Switch Management Network.

    A single 25GbE connection from each server to each TOR switch using SFP28 25GbE passive copper cables.

    Solution 2 (up to 48 servers)

    Solution description

    A Mellanox SN2410 is used as the TOR switch.

    This allows scaling up to 48 nodes in a rack, with a total of 4 x 100GbE uplink ports for the WAN/LAN connection.

    2 x 100GbE connections between the TOR switches using QSFP28 100GbE passive copper cables.

    A dedicated management port on each Mellanox switch connected to the Switch Management Network.

    A single 25GbE connection from each server to each TOR switch using SFP28 25GbE passive copper cables.

     

    The Ansible control machine must have access to the switches via the management network in order to provision, operate and orchestrate the Ethernet fabric.

     

    In this document we do not cover connectivity to the corporate network.

    We strongly recommend using out-of-band management for the Mellanox switches - use the dedicated management port on each switch.

     

    Fabric Management Logical Diagram

    We use Ansible to provision, configure and monitor our network fabric (see the switch management logical diagram below).

     

     

     

    Network Configuration

    Step 1: Install Ansible control machine

    Prerequisites

    To begin exploring Ansible as a means of managing our switches, we need to install the Ansible software on at least one physical or virtual machine.

    To proceed with the below installation process, you will need the following:

    • One Ubuntu 16.04 server with a sudo non-root user.

     

    Installing Ansible

    Ansible for Ubuntu is provided via a PPA (Personal Package Archive) repository. We can add the Ansible PPA by running the following command:

    sudo apt-add-repository ppa:ansible/ansible

    Press ENTER to accept the PPA addition.

     

    Next, we need to refresh our system's package index so that it is aware of the packages available in the PPA. Afterwards, we can install the software:

    sudo apt-get update
    sudo apt-get install ansible

    We now have all of the software required to administer our switches through Ansible.

    Ansible primarily communicates with the switches through SSH.

    Ensure that the switches are accessible from the Ansible server via OpenSSH. Run:

    ssh admin@your_switch_ip

    The default password for the admin user is admin.
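    Before pointing Ansible at the switches, it can save time to verify that SSH (TCP port 22) is reachable from the control machine. Below is a minimal Python sketch; the helper name and the example address are our own, not part of Ansible:

```python
import socket

def ssh_reachable(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    a quick pre-flight check before pointing Ansible at a switch."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a hypothetical switch management address:
# ssh_reachable("10.7.214.111")
```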

    Step 2: Configuring Ansible Hosts

    Ansible keeps track of all of the switches that it knows about through a "hosts" file. We need to set up this file first before we can begin to communicate with our other switches.

    Open the file with root privileges like this:

    sudo vim /etc/ansible/hosts

    As you can see, the hosts file contains many example configurations, none of which will actually work for us since those hosts are made up.

    The hosts file is fairly flexible and can be configured in a few different ways.

    In our scenario, we will populate the inventory with the two switches we are going to control with Ansible.

     

    Edit the inventory file with root privileges:

    sudo vim /etc/ansible/hosts

    Add the following entries for the solution you are deploying:

     

    Solution 1 (up to 18 servers):

    [mlnx_leafs]

    sn2010_leaf1_1 ansible_host=10.7.214.111

    sn2010_leaf1_2 ansible_host=10.7.214.110

     

    [mlnx_rack01]

    sn2010_leaf1_1

    sn2010_leaf1_2

     

    [mlnx_2010_leafs]

    sn2010_leaf1_1

    sn2010_leaf1_2

     

    [mlnx_switches:children]

    mlnx_leafs

    mlnx_rack01

     

    [network:children]

    mlnx_switches

     

    Solution 2 (up to 48 servers):

    [mlnx_leafs]

    sn2410_leaf1_1 ansible_host=10.7.214.111

    sn2410_leaf1_2 ansible_host=10.7.214.110

     

    [mlnx_rack01]

    sn2410_leaf1_1

    sn2410_leaf1_2

     

    [mlnx_2410_leafs]

    sn2410_leaf1_1

    sn2410_leaf1_2

     

    [mlnx_switches:children]

    mlnx_leafs

    mlnx_rack01

     

    [network:children]

    mlnx_switches

    As shown above, hosts can be in multiple groups, and groups can configure parameters for all of their members.
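    The way Ansible expands these groups (including the :children groups) can be sketched with a toy stdlib parser; this is for illustration only, using names and addresses from the Solution 1 inventory above - Ansible's real inventory code handles far more:

```python
import re

# Abbreviated copy of the Solution 1 inventory above.
INVENTORY = """
[mlnx_leafs]
sn2010_leaf1_1 ansible_host=10.7.214.111
sn2010_leaf1_2 ansible_host=10.7.214.110

[mlnx_rack01]
sn2010_leaf1_1
sn2010_leaf1_2

[mlnx_switches:children]
mlnx_leafs
mlnx_rack01
"""

def parse_inventory(text):
    """Map section name -> list of member names (hosts or child groups)."""
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r"\[(.+)\]$", line)
        if m:
            current = m.group(1)
            groups[current] = []
        elif current is not None:
            groups[current].append(line.split()[0])  # drop host variables
    return groups

def hosts_of(groups, name):
    """Recursively expand a group; '<name>:children' lists child groups."""
    child_key = name + ":children"
    if child_key in groups:
        hosts = set()
        for child in groups[child_key]:
            hosts.update(hosts_of(groups, child))
        return sorted(hosts)
    return sorted(set(groups.get(name, [])))

print(hosts_of(parse_inventory(INVENTORY), "mlnx_switches"))
```

    Both leaves end up in mlnx_switches through its child groups, which is why a single group_vars file can cover them all.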

    Ansible will try to connect to each switch with ssh admin@your_switch_ip.

    To create a file that tells all of the Mellanox switches in the "mlnx_switches" group to connect using the admin user, we will create a directory in the Ansible configuration structure called group_vars. Within this folder, we can create YAML-formatted files for each group we want to configure:

    sudo mkdir /etc/ansible/group_vars
    sudo vim /etc/ansible/group_vars/mlnx_switches

    We can put our configuration in /etc/ansible/group_vars/mlnx_switches.

    Make sure the YAML file starts with "---":

    ---
    ansible_ssh_user: admin
    ansible_ssh_pass: admin

    Save and close this file when you are finished.

    If you want to specify configuration details for every switch, regardless of group association, you can put those details in a file at /etc/ansible/group_vars/all.yml.

    Individual hosts can be configured by creating files under a directory at /etc/ansible/host_vars.
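    As a rough mental model of how these scopes combine (filenames here are assumed examples; Ansible's full precedence rules are more elaborate), later, more specific scopes override earlier ones:

```python
# Illustration (not Ansible source) of how scoped variable files combine:
# group_vars/all.yml < group_vars/<group> < host_vars/<host> -- later wins.
all_vars   = {"ansible_ssh_user": "admin"}         # group_vars/all.yml
group_vars = {"ansible_ssh_user": "admin",
              "ansible_ssh_pass": "admin"}         # group_vars/mlnx_switches
host_vars  = {"ansible_host": "10.7.214.111"}      # host_vars/sn2010_leaf1_1

effective = {**all_vars, **group_vars, **host_vars}
print(effective)
```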

    Step 3: Using Ansible Playbook for fabric configuration

    To configure the fabric, we will follow Getting Started with Ansible for Network Automation (Ansible documentation) and create a playbook and roles for our switches.

    In this deployment we will use the following Mellanox ONYX network modules:

    • onyx_protocol
    • onyx_pfc_interface
    • onyx_linkagg
    • onyx_vlan
    • onyx_interface
    • onyx_l3_interface
    • onyx_mlag_ipl
    • onyx_mlag_vip
    • onyx_magp
    • onyx_l2_interface

    Creating a New Leaf Role

    To create leaf1_1 and leaf1_2 roles and to create the base directory structure, we’re going to use a tool bundled with Ansible called ansible-galaxy. With it, we will create the following directory structure:

    sudo mkdir /etc/ansible/roles
    cd /etc/ansible/roles
    sudo ansible-galaxy init leaf1_1
    sudo ansible-galaxy init leaf1_2

    The above commands create the following directory structure:

    roles
    ├── leaf1_1
    │   ├── defaults
    │   │   └── main.yml
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── README.md
    │   ├── tasks
    │   │   └── main.yml
    │   ├── templates
    │   ├── tests
    │   │   ├── inventory
    │   │   └── test.yml
    │   └── vars
    │       └── main.yml
    └── leaf1_2
        ├── defaults
        │   └── main.yml
        ├── files
        ├── handlers
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   └── main.yml
        ├── templates
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
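    A quick way to confirm the skeleton was generated is to check for the expected subdirectories. The helper below is our own sketch, with the directory list mirroring the tree above:

```python
from pathlib import Path

# Subdirectories ansible-galaxy init creates inside a role (per the tree above).
ROLE_SUBDIRS = ["defaults", "files", "handlers", "meta", "tasks",
                "templates", "tests", "vars"]

def missing_parts(role_dir):
    """Return the expected role subdirectories absent under role_dir."""
    root = Path(role_dir)
    return [d for d in ROLE_SUBDIRS if not (root / d).is_dir()]

# Example: missing_parts("/etc/ansible/roles/leaf1_1") should be [] after init.
```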

     

    Edit the ".yml" tasks and vars files to match our use-case switch configuration with one of the following methods:

     

    Solution 1 (up to 18 servers):

    Leaf1_1
    sudo vim /etc/ansible/roles/leaf1_1/tasks/main.yml

    Insert the following content:

    ---
    # This tasks file configures the leaf1_1 switch
    - name: configure protocols
      onyx_protocol:
        spanning_tree: disabled
        ip_routing: enabled
        lacp: disabled
        mlag: enabled
        dcb_pfc: enabled
        magp: enabled

    # IPL configuration
    - name: Disable Lag members PFC
      onyx_pfc_interface:
        aggregate:
          - name: Eth1/21
          - name: Eth1/22
        state: disabled

    - name: Configure Lag interface
      onyx_linkagg:
        name: Po1
        members:
          - Eth1/21
          - Eth1/22
        mode: active

    - name: Enable Lag interface PFC
      onyx_pfc_interface:
        name: Po1
        state: enabled

    - name: Configure vlan for IPL
      onyx_vlan:
        vlan_id: "{{ ipl_vlan }}"
        name: IPL

    - name: Create vlan interface for IPL
      onyx_interface:
        name: "Vlan {{ ipl_vlan }}"

    - name: Configure vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ ipl_vlan }}"
        ipv4: 172.16.1.1/24

    - name: Configure IPL Lag interface
      onyx_mlag_ipl:
        name: Po1
        vlan_interface: "Vlan {{ ipl_vlan }}"
        peer_address: 172.16.1.2

    - name: Configure mlag-vip for IPL
      onyx_mlag_vip:
        ipaddress: 10.7.214.121/24
        group_name: rack01-mlag-ipl-group
        mac_address: aa:99:80:80:80:81

    # MGMT VLAN configuration (no LACP)
    - name: Create vlan for MGMT with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.mgmt.vlan_id }}"
        name: "{{ onyx_leaf_vlans.mgmt.name }}"

    - name: Create mgmt vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"

    - name: Configure mgmt vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        ipv4: 192.168.11.252/24

    - name: Configure mgmt vlan interface MAGP
      onyx_magp:
        magp_id: 11
        interface: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        router_ip: 192.168.11.254
        router_mac: aa:99:80:80:11:80

    # VMOTION VLAN configuration (no LACP)
    - name: Create vlan for VMOTION with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vmotion.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vmotion.name }}"

    - name: Create vmotion vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"

    - name: Configure vmotion vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        ipv4: 192.168.12.252/24

    - name: Configure vmotion vlan interface MAGP
      onyx_magp:
        magp_id: 12
        interface: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        router_ip: 192.168.12.254
        router_mac: aa:99:80:80:12:80

    # VXLAN VLAN configuration (no LACP)
    - name: Create vlan for VXLAN with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vxlan.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vxlan.name }}"

    - name: Create vxlan vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"

    - name: Configure vxlan vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        ipv4: 192.168.13.252/24

    - name: Configure vxlan vlan interface MAGP
      onyx_magp:
        magp_id: 13
        interface: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        router_ip: 192.168.13.254
        router_mac: aa:99:80:80:14:80

    # Ethernet ports configuration
    - name: configure ethernet interfaces
      onyx_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        speed: 25G

    - name: configure ports vlan
      onyx_l2_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        mode: trunk
        trunk_allowed_vlans: 1611-1613

     

    sudo vim /etc/ansible/roles/leaf1_1/vars/main.yml

    Insert the following content:

    ---
    # vars file for leaf1_1
    ipl_vlan: 301

    # VLAN IDs and names
    onyx_leaf_vlans:
      mgmt:
        name: mgmt
        vlan_id: 1611
        state: present
      vmotion:
        name: vmotion
        vlan_id: 1612
        state: present
      vxlan:
        name: vxlan
        vlan_id: 1613
        state: present
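    The tasks file above references these variables through Jinja2 expressions such as "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}". A tiny stand-in renderer (stdlib only; Ansible really uses the Jinja2 engine) shows how such a dotted reference resolves against the vars file:

```python
import re

# The leaf1_1 vars above (abbreviated), as Ansible would load them.
leaf_vars = {
    "ipl_vlan": 301,
    "onyx_leaf_vlans": {"mgmt": {"name": "mgmt", "vlan_id": 1611}},
}

def render(template, variables):
    """Tiny stand-in for Jinja2: resolve dotted {{ a.b.c }} references."""
    def repl(match):
        value = variables
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", repl, template)

print(render("Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}", leaf_vars))
print(render("Vlan {{ ipl_vlan }}", leaf_vars))
```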

     

    Leaf1_2
    sudo vim /etc/ansible/roles/leaf1_2/tasks/main.yml

    Insert the following content:

    ---
    # This tasks file configures the leaf1_2 switch
    - name: configure protocols
      onyx_protocol:
        spanning_tree: disabled
        ip_routing: enabled
        lacp: disabled
        mlag: enabled
        dcb_pfc: enabled
        magp: enabled

    # IPL configuration
    - name: Disable Lag members PFC
      onyx_pfc_interface:
        aggregate:
          - name: Eth1/21
          - name: Eth1/22
        state: disabled

    - name: Configure Lag interface
      onyx_linkagg:
        name: Po1
        members:
          - Eth1/21
          - Eth1/22
        mode: active

    - name: Enable Lag interface PFC
      onyx_pfc_interface:
        name: Po1
        state: enabled

    - name: Configure vlan for IPL
      onyx_vlan:
        vlan_id: "{{ ipl_vlan }}"
        name: IPL

    - name: Create vlan interface for IPL
      onyx_interface:
        name: "Vlan {{ ipl_vlan }}"

    - name: Configure vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ ipl_vlan }}"
        ipv4: 172.16.1.2/24

    - name: Configure IPL Lag interface
      onyx_mlag_ipl:
        name: Po1
        vlan_interface: "Vlan {{ ipl_vlan }}"
        peer_address: 172.16.1.1

    - name: Configure mlag-vip for IPL
      onyx_mlag_vip:
        ipaddress: 10.7.214.121/24
        group_name: rack01-mlag-ipl-group
        mac_address: aa:99:80:80:80:82

    # MGMT VLAN configuration (no LACP)
    - name: Create vlan for MGMT with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.mgmt.vlan_id }}"
        name: "{{ onyx_leaf_vlans.mgmt.name }}"

    - name: Create mgmt vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"

    - name: Configure mgmt vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        ipv4: 192.168.11.253/24

    - name: Configure mgmt vlan interface MAGP
      onyx_magp:
        magp_id: 11
        interface: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        router_ip: 192.168.11.254
        router_mac: aa:99:80:80:11:80

    # VMOTION VLAN configuration (no LACP)
    - name: Create vlan for VMOTION with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vmotion.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vmotion.name }}"

    - name: Create vmotion vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"

    - name: Configure vmotion vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        ipv4: 192.168.12.253/24

    - name: Configure vmotion vlan interface MAGP
      onyx_magp:
        magp_id: 12
        interface: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        router_ip: 192.168.12.254
        router_mac: aa:99:80:80:12:80

    # VXLAN VLAN configuration (no LACP)
    - name: Create vlan for VXLAN with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vxlan.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vxlan.name }}"

    - name: Create vxlan vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"

    - name: Configure vxlan vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        ipv4: 192.168.13.253/24

    - name: Configure vxlan vlan interface MAGP
      onyx_magp:
        magp_id: 13
        interface: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        router_ip: 192.168.13.254
        router_mac: aa:99:80:80:14:80

    # Ethernet ports configuration
    - name: configure ethernet interfaces
      onyx_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        speed: 25G

    - name: configure ports vlan
      onyx_l2_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        mode: trunk
        trunk_allowed_vlans: 1611-1613

     

    sudo vim /etc/ansible/roles/leaf1_2/vars/main.yml

    Insert the following content:

    ---
    # vars file for leaf1_2
    ipl_vlan: 301

    # VLAN IDs and names
    onyx_leaf_vlans:
      mgmt:
        name: mgmt
        vlan_id: 1611
        state: present
      vmotion:
        name: vmotion
        vlan_id: 1612
        state: present
      vxlan:
        name: vxlan
        vlan_id: 1613
        state: present
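    The addressing convention used across both leaves can be summarized programmatically. This is our own summary of the values above: the third octet matches the MAGP ID, .252/.253 are the per-leaf VLAN interfaces, and .254 is the shared MAGP gateway that hosts use as their default gateway:

```python
import ipaddress

def magp_plan(magp_id):
    """Per-VLAN addressing used in the leaf1_1/leaf1_2 vars above."""
    net = ipaddress.ip_network(f"192.168.{magp_id}.0/24")
    return {"leaf1_1": f"{net[252]}/24",
            "leaf1_2": f"{net[253]}/24",
            "gateway": str(net[254])}

for magp_id in (11, 12, 13):   # mgmt, vmotion, vxlan
    print(magp_id, magp_plan(magp_id))
```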

     

     

    Solution 2 (up to 48 servers):

    Leaf1_1
    sudo vim /etc/ansible/roles/leaf1_1/tasks/main.yml

    Insert the following content:

    ---
    # This tasks file configures the leaf1_1 switch
    - name: configure protocols
      onyx_protocol:
        spanning_tree: disabled
        ip_routing: enabled
        lacp: disabled
        mlag: enabled
        dcb_pfc: enabled
        magp: enabled

    # IPL configuration
    - name: Disable Lag members PFC
      onyx_pfc_interface:
        aggregate:
          - name: Eth1/55
          - name: Eth1/56
        state: disabled

    - name: Configure Lag interface
      onyx_linkagg:
        name: Po1
        members:
          - Eth1/55
          - Eth1/56
        mode: active

    - name: Enable Lag interface PFC
      onyx_pfc_interface:
        name: Po1
        state: enabled

    - name: Configure vlan for IPL
      onyx_vlan:
        vlan_id: "{{ ipl_vlan }}"
        name: IPL

    - name: Create vlan interface for IPL
      onyx_interface:
        name: "Vlan {{ ipl_vlan }}"

    - name: Configure vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ ipl_vlan }}"
        ipv4: 172.16.1.1/24

    - name: Configure IPL Lag interface
      onyx_mlag_ipl:
        name: Po1
        vlan_interface: "Vlan {{ ipl_vlan }}"
        peer_address: 172.16.1.2

    - name: Configure mlag-vip for IPL
      onyx_mlag_vip:
        ipaddress: 10.7.214.121/24
        group_name: rack01-mlag-ipl-group
        mac_address: aa:99:80:80:80:81

    # MGMT VLAN configuration (no LACP)
    - name: Create vlan for MGMT with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.mgmt.vlan_id }}"
        name: "{{ onyx_leaf_vlans.mgmt.name }}"

    - name: Create mgmt vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"

    - name: Configure mgmt vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        ipv4: 192.168.11.252/24

    - name: Configure mgmt vlan interface MAGP
      onyx_magp:
        magp_id: 11
        interface: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        router_ip: 192.168.11.254
        router_mac: aa:99:80:80:11:80

    # VMOTION VLAN configuration (no LACP)
    - name: Create vlan for VMOTION with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vmotion.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vmotion.name }}"

    - name: Create vmotion vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"

    - name: Configure vmotion vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        ipv4: 192.168.12.252/24

    - name: Configure vmotion vlan interface MAGP
      onyx_magp:
        magp_id: 12
        interface: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        router_ip: 192.168.12.254
        router_mac: aa:99:80:80:12:80

    # VXLAN VLAN configuration (no LACP)
    - name: Create vlan for VXLAN with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vxlan.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vxlan.name }}"

    - name: Create vxlan vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"

    - name: Configure vxlan vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        ipv4: 192.168.13.252/24

    - name: Configure vxlan vlan interface MAGP
      onyx_magp:
        magp_id: 13
        interface: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        router_ip: 192.168.13.254
        router_mac: aa:99:80:80:14:80

    # Ethernet ports configuration
    - name: configure ethernet interfaces
      onyx_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        speed: 25G

    - name: configure ports vlan
      onyx_l2_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        mode: trunk
        trunk_allowed_vlans: 1611-1613

     

    sudo vim /etc/ansible/roles/leaf1_1/vars/main.yml

    Insert the following content:

    ---
    # vars file for leaf1_1
    ipl_vlan: 301

    # VLAN IDs and names
    onyx_leaf_vlans:
      mgmt:
        name: mgmt
        vlan_id: 1611
        state: present
      vmotion:
        name: vmotion
        vlan_id: 1612
        state: present
      vxlan:
        name: vxlan
        vlan_id: 1613
        state: present

     

    Leaf1_2
    sudo vim /etc/ansible/roles/leaf1_2/tasks/main.yml

    Insert the following content:

    ---
    # This tasks file configures the leaf1_2 switch
    - name: configure protocols
      onyx_protocol:
        spanning_tree: disabled
        ip_routing: enabled
        lacp: disabled
        mlag: enabled
        dcb_pfc: enabled
        magp: enabled

    # IPL configuration
    - name: Disable Lag members PFC
      onyx_pfc_interface:
        aggregate:
          - name: Eth1/55
          - name: Eth1/56
        state: disabled

    - name: Configure Lag interface
      onyx_linkagg:
        name: Po1
        members:
          - Eth1/55
          - Eth1/56
        mode: active

    - name: Enable Lag interface PFC
      onyx_pfc_interface:
        name: Po1
        state: enabled

    - name: Configure vlan for IPL
      onyx_vlan:
        vlan_id: "{{ ipl_vlan }}"
        name: IPL

    - name: Create vlan interface for IPL
      onyx_interface:
        name: "Vlan {{ ipl_vlan }}"

    - name: Configure vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ ipl_vlan }}"
        ipv4: 172.16.1.2/24

    - name: Configure IPL Lag interface
      onyx_mlag_ipl:
        name: Po1
        vlan_interface: "Vlan {{ ipl_vlan }}"
        peer_address: 172.16.1.1

    - name: Configure mlag-vip for IPL
      onyx_mlag_vip:
        ipaddress: 10.7.214.121/24
        group_name: rack01-mlag-ipl-group
        mac_address: aa:99:80:80:80:82

    # MGMT VLAN configuration (no LACP)
    - name: Create vlan for MGMT with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.mgmt.vlan_id }}"
        name: "{{ onyx_leaf_vlans.mgmt.name }}"

    - name: Create mgmt vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"

    - name: Configure mgmt vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        ipv4: 192.168.11.253/24

    - name: Configure mgmt vlan interface MAGP
      onyx_magp:
        magp_id: 11
        interface: "Vlan {{ onyx_leaf_vlans.mgmt.vlan_id }}"
        router_ip: 192.168.11.254
        router_mac: aa:99:80:80:11:80

    # VMOTION VLAN configuration (no LACP)
    - name: Create vlan for VMOTION with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vmotion.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vmotion.name }}"

    - name: Create vmotion vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"

    - name: Configure vmotion vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        ipv4: 192.168.12.253/24

    - name: Configure vmotion vlan interface MAGP
      onyx_magp:
        magp_id: 12
        interface: "Vlan {{ onyx_leaf_vlans.vmotion.vlan_id }}"
        router_ip: 192.168.12.254
        router_mac: aa:99:80:80:12:80

    # VXLAN VLAN configuration (no LACP)
    - name: Create vlan for VXLAN with servers
      onyx_vlan:
        vlan_id: "{{ onyx_leaf_vlans.vxlan.vlan_id }}"
        name: "{{ onyx_leaf_vlans.vxlan.name }}"

    - name: Create vxlan vlan interface for servers
      onyx_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"

    - name: Configure vxlan vlan interface ip address
      onyx_l3_interface:
        name: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        ipv4: 192.168.13.253/24

    - name: Configure vxlan vlan interface MAGP
      onyx_magp:
        magp_id: 13
        interface: "Vlan {{ onyx_leaf_vlans.vxlan.vlan_id }}"
        router_ip: 192.168.13.254
        router_mac: aa:99:80:80:14:80

    # Ethernet ports configuration
    - name: configure ethernet interfaces
      onyx_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        speed: 25G

    - name: configure ports vlan
      onyx_l2_interface:
        aggregate:
          - name: Eth1/1
          - name: Eth1/2
          - name: Eth1/3
        mode: trunk
        trunk_allowed_vlans: 1611-1613

     

    sudo vim /etc/ansible/roles/leaf1_2/vars/main.yml

    Insert the following content:

    ---
    # vars file for leaf1_2
    ipl_vlan: 301

    # VLAN IDs and names
    onyx_leaf_vlans:
      mgmt:
        name: mgmt
        vlan_id: 1611
        state: present
      vmotion:
        name: vmotion
        vlan_id: 1612
        state: present
      vxlan:
        name: vxlan
        vlan_id: 1613
        state: present

     

    Creating a Playbook

    To perform tasks on the defined switches, we need to create a playbook.

    To do that, create a new file with a descriptive playbook name that uses the .yml file extension, and follow one of the methods below:

     

    Solution 1 (up to 18 servers):

    sudo vi /etc/ansible/rack_network_2tors_sn2010_install.yml

    Insert the following content:

    ---
    # configure one rack network with Mellanox SN2010 and ONYX
    - name: configure leaf1_1
      hosts: sn2010_leaf1_1
      gather_facts: no
      connection: network_cli
      become: yes
      become_method: enable
      vars:
        ansible_network_os: onyx
      roles:
        - leaf1_1

    - name: configure leaf1_2
      hosts: sn2010_leaf1_2
      gather_facts: no
      connection: network_cli
      become: yes
      become_method: enable
      vars:
        ansible_network_os: onyx
      roles:
        - leaf1_2

     

    Solution 2 (up to 48 servers):

    sudo vi /etc/ansible/rack_network_2tors_sn2410_install.yml

    Insert the following strings:

    ---
    # configure one rack network with Mellanox SN2410 and ONYX
    - name: configure leaf1_1
      hosts: sn2410_leaf1_1
      gather_facts: no
      connection: network_cli
      become: yes
      become_method: enable
      vars:
        ansible_network_os: onyx
      roles:
        - leaf1_1

    - name: configure leaf1_2
      hosts: sn2410_leaf1_2
      gather_facts: no
      connection: network_cli
      become: yes
      become_method: enable
      vars:
        ansible_network_os: onyx
      roles:
        - leaf1_2

     

    Step 4 — Execute Ansible Playbook for fabric configuration

    To run the playbook, invoke the ansible-playbook command on the control machine, providing the playbook path and any desired options.

    ansible-playbook playbook.yml

    For more options, run:

    ansible-playbook --help

     

    Solution 1 (up to 18 servers):

    ansible-playbook rack_network_2tors_sn2010_install.yml

     

    Solution 2 (up to 48 servers):

    ansible-playbook rack_network_2tors_sn2410_install.yml

     

    Step 5 — Check switch configuration

     

    Connect to the switches over SSH and check the running configuration:

    ssh admin@your_switch_ip

    Mellanox Onyx Switch Management

    Last login: Tue Nov 2 15:21:34 2010 from

     

    Mellanox Switch

     

    swx-vwd-11 [rack01-mlag-ipl-group: master] > ena

    swx-vwd-11 [rack01-mlag-ipl-group: master] # conf t

    swx-vwd-11 [rack01-mlag-ipl-group: master] (config) # show running-config

    ##

    ## Running database "initial"

    ## Generated at 2010/11/06 07:31:22 +0000

    ## Hostname: sn2410_leaf1_1

    ##

     

    ##

    ## Running-config temporary prefix mode setting

    ##

    no cli default prefix-modes enable

     

    ##

    ## MLAG protocol

    ##

    protocol mlag

     

    ##

    ## Interface Ethernet configuration

    ##

    interface port-channel 1

    interface ethernet 1/1 switchport mode trunk

    interface ethernet 1/2 switchport mode trunk

    interface ethernet 1/3 switchport mode trunk

    interface ethernet 1/55-1/56 channel-group 1 mode active

     

    ##

    ## LAG configuration

    ##

    lacp

     

    ##

    ## VLAN configuration

    ##

    vlan 301

    vlan 1611-1613

    interface ethernet 1/1 switchport trunk allowed-vlan none

    interface ethernet 1/2 switchport trunk allowed-vlan none

    interface ethernet 1/3 switchport trunk allowed-vlan none

    vlan 301 name "IPL"

    vlan 1611 name "mtmt"

    vlan 1612 name "vmotion"

    vlan 1613 name "vxlan"

    interface ethernet 1/1 switchport trunk allowed-vlan add 1

    interface ethernet 1/1 switchport trunk allowed-vlan add 1611-1613

    interface ethernet 1/2 switchport trunk allowed-vlan add 1

    interface ethernet 1/2 switchport trunk allowed-vlan add 1611-1613

    interface ethernet 1/3 switchport trunk allowed-vlan add 1

    interface ethernet 1/3 switchport trunk allowed-vlan add 1611-1613

     

    ##

    ## STP configuration

    ##

    no spanning-tree

     

    ##

    ## L3 configuration

    ##

    ip routing vrf default

    interface vlan 301

    interface vlan 1611

    interface vlan 1612

    interface vlan 1613

    interface vlan 301 ip address 172.16.1.1/24 primary

    interface vlan 1611 ip address 192.168.11.252/24 primary

    interface vlan 1612 ip address 192.168.12.252/24 primary

    interface vlan 1613 ip address 192.168.13.252/24 primary

     

    ##

    ## DCBX PFC configuration

    ##

    dcb priority-flow-control enable force

    interface port-channel 1 dcb priority-flow-control mode on force

     

    ##

    ## MAGP configuration

    ##

    protocol magp

    interface vlan 1611 magp 11

    interface vlan 1612 magp 12

    interface vlan 1613 magp 13

    interface vlan 1611 magp 11 ip virtual-router address 192.168.11.254

    interface vlan 1612 magp 12 ip virtual-router address 192.168.12.254

    interface vlan 1613 magp 13 ip virtual-router address 192.168.13.254

    interface vlan 1611 magp 11 ip virtual-router mac-address AA:99:80:80:11:80

    interface vlan 1612 magp 12 ip virtual-router mac-address AA:99:80:80:12:80

    interface vlan 1613 magp 13 ip virtual-router mac-address AA:99:80:80:13:80

     

    ##

    ## MLAG configurations

    ##

    mlag-vip rack01-mlag-ipl-group ip 10.7.214.121 /24 force

    no mlag shutdown

    mlag system-mac AA:99:80:80:80:81

    interface port-channel 1 ipl 1

    interface vlan 301 ipl 1 peer-address 172.16.1.2

     

    ##

    ## AAA remote server configuration

    ##

    # ldap bind-password ********

    # radius-server key ********

    # tacacs-server key ********

     

    ##

    ## Network management configuration

    ##

    # web proxy auth basic password ********

     

    ##

    ## X.509 certificates configuration

    ##

    #

    # Certificate name system-self-signed, ID 01b66d6cddb1c27277ef896a25af54d359648436

    # (public-cert config omitted since private-key config is hidden)

     

    ##

    ## Persistent prefix mode setting

    ##

    cli default prefix-modes enable

     

    Done!
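    As a final sanity check, the MAGP addressing shown in the running configuration can be cross-checked offline. The sketch below is illustrative only (not an ONYX or Ansible tool); the .252/.253 SVI addresses come from the leaf1_1/leaf1_2 role tasks, and the check verifies that each virtual-router IP sits in the same /24 as its SVIs and that each virtual MAC is a locally administered unicast address:

    ```python
    # Hedged sketch: cross-check the MAGP addressing scheme. Data mirrors
    # the running config above; the checks themselves are illustrative.
    import ipaddress

    magp_groups = {
        11: {"svis": ["192.168.11.252/24", "192.168.11.253/24"],
             "vip": "192.168.11.254", "vmac": "AA:99:80:80:11:80"},
        12: {"svis": ["192.168.12.252/24", "192.168.12.253/24"],
             "vip": "192.168.12.254", "vmac": "AA:99:80:80:12:80"},
        13: {"svis": ["192.168.13.252/24", "192.168.13.253/24"],
             "vip": "192.168.13.254", "vmac": "AA:99:80:80:13:80"},
    }

    for gid, g in magp_groups.items():
        # Both leaf SVIs must be in the same subnet
        nets = {ipaddress.ip_interface(s).network for s in g["svis"]}
        assert len(nets) == 1, f"magp {gid}: SVIs in different subnets"
        net = nets.pop()
        # The virtual-router IP must fall inside that subnet
        assert ipaddress.ip_address(g["vip"]) in net, f"magp {gid}: VIP outside subnet"
        # Virtual MAC: unicast (bit 0 clear), locally administered (bit 1 set)
        first_octet = int(g["vmac"].split(":")[0], 16)
        assert first_octet & 0x01 == 0, f"magp {gid}: multicast MAC"
        assert first_octet & 0x02 != 0, f"magp {gid}: not locally administered"

    print("MAGP addressing consistent")
    ```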