HowTo Configure LNV VXLAN on Cumulus Linux

Version 7

Lightweight Network Virtualization (LNV) enables the deployment of VXLANs on a bare metal switch without the need for a central controller.

LNV runs the VXLAN service and registration daemons on Cumulus Linux without any additional external controller or software suite.

A data path between bridge entities is established on top of a layer 3 fabric by coupling a service node with traditional MAC address learning.

     


    Terms and Definitions

vxrd
The VXLAN registration daemon (vxrd) runs on each switch that maps VLANs to VXLANs.
The vxrd daemon must be configured to register with a service node; this registration turns the switch into a VTEP.

VTEP
A virtual tunnel endpoint (VTEP) is the encapsulation and decapsulation point for VXLAN traffic.

Leaf
An edge switch that connects to hosts to the south and to spines to the north.

Spine
The aggregation switch for multiple leaf switches.

Service node
The switch running the service node daemon (vxsnd). May also be referred to as the VXLAN service node.

Anycast IP
An IP address that can be advertised from multiple locations, allowing several devices to share the same IP and effectively load-balance traffic.
Anycast is used in two cases in LNV:
1. Sharing a VTEP IP address between a pair of MLAG switches.
2. Load-balancing traffic across service nodes (all service nodes share one IP address).

     

    Setup

    The following figure illustrates the topology used in this guide.
    This topology includes three Spectrum switches (Spine, Leaf-1 and Leaf-2) running Cumulus Linux.

Each leaf switch has a VLAN (VLAN 10) configured toward the hosts in the south and layer 3 interfaces toward the spine in the north.
Creating a logical VXLAN interface on both leaf switches enables the switches to operate as VTEPs.
The IP address associated with each VTEP is normally its loopback address; in the figure below, the loopback address is 1.1.1.1 for Leaf-1 and 3.3.3.3 for Leaf-2.

[Figure 1: Topology - Spine (2.2.2.2) as the LNV service node; Leaf-1 (1.1.1.1) and Leaf-2 (3.3.3.3) as VTEPs with hosts on VLAN 10]

To build a VXLAN tunnel and forward BUM (broadcast, unknown unicast, and multicast) packets to the members of a VXLAN, the service node (the Spine in Figure 1) needs to learn the addresses of all the VTEPs for every VXLAN it serves.
The service node daemon does this through a registration daemon (vxrd) running on each leaf switch that hosts a VTEP participating in LNV. The registration process informs the service node on the spine of all the VTEPs.

With LNV, MAC learning works the same way as on a traditional L2 switch. When a host sends a packet, its source MAC is learned at ingress and the MAC table is built. If a destination MAC has not yet been learned, the packet is replicated to all VTEPs in the VXLAN (similar to the flooding behavior for an unknown MAC).

The replication of the packet (sent to all VTEPs) can happen in one of the following places:

• Spine (service node replication)
• Leaf (head end replication)

Only one type of replication can be used at a time (to avoid duplicate packets).
Head end replication is recommended and is enabled by default.
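The replication mode is a property of the registration daemon on each leaf. As a rough sketch only (the file layout and the head_rep option name are assumptions here, so verify against the LNV documentation for your Cumulus Linux release), the behavior could be toggled in /etc/vxrd.conf:

# /etc/vxrd.conf (sketch; section and option names are assumptions)
[vxrd]
head_rep = true    # false would fall back to service node replication on the spine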

The following configuration maps VLAN 10 to VXLAN 2000.
The VTEPs are on Leaf-1 and Leaf-2 (each leaf's loopback address is used as the VTEP IP).
The spine is the service node for VXLAN 2000.

    Configuration
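Note: the net add commands below only stage changes. On each switch, review the staged configuration with net pending and apply it with net commit:

net pending
net commit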

    Spine

    #configure Interfaces

    net add loopback lo ip address 2.2.2.2/32

net add interface swp2 ip address 12.12.12.2/24

net add interface swp3 ip address 23.23.23.2/24

     

    #configure routing

    net add ospf router-id 2.2.2.2

    net add router ospf

    net add interface lo ip ospf area 0

    net add interface swp2 ip ospf area 0

    net add interface swp3 ip ospf area 0

    net add interface swp2 ip ospf network broadcast

    net add interface swp3 ip ospf network broadcast

    net add interface lo ip ospf network broadcast

     

    #configure Service Node functionality, along with Anycast IP

    net add lnv service-node source 2.2.2.2

    net add lnv service-node anycast-ip 2.2.2.2

The example in this document shows only one spine.
With multiple spines, the anycast IP is used to load-balance service-node replication.
Additional configuration is needed when there are, for example, three spines (Spine – 2 has service IP 5.5.5.5 and Spine – 3 has service IP 6.6.6.6):

    net add lnv service-node peers 2.2.2.2 5.5.5.5 6.6.6.6
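For reference, the service node daemon (vxsnd) keeps its settings in /etc/vxsnd.conf. The sketch below only illustrates the kind of parameters involved; the option names (svcnode_ip, anycast_ip, svcnode_peers) are assumptions, so rely on the NCLU commands above and check the file generated on your release:

# /etc/vxsnd.conf (sketch; option names are assumptions)
svcnode_ip = 2.2.2.2              # local service node source address
anycast_ip = 2.2.2.2              # anycast address the leafs register against
svcnode_peers = 5.5.5.5 6.6.6.6   # other service nodes, when more spines exist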

     

    Leaf-1

    #configure Interfaces

    net add loopback lo ip address 1.1.1.1/32

net add interface swp2 ip address 12.12.12.1/24

     

    #configure routing

    net add ospf router-id 1.1.1.1

    net add router ospf

    net add interface lo ip ospf area 0

    net add interface swp2 ip ospf area 0

    net add interface swp2 ip ospf network broadcast

    net add interface lo ip ospf network broadcast

     

    #configure VTEP and Service Node IP

    net add loopback lo vxrd-src-ip 1.1.1.1

    net add loopback lo vxrd-svcnode-ip 2.2.2.2
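The two commands above configure the registration daemon (vxrd) on the leaf. As a minimal sketch, assuming the option names src_ip and svcnode_ip (verify them in /etc/vxrd.conf on your release), the resulting settings look roughly like this:

# /etc/vxrd.conf (sketch; option names are assumptions)
src_ip = 1.1.1.1        # local VTEP address (loopback of Leaf-1)
svcnode_ip = 2.2.2.2    # service node / anycast address on the spine

If the file is edited by hand instead of through NCLU, restart the daemon with sudo systemctl restart vxrd.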

     

    #host VLAN configs

    net add vlan 10

    net add interface swp16 bridge-access 10

    net add vlan-interface vlan10 address 10.10.10.1/24

     

    #VLAN to VxLAN Mapping

    net add vni vni2000 bridge-access 10

    net add vni vni2000 vxlan-local-tunnelip 1.1.1.1
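For reference, the VNI mapping commands above create a VXLAN interface in /etc/network/interfaces. A minimal sketch of the resulting stanza on Leaf-1 (the exact rendering by NCLU may differ, and the vxlan-id value is assumed from the vni2000 name):

auto vni2000
iface vni2000
    vxlan-id 2000                 # VXLAN network identifier
    vxlan-local-tunnelip 1.1.1.1  # local VTEP (loopback of Leaf-1)
    bridge-access 10              # maps the VNI to VLAN 10 on the bridge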

     

    Leaf-2

    #configure Interfaces

    net add loopback lo ip address 3.3.3.3/32

net add interface swp3 ip address 23.23.23.3/24

     

    #configure routing

    net add interface lo ip ospf area 0

    net add interface swp3 ip ospf area 0

    net add interface swp3 ip ospf network broadcast

    net add interface lo ip ospf network broadcast

    net add ospf router-id 3.3.3.3

    net add router ospf

     

    #configure VTEP and Service Node IP

    net add loopback lo vxrd-src-ip 3.3.3.3

    net add loopback lo vxrd-svcnode-ip 2.2.2.2

     

    #host VLAN configs

    net add vlan 10

    net add interface swp16 bridge-access 10

    net add vlan-interface vlan10 address 10.10.10.3/24

     

    #VLAN to VxLAN Mapping

    net add vni vni2000 bridge-access 10

    net add vni vni2000 vxlan-local-tunnelip 3.3.3.3

     

    Verification

Verification can be done with show commands. Notice that the roles differ between the leaf and spine switches (the leaf is a VTEP and the spine is a service node). Also notice that each leaf has learned the remote VTEPs, along with the local membership of the mapped VLANs.
Once the VTEPs are registered, a ping test can be run from host-1 on Leaf-1 to host-2 on Leaf-2:

    Spine
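On the spine, one way to confirm that both VTEPs have registered is the service node control utility. The vxsndctl tool and its fdb argument are assumptions here; if they are not present on your release, check the vxsnd service logs instead:

sudo vxsndctl fdb    # should list VNI 2000 with VTEPs 1.1.1.1 and 3.3.3.3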

    Leaf-1
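On Leaf-1, the registration daemon can be queried directly. The vxrdctl utility comes with the LNV tooling; treat the exact subcommand names as assumptions and check your release:

sudo vxrdctl vxlans    # local VNIs and the VLANs mapped to them
sudo vxrdctl peers     # remote VTEPs learned through the service node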

    Leaf-2
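The same checks on Leaf-2 should report 1.1.1.1 as the remote VTEP for VNI 2000.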

     

    End to End Ping

    Host – 1 (10.10.10.11) to Host – 2 (10.10.10.33)
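From host-1, a ping to host-2's address verifies the end-to-end path through the VXLAN tunnel:

ping -c 3 10.10.10.33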

     

    Host – 2 (10.10.10.33) to Host – 1 (10.10.10.11)
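The reverse test from host-2 (ping -c 3 10.10.10.11) should succeed in the same way.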

     

    Learned MACs

On Leaf – 1: Host – 2's MAC is learned on VNI 2000 with remote VTEP 3.3.3.3
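This can be checked with the standard Linux forwarding database on Leaf-1, filtered on the VXLAN interface created above:

bridge fdb show | grep vni2000    # entries for remote MACs should include 'dst 3.3.3.3'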

     

On Leaf – 2: Host – 1's MAC is learned on VNI 2000 with remote VTEP 1.1.1.1
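The equivalent bridge fdb check on Leaf-2 should show host-1's MAC with dst 1.1.1.1.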