Networking Your Nutanix Enterprise Cloud to Scale

Version 5

    This post discusses how a leaf-spine (fat-tree) architecture built with Mellanox networking provides efficient, scalable infrastructure for Nutanix hyper-converged solutions.


    Overview

    Your enterprise cloud on the hyper-converged platform is built to scale as you grow your business. As you add more customers and new services, your enterprise cloud needs to meet your business needs for both today and the future. Can your current network infrastructure scale efficiently to accommodate future business needs? Keep in mind that it is always more expensive to make changes when you have a fully operational network already in place.

     

    Hyper-Converged Data Center Design

    There is a good chance that your current network is built on a three-tiered architecture: it is fairly simple to expand such a network when applications run on dedicated physical servers.

    [Figure 1: Three-tiered network architecture with access, aggregation, and core layers]

    The three-tiered architecture consists of the access layer (the access or “leaf” switches where servers connect), the aggregation layer where the access switches connect upstream, and the core layer that ties everything together. When more servers are connected at the access layer, you add access switches to expand the physical switch ports at L2 as needed. This process is straightforward: calculate the switch ports required and check the oversubscription ratio toward the upstream network to make sure it provides sufficient bandwidth.
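
    As a rough illustration of that check, the short sketch below computes an access switch's oversubscription ratio from its downlink and uplink port counts; the port counts and speeds used here are hypothetical, not a recommendation.

```python
def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of server-facing bandwidth to upstream bandwidth on an access switch."""
    downlink_bw = downlink_ports * downlink_gbps
    uplink_bw = uplink_ports * uplink_gbps
    return downlink_bw / uplink_bw

# Hypothetical example: 48 x 10Gb/s server ports uplinked through 4 x 40Gb/s ports
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"Oversubscription ratio: {ratio:.1f}:1")  # -> 3.0:1
```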

     

    Much of the data in this framework is processed within, and remains in, its dedicated domain (L2 segment). When a service in one physical domain needs to reach another domain, the traffic typically flows north-south. For example, a request from the web server goes upstream to the aggregation and core layers and then travels down to the database server in another physical L2 segment; the response traverses the three layers in the same fashion. But this topology cannot deliver the scalability and performance that hyper-converged infrastructure demands in modern data centers.

     

    With hyper-converged infrastructure, a cluster of x86 servers is “glued” together by a software control plane to form unified compute and storage pools. All applications are virtualized to run in a virtual machine (VM) or a container and are distributed (and migrated) across the cluster through policy-based automation. Application I/O is managed at the VM level, while the physical data is distributed across the cluster in a single storage pool.

     

    In these cases, the three-tiered architecture approach often reaches its limit and breaks down.

    [Figure 2: Limitations of the three-tiered architecture for hyper-converged, east-west traffic]

    For traffic switched within an L2 segment, the commonly used Spanning Tree Protocol (STP) takes its toll: disabling redundant links to break loops leaves much of the link capacity unused. Adding link capacity to accommodate east-west traffic is therefore expensive and inefficient.

     

    For a large cluster that spans multiple racks and L2 segments, traffic must travel through the aggregation and core layers, which increases latency. Heavy upstream traffic drives the oversubscription ratio from the access layer to the aggregation and core layers higher, which inevitably causes congestion and degraded, unpredictable performance.

     

    For storage I/O, degraded and unpredictable performance is the worst possible scenario.

    Because of these architectural shortcomings, modern data centers are adopting the leaf-spine architecture instead. Constructed of just two layers, leaf (access) and spine, the leaf-spine architecture has a simple topology: every leaf switch is directly connected to every spine switch.

     

    In this topology, any pair of endpoints communicates across at most one spine hop, which ensures consistent and predictable latency. By using OSPF or BGP with ECMP (equal-cost multi-path routing), your network utilizes all available links and achieves maximal link capacity utilization, as the sketch below illustrates. Furthermore, adding more links between each leaf and the spines provides additional bandwidth between leaf switches.
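
    To make the ECMP behavior concrete, here is a minimal sketch of hash-based path selection across the spine uplinks. The 5-tuple fields and spine names are illustrative assumptions; real switches compute the hash in hardware.

```python
import hashlib

def ecmp_next_hop(flow_tuple, uplinks):
    """Pick an uplink for a flow by hashing its 5-tuple.

    Packets of the same flow always take the same path, while different
    flows spread across all equal-cost links instead of the single path
    a spanning tree would leave active.
    """
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return uplinks[int.from_bytes(digest[:4], "big") % len(uplinks)]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
flow = ("10.0.1.15", "10.0.2.20", "tcp", 34512, 8080)  # src, dst, proto, sport, dport
print(ecmp_next_hop(flow, spines))
```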

     

    In addition, the use of overlay technologies such as VXLAN can increase efficiency. As a result, the leaf-spine architecture also delivers optimal and predictable network performance for hyper-converged infrastructure.

     

    The leaf-spine architecture provides maximal link capacity utilization, optimal and predictable performance, and the best scalability possible to accommodate dynamic, agile data movement between nodes on hyper-converged infrastructure. Leaf-spine networks constructed with Mellanox Spectrum™ switches provide line-rate resilient network performance and enable a high-density, scalable rack design.
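
    As a back-of-the-envelope sizing sketch under simple assumptions (one uplink from every leaf to every spine, no oversubscription constraint applied), the snippet below estimates how large a two-tier leaf-spine fabric can grow; the port counts are hypothetical inputs, not Spectrum specifications.

```python
def leaf_spine_capacity(leaf_downlinks, leaf_uplinks, spine_ports):
    """Estimate fabric size for a two-tier leaf-spine design.

    Each leaf uses one uplink per spine, so the spine count equals
    leaf_uplinks and the leaf count is capped by the spine port count.
    """
    max_leaves = spine_ports
    return {
        "spines": leaf_uplinks,
        "leaves": max_leaves,
        "servers": max_leaves * leaf_downlinks,
    }

# Hypothetical: 48 server ports and 6 uplinks per leaf, 32-port spine switches
print(leaf_spine_capacity(leaf_downlinks=48, leaf_uplinks=6, spine_ports=32))
# -> {'spines': 6, 'leaves': 32, 'servers': 1536}
```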

     

    Using Mellanox Spectrum Switches in the Data Center

     

    Mellanox Spectrum switches deliver non-blocking, line-rate performance at link speeds from 10Gb/s to 100Gb/s at any frame size. In particular, the 16-port SN2100 Spectrum switch is the most versatile ToR switch, packaged in a half-width, 1RU form factor.

     

    The 16 ports of the SN2100 can run at 10, 25, 40, 50, or 100Gb/s. When more switch ports are needed, you can split a single physical port into four 10 or 25Gb/s ports using breakout cables. The SN2100 can therefore be configured as a 16-port 10/25Gb/s switch, or as a 48-port 10/25Gb/s switch with four 40/100Gb/s ports reserved for uplinks.
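
    The port arithmetic behind those configurations can be sketched as follows; splitting 12 ports with breakout cables and keeping 4 ports for uplinks is one possible layout implied by the numbers above, not the only supported one.

```python
def sn2100_port_plan(breakout_ports, uplink_ports, total_ports=16, lanes_per_breakout=4):
    """Count access and uplink ports on a 16-port switch using 4-way breakout cables."""
    assert breakout_ports + uplink_ports <= total_ports
    return {
        "access_10_25G": breakout_ports * lanes_per_breakout,
        "uplinks_40_100G": uplink_ports,
    }

# 12 physical ports split four ways, 4 ports kept at 40/100Gb/s for uplinks
print(sn2100_port_plan(breakout_ports=12, uplink_ports=4))
# -> {'access_10_25G': 48, 'uplinks_40_100G': 4}
```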

     

    The half-width form factor of the SN2100 allows you to install two switches side-by-side in 1RU of rack space and run MLAG (multi-chassis link aggregation) between them to create a highly available L2 fabric. Configuring link aggregation between the physical switch ports and the hyper-converged appliances keeps all physical network connections active and load-balances VM traffic across them, a key advantage particularly in all-flash clusters.
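
    To illustrate the load-balancing idea, the toy sketch below hashes each VM's source MAC address onto one of the two bonded uplinks so that both ToR switches carry traffic. The MAC-based policy and switch names are assumptions for illustration; the actual balancing mode depends on the hypervisor's bond configuration.

```python
def pick_uplink(vm_mac, uplinks):
    """Map a VM's traffic to one bonded uplink by hashing its source MAC.

    With an MLAG pair on the switch side, both uplinks stay active, so
    VM traffic spreads across the two ToR switches instead of idling one
    link the way active-standby bonding would.
    """
    mac_value = int(vm_mac.replace(":", ""), 16)
    return uplinks[mac_value % len(uplinks)]

uplinks = ["sn2100-a", "sn2100-b"]  # hypothetical MLAG switch pair
for mac in ("50:6b:8d:00:00:01", "50:6b:8d:00:00:02", "50:6b:8d:00:00:03"):
    print(mac, "->", pick_uplink(mac, uplinks))
```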

     

    It is also worth pointing out that 100Gb/s uplinks available on Spectrum switches offer more link capacity between leaf and spine switches, which is very useful with all-flash-based platforms.

     

     

    Nutanix Network Designs

    Nutanix's recently published solution paper describes the recommended hyper-converged network designs for data centers. As the leading enterprise cloud solution provider, Nutanix sees more and more customers migrate their data centers to Nutanix hyper-converged platforms, from SMBs with half-rack deployments to large enterprises whose clouds span multiple racks. Customers are consolidating and assigning more intensive workloads to their clouds and starting to adopt faster flash storage solutions.

     

    If you are architecting your network for a Nutanix Enterprise cloud, the Nutanix paper presents solutions that can help you achieve scalable designs and density with Mellanox networking.

     

    As shown below, Spectrum SN2100 switches fit in nicely with the right port count for a half-rack deployment of 4-12 nodes, typical of SMBs. As the data center grows and more server nodes are added, the same SN2100 switches can also support a full-rack deployment of up to 24 nodes. For large enterprise cloud environments spanning multiple racks, Mellanox Spectrum switches scale efficiently in a leaf-spine topology.
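
    As a quick sanity check of those deployment sizes, the sketch below counts the ports each switch of an MLAG pair needs, assuming one data port per node per switch plus two uplink ports; the per-node and uplink port counts are assumptions for illustration.

```python
def ports_per_switch(nodes, node_ports=1, uplink_ports=2):
    """Server-facing plus uplink ports consumed on each switch of an MLAG pair."""
    return nodes * node_ports + uplink_ports

for nodes in (4, 12, 24):
    used = ports_per_switch(nodes)
    config = "fits the native 16-port config" if used <= 16 else "needs the 48-port breakout config"
    print(f"{nodes} nodes: {used} ports per switch -> {config}")
```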

     

     

    Refer to "Rack Solution Using SN2100 MLAG Switch Pair and ConnectX-4 Lx" for more technical details, and see www.mellanox.com/Ethernet.