Introduction to LinkX DAC Cables


    Some things you just don’t go cheap on. Parachutes, eye surgeons, and DAC cables!

    This post discusses Direct Attach Copper (DAC) cabling for in-rack data center applications, where it connects Ethernet and InfiniBand top-of-rack switches to network adapters. DAC cables are also known as passive copper cables (PCC).

    Topics covered are:

    • What is DAC technology? When should it be used? Why has it become so popular?
    • What are the features, benefits, applications and various technical issues?
    • When selecting DAC for your data center, why choose Mellanox?

     

     

    Overview

     

    What is DAC?

    CAT-5e cabling and 1000BASE-T have dominated the data center interconnect scene for more than 15 years. However, the transition to 10G Ethernet proved to be a significant hindrance in both power consumption and cost. That's when Direct Attach Copper (DAC) cabling, a.k.a. Twinax, snuck in and grabbed significant market share. It has now become the preferred interconnect inside server racks, especially for high-speed links at 25G, 50G and 100G, in just about all applications in hyperscale, enterprise, storage, and many high performance computing (HPC) installations.

     


     

    DAC forms a direct electrical connection, hence the name Direct Attach Copper cabling. A DAC is simply pairs of wires where the 1/0 logical signal is the voltage difference between the two wires of a pair. One wire pair creates one directional lane, so two pairs create a single-channel, bi-directional interconnect; similarly, eight wire pairs create a four-channel interconnect. Wrap it all up in multiple layers of shielding foil, and solder the wires onto a tiny PCB with an EEPROM chip that contains identity data about the protocol, data rate, cable length, etc. Then put it all in an industry standard plug shell such as SFP or QSFP to create the complete cable with connector ends. While there isn't much inside DAC cables, a lot of design engineering and manufacturing technology goes into them.
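
    To give a sense of what that EEPROM holds, below is a minimal Python sketch that decodes a few identity fields from a raw 256-byte QSFP EEPROM dump (for example, one saved with ethtool -m <interface> raw on). The byte offsets follow the common SFF-8636 page 00h layout and are shown for illustration only; a production decoder would also handle SFP (SFF-8472) modules, paging and checksums.

    # Minimal sketch: decode a few identity fields from a QSFP DAC cable EEPROM dump.
    # Offsets assume the common SFF-8636 page 00h layout (illustrative, not a full decoder).
    import sys

    IDENTIFIERS = {0x0D: "QSFP+", 0x11: "QSFP28"}    # subset of SFF-8636 identifier codes

    def decode_qsfp_eeprom(eeprom: bytes) -> dict:
        """Pull identifier, vendor name, part number and copper length from a 256-byte dump."""
        return {
            "identifier": IDENTIFIERS.get(eeprom[128], f"unknown (0x{eeprom[128]:02x})"),
            "vendor_name": eeprom[148:164].decode("ascii", "replace").strip(),
            "part_number": eeprom[168:184].decode("ascii", "replace").strip(),
            "copper_length_m": eeprom[146],          # cable length in meters for passive copper
        }

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:           # path to a saved EEPROM dump
            print(decode_qsfp_eeprom(f.read(256)))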

     


     

     

    At high signal rates, the wires act like radio antennas. The longer the reach and the higher the data rate, the more EMI shielding is required, and the thicker and harder to bend the cable becomes. IEEE and IBTA set the cable standard specifications for Ethernet and InfiniBand applications. The standard for 10Gb/s signaling supports reaches of up to 7 meters; the maximum reach for 25Gb/s DACs is usually 3 meters - enough to span up and down server racks.
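
    As a rough rule of thumb, those reach limits can be captured in a few lines of Python. The sketch below simply encodes the 7-meter (10G) and 3-meter (25G) figures quoted above and punts to AOCs or optics for anything longer; it is illustrative, not a substitute for the IEEE/IBTA specs.

    # Rule-of-thumb interconnect selector based on the reach limits quoted above
    # (7 m at 10Gb/s signaling, ~3 m at 25Gb/s). Illustrative only.

    MAX_DAC_REACH_M = {10: 7.0, 25: 3.0}

    def pick_interconnect(lane_rate_gbps: int, reach_m: float) -> str:
        limit = MAX_DAC_REACH_M.get(lane_rate_gbps)
        if limit is not None and reach_m <= limit:
            return "DAC"                              # cheapest option, near-zero power
        return "AOC or optical transceiver"           # beyond the practical copper reach

    print(pick_interconnect(25, 2.0))    # -> DAC (typical in-rack link)
    print(pick_interconnect(25, 10.0))   # -> AOC or optical transceiver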

     

    DAC Advantages

     

    Low Price

    The popularity of DAC can be summed up in two words: low price. Copper cabling is the least expensive way to interconnect high speed systems. It's hard to beat the cost of a copper wire, a solder ball and a tiny PCB, all built on automated machines. More complex technologies such as active optical cables (AOCs) have longer reach, but cost much more than DACs. The GaAs VCSEL lasers, SiGe control ICs, InP lasers or silicon photonics used in AOCs all require sub-micron alignment tolerances, manual labor and a vast assortment of parts to assemble.

     

    Zero Power

    Besides low price, the other big reason for DAC's enduring popularity is that it consumes almost zero power. Several studies show that 1 Watt saved at the component level (e.g. a chip or cable) translates to 3-to-5 Watts saved at the facility level. The wattage multiplies once you factor in the power distribution losses from 100 KV street lines down to 3 Volts, the cooling fans in every one of the 54-72 servers in a single rack, and all the intermediate fans on the way to the rooftop A/C - all just to power and cool that 1 extra Watt. For example, a 100Gb/s AOC consumes 4.6 Watts, while a DAC consumes essentially zero.

     

    Now multiply this savings by 100,000 cables - a few dollars saved on each cable in capital expenditure (CapEx), plus the power-consumption operating expenditure (OpEx) - and the costs add up fast. Large data centers spend upwards of $4 million per month on electric bills!
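
    To make that arithmetic concrete, here is a back-of-the-envelope sketch using the figures above (4.6 Watts per 100G AOC versus essentially zero for DAC, and a 3-to-5x facility multiplier). The electricity price is an assumption for illustration only.

    # Back-of-the-envelope OpEx sketch using the figures from the text.
    # The electricity price is an assumed value, purely for illustration.

    AOC_POWER_W = 4.6           # per 100Gb/s AOC (figure from the text)
    DAC_POWER_W = 0.0           # passive copper draws essentially nothing
    FACILITY_MULTIPLIER = 4     # 1 W at the component ~ 3-5 W at the facility level
    PRICE_PER_KWH = 0.10        # assumed electricity price, $/kWh

    def annual_power_cost(cables: int, watts_per_cable: float) -> float:
        facility_watts = cables * watts_per_cable * FACILITY_MULTIPLIER
        kwh_per_year = facility_watts / 1000 * 24 * 365
        return kwh_per_year * PRICE_PER_KWH

    cables = 100_000
    savings = annual_power_cost(cables, AOC_POWER_W) - annual_power_cost(cables, DAC_POWER_W)
    print(f"~${savings:,.0f} per year saved on power alone for {cables:,} links")
    # -> roughly $1.6M per year with these assumptions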

     

    These low-cost, low-power and high-performance capabilities have made DAC-in-a-rack very popular in hyperscale, enterprise and many HPC systems, where DACs interconnect servers and storage to top-of-rack switches via network adapter cards. Because the number of cables needed in a single rack can be 32-56 or more, even small performance or cost differences become very important. This is especially true when large data centers deploy tens or hundreds of thousands of cable links.

     

    FEC Considerations

    We've heard many data center operators say, "We'll just use FEC to clean it up." In the server rack, the use of Forward Error Correction (FEC) circuits is not recommended at reaches under 2 meters per the latest IEEE 25G specification - and 2 meters is the most common DAC reach for linking high-value servers! FEC adds about 120ns of delay in each direction. For server uplinks, which carry most of the traffic, this delay can really slow things down. FEC can detect and correct only so many errors before it becomes overloaded and forces a packet retransmit. Server uplinks are the most important links to keep error free: the servers they connect account for about 65 percent of total hardware costs and are where all the data is processed. So keeping these links efficient is critical to maintaining high throughput.

     

    InfiniBand systems are much more stringent about signal quality than Ethernet systems, since InfiniBand is all about minimizing latency. So InfiniBand systems avoid the use of FEC, which can cost 120ns in each direction to clean up data errors. In big data centers, latency adds up across all the interconnects the data has to pass through, so minimizing it is of key importance to operators.
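
    The latency math is simple enough to sketch. Using the ~120ns-per-direction figure above, the extra round-trip delay grows linearly with the number of FEC-encoded hops on the path; the hop counts below are just examples.

    # Cumulative FEC latency sketch using the ~120 ns per-direction figure above.

    FEC_DELAY_NS = 120   # approximate added delay per direction, per FEC-encoded link

    def fec_round_trip_penalty_ns(hops: int) -> int:
        """Extra round-trip latency if every hop on the path runs FEC."""
        return hops * FEC_DELAY_NS * 2     # both directions of each hop

    for hops in (1, 3, 5):
        print(f"{hops} hop(s): +{fec_round_trip_penalty_ns(hops)} ns round trip")
    # e.g. a 3-hop path picks up roughly 720 ns of extra round-trip latency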

     

    DAC Cables - Why Mellanox?

     

    Mellanox Offers 18 LinkX DAC Options!

    Why so many options? The answer: to optimize costs at every connection point. DAC cables are used with 32-to-56 port top-of-rack switches supporting up to 128 links (4x25G times 32 ports), and in many different configurations linking both new and older equipment.

     

    Mellanox offers six different cabling schemes for interconnecting switches and network adapters to subsystems using SFP and QSFP DAC cables and port adapters.

    1. SFP-SFP cables
    2. QSFP-QSFP cables
    3. QSFP-4SFP breakout cables (a.k.a. hybrid/splitter cables)
    4. QSFP-2QSFP breakout cables
    5. QSA: QSFP-SFP mechanical port adapter used with SFP cables

     


     

     

    To continue the math, now multiply by two for 10G and 25G line rates, which totals 12 different Ethernet DAC options. Add to that four different InfiniBand QSFP-QSFP DAC cables at EDR (4x25G), FDR (4x14G), FDR10 and QDR (4x10G) rates. There is even one more if you include 14G-based Ethernet, which uses 14G FDR InfiniBand signaling to transport the Ethernet protocol; called "VPI", this is unique to Mellanox and enables 4x14G, or 56G, Ethernet. At SuperComputing in November 2016, Mellanox announced the 200Gb/s HDR Quantum switches, ConnectX-6 network adapters, HDR200 QSFP56 DAC cables and an HDR200-to-dual-HDR100 1:2 splitter cable. This brings the total to 18 different ways to create the most cost- and performance-optimized network links available from Mellanox for InfiniBand and Ethernet protocols.
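
    As a quick illustration of the breakout math, the sketch below counts the QSFP-4SFP splitter cables and switch ports needed to attach a rack of 25G servers to a 32-port 100G top-of-rack switch. The 48-server rack is an assumed example; the 32-port, 4-lanes-per-port figures come from the text.

    # Breakout-cable counting sketch: 25G servers on a 32-port 100G ToR switch.
    # The rack size is an assumed example; port and lane counts match the text above.
    import math

    SWITCH_QSFP_PORTS = 32      # 32-port 100G top-of-rack switch
    LANES_PER_QSFP = 4          # each QSFP28 port breaks out into 4x SFP28 at 25G

    def breakout_plan(servers_25g: int) -> dict:
        cables = math.ceil(servers_25g / LANES_PER_QSFP)    # QSFP-4SFP splitter cables
        if cables > SWITCH_QSFP_PORTS:
            raise ValueError("more servers than one ToR switch can break out")
        return {
            "qsfp_4sfp_cables": cables,
            "switch_ports_used": cables,
            "switch_ports_free": SWITCH_QSFP_PORTS - cables,   # left for uplinks, storage, etc.
        }

    print(breakout_plan(48))    # 48 servers -> 12 splitter cables, 20 QSFP ports free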

     

    The figure below shows the different cabling options for linking rack systems to top-of-rack switches. DAC cables can also link various subsystems directly to other subsystems. Shown at 25G line rates, the switches, network adapters and DAC cables are all available at 10G line rates as well.

     

     

    [Figure: DAC cabling options for linking rack systems to top-of-rack switches]

    DAC Disasters

    Many "inexpensive" DAC cables use inferior manufacturing techniques, sample-only testing and less electrical shielding in the cable to save costs. The result in many installations is the dreaded "DAC Disaster": going "cheap" becomes really expensive once you factor in system downtime and chasing down intermittent signal losses and drops from low-quality cables. Link drops have occurred simply from moving a cable a few inches to see the port number on the switch - the signaling is near its margin limit, the shielding at the bend in the cable opens up, signal leaks out, and the link drops. Just try to diagnose that problem! Some installations have had to completely replace their DAC cabling as a result of going cheap. Mellanox DAC cables exceed the IEEE 802.3bj standard, which means there is a lot of signaling margin left to absorb signal losses and random or burst-mode noise.

    “Some things you just don’t go cheap on.

    Parachutes, eye surgeons, and DAC cables!”

     

    BER Designed for HPC

    It's been said that nearly anyone can build a 10G DAC cable. But not everyone can build one that works flawlessly at blazing fast speeds of 25Gb/s, operates for years under the high temperatures and conditions found in modern data centers, and does not induce bit errors into the data stream. All Mellanox DAC cables (even our Ethernet DACs) are designed to HPC InfiniBand supercomputer BER standards, which require a bit error ratio (BER) of 1E-15, or one bit error in 10^15 bits. The IEEE Ethernet industry standard is a BER of 1E-12, or one bit error in 10^12 bits transmitted. At 100Gb/s, that works out to roughly one bit error every 10 seconds (about 360 bit errors per hour); at 1E-15, the level all Mellanox DAC cables are qualified to, it is roughly one bit error every 2.8 hours. Which cable would you choose to send your electronic paycheck over?
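
    The error-interval comparison is easy to reproduce. The sketch below converts a line rate and BER into an expected time between bit errors, assuming random errors and a fully loaded link; the 100Gb/s rate is used as the example.

    # Expected bit-error interval for a given line rate and BER.
    # Assumes random, uniformly distributed errors and a fully loaded link.

    def seconds_between_errors(line_rate_bps: float, ber: float) -> float:
        return 1.0 / (line_rate_bps * ber)

    LINE_RATE = 100e9    # 100Gb/s link
    for ber in (1e-12, 1e-15):
        s = seconds_between_errors(LINE_RATE, ber)
        print(f"BER {ber:.0e}: one error every {s:,.0f} s (~{3600 / s:.2f} per hour)")
    # BER 1e-12 -> one error every 10 s (about 360 per hour)
    # BER 1e-15 -> one error every 10,000 s (a few hours apart)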

    Too many bit errors from poor quality DAC cables mean data packets get dropped and have to be retransmitted, making 100G more like 85G. And most operators won't even know it!

     

    Backwards Compatible

    Mellanox hardware is also line-rate backwards compatible. For example, the Ethernet 32-port SN2700 100G switch or the 25G/100G ConnectX-4 network adapter card can run at 25G as well as 10G line rates. The same holds for InfiniBand equipment with 10G, 14G and 25G line rates. This enables connecting slower or older equipment to newer, faster systems without issues.

     

    Manufacturing Process

    Most DAC manufacturers build only DAC cables. Mellanox designs and manufactures all of its own switch systems, network adapters, DAC and AOC cables, and optical transceivers. This vertically integrated, "end-to-end" approach ensures everything works together seamlessly. At Mellanox, "Plug & Play" means plug in and walk away; not the usual Plug & Play-All-Day needed to get things to work.

    Because Mellanox is also a switch and network adapter systems company, we test every DAC cable in real switch and adapter systems, 32-48 cables at a time, for extended periods at the elevated temperatures found in actual deployments. This is unlike most competitors, who typically test one cable at a time (or only a sample) on a technician's bench for a few minutes using manual labor and expensive test equipment. All Mellanox cables are qualified to a BER of 1E-15 - thousands of times better than competing Ethernet cable suppliers. So there is a lot of spare signal margin in the cables, rather than "just barely qualifying and operating at the edge" as many competitor cables do.

     

    Extending the Reach Past 3 Meters

    Mellanox DAC cables can typically reach significantly further than competitors' DAC cables, which often just barely achieve the IEEE standard of 3 meters at a BER of 1E-12. With FEC on the host, Mellanox DAC cables can reach as far as 5 meters (16 feet), which is enough to span 3-4 server racks. Competitor cables with 3-meter limitations have to resort to more expensive AOCs or optical transceivers with fiber cables beyond 3 meters.

    Note: The use of FEC, what types of FEC, cable thickness, and lengths are currently hotly contested subjects in the 25G industry and the IEEE with no firm decisions yet – so stay tuned!

     

    Summary and Conclusions

    Some buyers attempt to shave a few dollars by building "Frankenstein" systems from multiple vendors' equipment, but they often end up paying big time in qualification, maintenance and reliability. In e-commerce applications, even one minute of downtime can be very costly. The combination of high-quality cable materials, Mellanox-designed and manufactured cables, real-system testing and a minimum standard of 1E-15 BER makes Mellanox LinkX cables a preferred choice for high-speed, critical systems at blazing 25G line rates - which today includes just about all applications!

    DAC cables are one tool in the networking toolkit, and it's important to understand their advantages and limitations. In my next few posts, I'll talk about AOCs and optical transceivers for connecting servers and switches in breakout (splitter cable) and straight interconnect schemes.

     

    Resources

    Find out more about Mellanox LinkX™ DAC cables, AOCs, optical transceivers, and Ethernet & InfiniBand networking.

    See our new LinkX Interconnect website! LinkX™ - Mellanox Technologies

    See my Blogs at: http://www.mellanox.com/blog/?s=brad+Smith

     

    Contact Brad Smith, Director of the LinkX Team, with additional questions.