As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers. Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true. But what if your flash could do even more?
One of the main features of flash storage is its ability to drive massive amounts of data to the network with very low latency. Data can be written to and retrieved from flash storage in a matter of microseconds, at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.

Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology. A single 1GbE port can transfer data at around 120MB/s. For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system. Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting. That’s a bit like buying a Ferrari F12berlinetta (top speed: over 211 mph) and committing to drive it only on residential streets (speed limit: 25 mph). Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?

Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, provided your CPU can handle the increased TCP stack overhead and still perform application tasks. In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear. So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?
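The port counts above are simple back-of-the-envelope arithmetic: divide the flash system's throughput by the usable data rate of one port and round up. A quick sketch, using the article's illustrative 8GB/s figure and approximate usable per-port rates (the exact rates depend on protocol overhead, so treat these as ballpark assumptions):

```python
import math

FLASH_GBPS = 8.0  # example flash system throughput from the text, in GB/s

# Approximate usable data rate per port, in GB/s (assumed round numbers,
# after accounting roughly for protocol/encoding overhead).
PORT_RATES = {
    "1GbE": 0.12,             # ~120 MB/s usable
    "10GbE": 1.2,             # ~1.2 GB/s usable
    "40GbE (RoCE)": 4.8,      # ~4.8 GB/s usable
    "FDR InfiniBand (56Gb/s)": 6.8,  # ~6.8 GB/s usable
}

def ports_needed(flash_gbps: float, port_gbps: float) -> int:
    """Smallest whole number of ports that covers the flash bandwidth."""
    return math.ceil(flash_gbps / port_gbps)

for name, rate in PORT_RATES.items():
    print(f"{name}: {ports_needed(FLASH_GBPS, rate)} port(s)")
```

Running this reproduces the article's numbers: 67 ports of 1GbE, 7 ports of 10GbE, and just 2 ports of either 40GbE with RoCE or FDR InfiniBand.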
Take your interconnect deployment one step further and upgrade that 10GbE connection to 40GbE with RDMA over Converged Ethernet (RoCE), or to a 56Gb/s FDR InfiniBand connection. Two ports of either protocol will give you full bandwidth access to your flash system, and RDMA means ultra-low CPU overhead and increased overall efficiency. Your flash system will perform to its fullest potential, and your application performance will improve drastically. Think land-speed records, except in a data center.
So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.
Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/