Mellanox HPC Advantages


    Here is a list of external posts discussing the value of InfiniBand in the HPC world compared with other interconnect technologies.

    OrionX Reports Position InfiniBand as the Leading HPI Technology and Mellanox the Leading Vendor

    In this post, Peter ffoulkes from OrionX outlines a series of new reports that show how InfiniBand continues to dominate the market for High Performance Interconnects.

    The OrionX Constellation reports published June 29th address the evolution, environment, evaluation and excellence ratings for the High Performance Interconnect (HPI) market. Defined as the very high end of the networking equipment market where high bandwidth and low latency are non-negotiable, HPI technologies support the most demanding workloads that are typical of extreme-scale systems in high performance computing (HPC), artificial intelligence, cloud computing, and web-scale deployments.

    To read more click here: http://insidehpc.com/2016/07/orionx-reports-position-infiniband-as-the-leading-hpi-technology-and-mellanox-the-leading-v…


    InfiniBand Enables Intelligent Networks

    In this post, Gilad Shainer, VP of Marketing at Mellanox, writes that the network is the key to future scalable systems. In the world of high-performance computing, there is a constant and ever-growing demand for even higher performance. Technology providers have worked ceaselessly to keep up with that demand, delivering each new generation of faster, more reliable, and more efficient systems. To read more click here: http://insidehpc.com/2016/01/infiniband-enables-intelligent-networks/


    Slidecast: Advantages of Offloading Architectures for HPC

    In this slidecast, Gilad Shainer, VP of Marketing at Mellanox, describes the advantages of InfiniBand and the company’s offloading network architecture for HPC. “The path to Exascale computing is clearly paved with Co-Design architecture. By using a Co-Design approach, the network infrastructure becomes more intelligent, which reduces the overhead on the CPU and streamlines the process of passing data throughout the network. A smart network is the only way that HPC data centers can deal with the massive demands to scale, to deliver constant performance improvements, and to handle exponential data growth.” To learn more click here: Interconnect Your Future with InfiniBand - YouTube
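
    As a rough illustration of the offloading idea above, here is a minimal MPI sketch (not taken from the slidecast; the file name, buffer names, and sizes are arbitrary) that overlaps independent computation with a non-blocking MPI_Iallreduce. On an interconnect that offloads collective progress to the network hardware, the reduction can advance while the host CPU runs the compute loop; an onloading design would instead spend host CPU cycles moving the data.

        /* overlap.c - compile with an MPI wrapper, e.g. mpicc overlap.c -o overlap */
        #include <mpi.h>
        #include <stdio.h>

        #define N (1 << 20)                     /* 1M doubles per buffer */

        static double sendbuf[N], recvbuf[N], local[N];

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            for (int i = 0; i < N; i++)
                sendbuf[i] = rank + 1.0;

            /* Start the reduction; with offloaded (in-network) collectives it
               can progress without consuming host CPU cycles. */
            MPI_Request req;
            MPI_Iallreduce(sendbuf, recvbuf, N, MPI_DOUBLE, MPI_SUM,
                           MPI_COMM_WORLD, &req);

            /* Independent computation overlapped with the collective. */
            for (int i = 0; i < N; i++)
                local[i] = 0.5 * i + rank;

            /* Block only when the reduction result is actually needed. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);

            if (rank == 0)
                printf("recvbuf[0] = %f, local[0] = %f\n", recvbuf[0], local[0]);

            MPI_Finalize();
            return 0;
        }

    Run it with, for example, mpirun -np 4 ./overlap. How much real overlap is achieved depends on whether the MPI library and the interconnect progress the operation asynchronously, which is exactly the property offloading architectures aim to provide.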

    Interview: Why Co-design is the Path Forward for Exascale Computing

    In this video, Gilad Shainer, VP of Marketing at Mellanox, describes how co-design is the path forward to exascale computing. “Co-Design is a collaborative effort among industry thought leaders, academia, and manufacturers to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. Co-Design architecture enables all active system devices to become acceleration devices by orchestrating a more effective mapping of communication between devices in the system. This produces a well-balanced architecture across the various compute elements, networking, and data storage infrastructures that exploits system efficiency and even reduces power consumption.” To learn more click here: http://insidehpc.com/2016/03/co-design-exascale/


    The Ultimate Debate – Interconnect Offloading Versus Onloading

    The high performance computing market is going through a technology transition – the Co-Design transition. As has already been discussed in many articles, this transition has emerged in order to solve the performance bottlenecks of today’s infrastructures and applications, performance bottlenecks that were created by multi-core CPUs and the existing CPU-centric system architecture. To learn more click here: https://www.hpcwire.com/2016/04/12/interconnect-offloading-versus-onloading/


    Offloading vs. Onloading: The Case of CPU Utilization

    One of the primary conversations these days in the field of networking is whether it is better to onload network functions onto the CPU or to offload these functions to the interconnect hardware. Onloading interconnect technology is easier to build, but the issue becomes CPU utilization: because the CPU must manage and execute network operations, it has less availability for the applications that are its primary purpose. To learn more click here: https://www.hpcwire.com/2016/06/18/offloading-vs-onloading-case-cpu-utilization/
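
    A hedged sketch of how this trade-off might be observed: the two-rank MPI program below posts one large non-blocking transfer and counts how many units of dummy “application” work the CPU completes before the transfer finishes. The buffer size, loop body, and file name are arbitrary illustrative choices; the point is only that the fewer cycles the host spends progressing the transfer, the more of these iterations remain for the application.

        /* cpu_avail.c - compile with e.g. mpicc cpu_avail.c -o cpu_avail */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        enum { COUNT = 8 * 1024 * 1024 };   /* 8M doubles = 64 MB payload */

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            if (size != 2) {
                if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
                MPI_Abort(MPI_COMM_WORLD, 1);
            }

            double *buf = malloc(COUNT * sizeof(double));
            for (int i = 0; i < COUNT; i++)
                buf[i] = (double)i;

            int peer = 1 - rank;
            MPI_Request req;
            double t0 = MPI_Wtime();

            /* Post one large non-blocking transfer between the two ranks. */
            if (rank == 0)
                MPI_Isend(buf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);
            else
                MPI_Irecv(buf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);

            /* Do dummy "application" work until the transfer completes; the
               iteration count is a crude proxy for the CPU capacity left to
               the application while the network is busy. */
            volatile double sink = 0.0;
            long iterations = 0;
            int done = 0;
            while (!done) {
                for (int i = 0; i < 1000; i++)
                    sink += i * 1e-9;
                iterations++;
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            }

            printf("rank %d: transfer took %.3f s, %ld work iterations overlapped\n",
                   rank, MPI_Wtime() - t0, iterations);

            free(buf);
            MPI_Finalize();
            return 0;
        }

    Run it as mpirun -np 2 ./cpu_avail. Comparing iteration counts across different network stacks gives only a rough indication, since the result also depends on the MPI library’s asynchronous-progress settings, but it captures the basic onloading-versus-offloading question: how much of the CPU is left for the application while data is moving.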


    Ranking High Performance Interconnects

    With the increasing adoption of scale-out architectures and cloud computing, high performance interconnect (HPI) technologies have become a more critical part of IT systems. Today, HPI represents its own market segment at the upper echelons of the networking equipment market, supporting applications requiring extremely low latency and exceptionally high bandwidth. To learn more click here: http://www.nextplatform.com/2016/07/14/ranking-high-performance-interconnects/


    Co-Design Architectures - Designing Machines Around the Problem

    The full guide to Co-Design Architectures, written by Doug Eadline, is attached.