September 2013

The Linux Foundation is the nonprofit organization dedicated to accelerating the growth of Linux. Enterprise companies today seek high-value solutions that deliver integration and innovation -- solutions built on a solid foundation of Linux and open source technologies.


Mellanox Technologies supplies InfiniBand and Ethernet interconnect solutions and services for servers and storage, delivering data faster to applications through high throughput, low latency, and offloading technologies such as RDMA. Mellanox's fast interconnect portfolio of adapters, switches, software, cables, and silicon is well suited to challenging computing environments such as High-Performance Computing, Web 2.0, cloud, financial services, databases, and storage.


“Mellanox’s solutions for Linux servers offer world-leading performance for data center applications,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The Linux Foundation’s long-term support for Linux, and its role in fast-tracking innovation and collaboration with members is extremely valuable to Mellanox and our customers.”


Read the press release here:


Linux Foundation Welcomes New Members from Enterprise Software, Hardware and Services

High-performance scientific applications typically require the lowest possible latency to keep parallel processes as tightly synchronized as possible. In the past, this requirement drove the adoption of SMP machines, where the floating-point elements (CPUs, GPUs) were placed on the same board as much as possible. With growing demand for compute capability and a lower cost of adoption making large-scale HPC more accessible, clustering has become the preferred architecture for high-performance computing.


We introduce and explore some of the latest advancements in high-speed networking and suggest new usage models that leverage these technologies to meet the requirements of today's demanding applications. The recently launched Mellanox Connect-IB™ InfiniBand adapter introduces a novel high-performance, scalable architecture for HPC clusters. The architecture was designed from the ground up to provide high performance and scalability for the largest supercomputers in the world, today and in the future.


The device includes a new network transport mechanism called Dynamically Connected Transport™ Service (DCT), invented to provide a Reliable Connection transport (the service behind many of InfiniBand's advanced capabilities, such as RDMA, large message sends, and low-latency kernel bypass) at unlimited cluster sizes. We also discuss optimizations for MPI collective communications, which are frequently used for process synchronization, and show how their performance is critical for scalable, high-performance applications.
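To see why collective performance matters so much at scale, consider a simple back-of-the-envelope model (a hypothetical illustration, not Mellanox's or any MPI library's actual algorithm): a tree-based allreduce takes on the order of log2(P) communication steps, so the per-hop link latency is multiplied by the tree depth on every synchronization.

```python
import math

def allreduce_time_us(num_procs, link_latency_us, msg_bytes, bandwidth_gbps):
    """Toy binomial-tree model of an allreduce-style collective:
    roughly 2 * ceil(log2(P)) steps (reduce phase plus broadcast phase),
    each paying one link latency plus the message serialization time.
    All parameters and the model itself are illustrative assumptions."""
    steps = 2 * math.ceil(math.log2(num_procs))
    serialize_us = msg_bytes * 8 / (bandwidth_gbps * 1e3)  # bits / (Gb/s) -> us
    return steps * (link_latency_us + serialize_us)

# An 8-byte allreduce across 4096 processes on a ~1 us, 56 Gb/s fabric:
# latency dominates, and the cost grows with every doubling of the cluster.
print(allreduce_time_us(4096, 1.0, 8, 56))
print(allreduce_time_us(8192, 1.0, 8, 56))
```

Even this crude model shows that for small synchronization messages the collective cost is almost entirely latency, which is why hardware offloads and lower per-hop latency translate directly into application scalability.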


The presentation posted on this blog was delivered at the NCAR International Computing for the Atmospheric Sciences Symposium (iCAS2013).

High-performance simulations require the most efficient compute platforms. The execution time of a given simulation depends upon many factors, such as the number of CPU/GPU cores and their utilization factor and the interconnect performance, efficiency, and scalability. Efficient high-performance computing systems require high-bandwidth, low-latency connections between thousands of multi-processor nodes, as well as high-speed storage systems.
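The trade-off described above can be sketched with a simple Amdahl-style model (a hypothetical illustration; the coefficients are invented, not measured): the parallel portion of a job shrinks as nodes are added, but communication overhead grows, so beyond some cluster size a slower interconnect erases the gains.

```python
def parallel_runtime(serial_time, parallel_fraction, n_nodes, comm_overhead_per_node):
    """Toy runtime model: serial part is fixed, parallel part divides
    across nodes, and communication overhead (a stand-in for interconnect
    cost) grows with node count. All numbers here are assumptions."""
    return (serial_time * (1 - parallel_fraction)          # serial portion
            + serial_time * parallel_fraction / n_nodes    # parallelized work
            + comm_overhead_per_node * n_nodes)            # interconnect cost

# With 95% parallel work, adding nodes helps only until the
# communication term dominates -- hence the emphasis on efficient,
# low-latency interconnects for large clusters.
for n in (8, 64, 512):
    print(n, parallel_runtime(1000.0, 0.95, n, 0.05))
```

Lowering the per-node communication cost pushes the crossover point out, letting the same application scale efficiently to more nodes.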


Mellanox has released "Deploying HPC Clusters with Mellanox InfiniBand Interconnect Solutions". This guide describes how to design, build, and test a high-performance computing (HPC) cluster using the Mellanox® InfiniBand interconnect, covering the installation and setup of the infrastructure, including:


  • HPC cluster design
  • Installation and configuration of the Mellanox Interconnect components
  • Cluster configuration and performance testing