As a follow-up to the earlier post, Test Driving GPUDirect RDMA with MVAPICH2-GDR and Open MPI, this post looks at HOOMD-blue, an HPC molecular dynamics (MD) simulation application that demonstrates the benefits of GPUDirect RDMA technology.


HOOMD-blue stands for Highly Optimized Object-oriented Many-particle Dynamics -- Blue Edition. It performs general-purpose particle dynamics simulations on a single workstation, taking advantage of NVIDIA GPUs to attain a level of performance equivalent to many processor cores on a fast cluster. HOOMD-blue is free and open source, and its development is led by the Glotzer group at the University of Michigan.


The HPC Advisory Council has performed a benchmarking study with HOOMD-blue that highlights the benefits of GPUDirect RDMA with the Connect-IB FDR InfiniBand adapter. GPUDirect RDMA can also be used with the ConnectX-3 FDR InfiniBand adapter.
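To make the data path concrete, here is a minimal sketch of the kind of CUDA-aware MPI exchange that benefits from GPUDirect RDMA. It is not code from HOOMD-blue, and the buffer size and message pattern are illustrative assumptions. With a CUDA-aware MPI library such as MVAPICH2-GDR, or Open MPI built with GPUDirect RDMA support, device pointers can be handed directly to MPI calls, and the InfiniBand adapter can read and write GPU memory without staging the data through host buffers.

```c
/* Minimal sketch: rank 0 sends a buffer that resides in GPU memory
 * directly to rank 1. Requires a CUDA-aware MPI (e.g. MVAPICH2-GDR or
 * Open MPI with GPUDirect RDMA enabled); otherwise the library falls
 * back to staging through host memory. Buffer size is arbitrary. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 16384;                 /* illustrative message size */
    float *d_buf;                        /* device (GPU) buffer */
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(float));
        /* Device pointer passed straight to MPI_Send: no cudaMemcpy
         * to a host staging buffer is needed. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats directly into GPU memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With MVAPICH2-GDR, GPUDirect RDMA is typically switched on at run time through environment variables such as MV2_USE_CUDA and MV2_USE_GPUDIRECT; Open MPI provides an equivalent control in its InfiniBand transport layer.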


The performance improvement can be seen both on a small 4-node cluster at the HPC Advisory Council and on the 96-node Wilkes cluster at the University of Cambridge. The graph below shows a 20% improvement with GPUDirect RDMA at 4 nodes for the 16K-particle case:

[Graph: HOOMD-blue performance with and without GPUDirect RDMA, 16K particles, 4 nodes]

The study also demonstrates that, with GPUDirect RDMA deployed for HOOMD-blue, scalability on the Wilkes cluster at the University of Cambridge surpasses that of the ORNL Titan cluster by up to 114% at 32 nodes on the same input dataset, even though the Wilkes cluster runs the slightly slower NVIDIA K20 GPUs.

[Graph: HOOMD-blue scalability on Wilkes vs. ORNL Titan]

The complete HOOMD-blue performance benchmarking study can be found in this HPC Advisory Council presentation:

http://hpcadvisorycouncil.com/pdf/HOOMDblue_Analysis_and_Profiling.pdf