The latest advancement in GPU-GPU communications is GPUDirect RDMA. This new technology provides a direct peer-to-peer (P2P) data path between GPU memory and Mellanox HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network.
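To make the P2P data path concrete, the sketch below shows the usage pattern GPUDirect RDMA enables: a buffer allocated in GPU memory with cudaMalloc is registered directly with the InfiniBand verbs API, so the HCA can read and write it without a host bounce buffer. This is a minimal illustration, not code from the release; the helper name is hypothetical, error handling is trimmed, and it assumes a working ibverbs protection domain and a supported GPU/driver stack.

```c
/* Sketch: registering GPU device memory for RDMA.
 * Hypothetical helper; assumes GPUDirect RDMA support is installed. */
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t len)
{
    void *gpu_buf = NULL;
    cudaMalloc(&gpu_buf, len);   /* ordinary CUDA device allocation */

    /* With GPUDirect RDMA, the device pointer can be passed straight
     * to ibv_reg_mr(); the HCA then moves data to/from GPU memory
     * directly, with no CPU staging copy in the data path. */
    return ibv_reg_mr(pd, gpu_buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```

The resulting memory region can then be used in RDMA work requests exactly like one backed by host memory.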


I am pleased to announce that the GPUDirect RDMA Beta is now available for download directly from the Mellanox website.


Please note: any previous versions of OFED or MLNX_OFED, and any patches applied with the ALPHA releases of GPUDirect RDMA, should not be installed on your system.


Currently, the BETA release of GPUDirect RDMA requires the following prerequisites to be installed and operational:

GPU adapters supported (required): NVIDIA® Tesla™ K-Series (K10, K20, K40)

GPU drivers supported (required): NVIDIA Linux x64 (AMD64/EM64T) Display Driver, version 331.20 or later

Mellanox interconnect adapters supported (required): ConnectX-3, ConnectX-3 Pro, or Connect-IB

Mellanox OFED (required): Mellanox OFED 2.1 or later

Development tools (optional): NVIDIA® CUDA® 5.5 or later
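A quick way to see where a system stands against these prerequisites is to query each component in turn. The snippet below is a rough convenience check, not part of the release; it uses the standard `ofed_info`, `nvidia-smi`, and `lspci` tools and degrades gracefully when a component is missing.

```shell
# Report the installed MLNX_OFED version (needs 2.1 or later).
if command -v ofed_info >/dev/null 2>&1; then
    ofed_status=$(ofed_info -s)
else
    ofed_status="MLNX_OFED not installed"
fi
echo "OFED:   $ofed_status"

# Report the NVIDIA display driver version (needs 331.20 or later).
if command -v nvidia-smi >/dev/null 2>&1; then
    drv_status=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)
else
    drv_status="NVIDIA driver not installed"
fi
echo "Driver: $drv_status"

# Look for a Mellanox adapter (ConnectX-3, ConnectX-3 Pro, or Connect-IB).
if lspci 2>/dev/null | grep -qi mellanox; then
    hca_status="Mellanox adapter detected"
else
    hca_status="no Mellanox adapter found"
fi
echo "HCA:    $hca_status"
```

Each line of output flags one prerequisite, so a missing component is obvious at a glance.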


NOTE: If you are using the current evaluation version of MVAPICH2 for GPUDirect RDMA, you will also need the latest RPM of MVAPICH2-GDR 2.0b, which will be publicly available directly from the OSU MVAPICH website in the coming days.