Frequently Asked Questions: GPUDirect RDMA

Version 2

    Overview

     

    The next major advancement in GPU-GPU communications is GPUDirect RDMA. This technology provides a direct P2P (Peer-to-Peer) data path between GPU memory and Mellanox HCA devices. This significantly decreases GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network.

     
    Please note: Previous versions of OFED, Mellanox OFED, and any patches applied with the ALPHA releases of GPUDirect RDMA must not be installed on your system; they may cause the installation to fail or to work improperly.

     

    Frequently Asked Questions

     

    1. What are GPUDirect and GPUDirect RDMA?
      GPUDirect is an umbrella term for technologies that improve interoperability between NVIDIA GPUs and third-party devices, such as Mellanox ConnectX-3 or Connect-IB devices. GPUDirect RDMA is a feature introduced with Kepler-class GPUs and CUDA 5.0 that enables a direct path for communication between the GPU and a peer device using standard features of PCI Express. The devices must share the same upstream root complex. A few straightforward changes must be made to device drivers to enable this functionality with a wide range of hardware devices. This document introduces the technology and describes the steps necessary to enable a GPUDirect RDMA connection to NVIDIA GPUs on Linux.
    2. Is GPUDirect RDMA supported on Windows platforms?
      At this time, GPUDirect RDMA is only supported on Linux.
    3. What is the difference between the GPUDirect RDMA Alpha, Alpha 2.0, and BETA releases?
      GPUDirect RDMA was initially developed against the community OFED 1.5.4 release with a set of patches provided by Mellanox that targeted only RHEL 6.2. That implementation was a development vehicle only. The GPUDirect RDMA Alpha 2.0 patches were provided to support the capabilities of MLNX_OFED 2.0, and additional support was added for the RHEL 6.3 and RHEL 6.4 development platforms. The latest BETA release represents a significant change in driver architecture between supported devices and is a more generic and standard “plug-in” implementation. This means that, in the future, more supported devices can potentially take advantage of the RDMA network to move data directly between different co-processors, storage, or other accelerators. A quick check that the plug-in is loaded is sketched below.
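      As a quick sanity check on the BETA plug-in architecture (the module and service names below are assumptions based on the MLNX_OFED packaging; consult your release notes for the exact names), you can verify that the peer-memory plug-in kernel module is loaded:
      # Confirm the GPUDirect RDMA peer-memory plug-in module is present
      $ lsmod | grep nv_peer_mem
      # If packaged as a service, its status can also be queried
      $ service nv_peer_mem status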
    4. Will GPUDirect RDMA run over a generic Ethernet adapter or any other network adapter?
      No, GPUDirect RDMA is supported only by Mellanox VPI and InfiniBand devices, such as ConnectX-3 or Connect-IB.
    5. Will GPUDirect RDMA run over RoCE-supported adapters?
      Yes, GPUDirect RDMA is supported by Mellanox VPI devices in Ethernet mode and will also take advantage of the RDMA over Converged Ethernet (RoCE) protocol. The port's active link layer can be checked as shown below.
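      As a sketch, the active link layer of a VPI port can be checked with the standard ibv_devinfo utility; a port running RoCE reports an Ethernet link layer (exact output fields may vary by OFED version):
      # Show the transport and link layer for each port
      $ ibv_devinfo | grep -i -e transport -e link_layer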
    6. Will software developed against the earlier GPUDirect 1.0 still run?
      GPUDirect 1.0 allowed the GPU and HCA to register and share data buffers in host memory (i.e., no need to copy the data buffers when switching ownership between HCA and GPU). Initially it was supported as a kernel patch (together with a special MLNX_OFED version); later, support was merged into the kernel (without the need for a special MLNX_OFED). Most software developed on top of MPI should work seamlessly with the latest version of GPUDirect RDMA. There are, however, cases where software, such as the supported MPIs, was developed directly against the GPUDirect RDMA alpha patches and takes advantage of the lower-level verbs implementation. These applications and MPIs require slight changes to take advantage of the new plug-in architecture of the GPUDirect RDMA BETA. Please ensure you are using the latest available MPIs, released after 30 December 2013.
    7. What MPIs are currently providing support for GPUDirect RDMA?
      At the time of this FAQ's publication, only MVAPICH2-GDR 2.0b from Ohio State University (http://mvapich.cse.ohio-state.edu/overview/mvapich2) is supported; however, contributions to Open MPI will also be supported in the 1.7.4 branch (http://www.open-mpi.org). See the sketch of a typical MVAPICH2-GDR launch below.
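      As an illustrative sketch only (the application name is hypothetical, and the authoritative list of runtime parameters is in the MVAPICH2-GDR user guide), GPUDirect RDMA is typically enabled through MVAPICH2 runtime environment variables:
      # Launch a CUDA-aware MPI application on two nodes with GPUDirect RDMA enabled
      $ mpirun_rsh -np 2 node1 node2 MV2_USE_CUDA=1 MV2_USE_GPUDIRECT=1 ./my_gpu_app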
    8. Where can I find more information about GPUDirect RDMA support in Open MPI?
      The Open MPI FAQ (http://www.open-mpi.org/faq/?category=building#build-cuda) has information about configuring and building Open MPI with CUDA support; a minimal build sketch follows.
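      As a minimal sketch (the CUDA installation path is only an example), Open MPI must be configured with CUDA support at build time:
      # Configure and build Open MPI with CUDA support
      $ ./configure --with-cuda=/usr/local/cuda
      $ make && make install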

     

    9. I am having performance issues after installing the software.
      We can distinguish between three situations, depending on what is on the path between the GPU and the third-party device:
      - PCIe switches only
      - a single CPU/IOH
      - CPU/IOH <-> QPI/HT <-> CPU/IOH
      The first situation, where there are only PCIe switches on the path, is optimal and yields the best performance. The second one, where a single CPU/IOH is involved, works, but yields worse performance. Finally, the third situation, where the path traverses a QPI/HT link, doesn't work reliably. Tip: lspci can be used to check the PCI topology, as shown below: $ lspci -t
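      As a system-dependent sketch, you can compare where the GPU and the HCA attach in the PCI tree (the grep patterns are only examples):
      # Display the PCI bus topology as a tree
      $ lspci -tv
      # Identify the bus addresses of the NVIDIA GPU and the Mellanox HCA,
      # then check whether they sit under the same root complex in the tree above
      $ lspci | grep -i -e nvidia -e mellanox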