
    InfiniBand's RDMA capability on Windows 8.1 with Intel MPI

    seongyunko

      We are deploying a cluster whose nodes run Windows 8.1 and are equipped with Mellanox InfiniBand adapters (Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]).

       

      I am using the Intel MPI Library and trying to use RDMA (via DAPL) to take advantage of the full bandwidth of InfiniBand.
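 

      In case the launch configuration matters: below is roughly how I am selecting the DAPL fabric with the Intel MPI Library on Windows. The provider string and host names are placeholders; the real provider name has to match an entry in the DAPL configuration (dat.conf) on our nodes.

        rem Rough sketch of my launch settings (provider and host names are placeholders)
        set I_MPI_FABRICS=shm:dapl
        set I_MPI_DAPL_PROVIDER=ofa-v2-mlx4_0-1u
        mpiexec -n 2 -ppn 1 -hosts node1,node2 IMB-MPI1 PingPong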

       

      I installed the InfiniBand driver from Mellanox's website successfully.

       

      1. I cannot run my MPI benchmark application with DAPL enabled. When I fall back to TCP/IP (IPoIB), the numbers are not good (~300 MB/s over MPI); see the diagnostic sketch after this list.

       

      2. In a separate test where one machine acts as a server and another as a client, they can send/receive messages at up to 2~3 GB/s, which is exactly what I want...
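 

      For reference, this is the kind of diagnostic run I have been trying in order to see which fabric is actually selected (host names are again placeholders). If I read the documentation correctly, I_MPI_DEBUG makes the startup output report the chosen fabric/provider, and I_MPI_FALLBACK=0 should make the job abort instead of silently dropping back to TCP.

        rem Diagnostic run: verify the selected fabric instead of silently falling back to tcp
        set I_MPI_FABRICS=shm:dapl
        set I_MPI_DEBUG=5
        set I_MPI_FALLBACK=0
        mpiexec -n 2 -ppn 1 -hosts node1,node2 IMB-MPI1 PingPong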

       

      If you have any knowledge about this, please share it with me.