
    Designing for high availability

      We are in the early planning stages of a high-bandwidth interconnect between servers. The software stack will be OFED and GPFS on Linux; GPFS can use RDMA.

      The idea is to have two IB switches, with each server connected to both. I've seen such configurations with normal Linux bonding (mode=1, active/passive).

      - How does this work out if RDMA is used?

      - Is it possible to have an active/active configuration, at least for RDMA?

        • Re: Designing for high availability
          ferbs

          Hi,


          If you're planning to use GPFS over RDMA, you'll most likely have to use IPoIB as the upper-layer interface. IPoIB can use the standard Linux bonding driver in an active/passive configuration; as of now I'm not aware of IPoIB working as active/active.


          This is a very common configuration with GPFS.
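
          A minimal sketch of what that looks like with RHEL-style ifcfg files; the interface names, addressing, and the fail_over_mac hint are assumptions you'd adapt to your environment:

          # /etc/sysconfig/network-scripts/ifcfg-bond0
          DEVICE=bond0
          TYPE=Bond
          BONDING_MASTER=yes
          BOOTPROTO=none
          ONBOOT=yes
          IPADDR=10.0.0.1
          NETMASK=255.255.255.0
          # mode=active-backup is bonding mode 1; IPoIB bonds on some kernels
          # may also need fail_over_mac=active, since IPoIB HW addresses can't be cloned
          BONDING_OPTS="mode=active-backup miimon=100 primary=ib0"

          # /etc/sysconfig/network-scripts/ifcfg-ib0 (ifcfg-ib1 is analogous)
          DEVICE=ib0
          TYPE=InfiniBand
          BOOTPROTO=none
          ONBOOT=yes
          MASTER=bond0
          SLAVE=yes

          With ib0 cabled to one switch and ib1 to the other, losing a switch just fails traffic over to the backup port.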

            • Re: Designing for high availability

              I'm familiar with InfiniBand, GPFS and RDMA in a single-switch environment. That is straightforward: you can define your IPoIB interfaces as a bond that fails over when individual links fail. I have also seen configurations where two IB switches are used, with bonding to fail over between them, but in an active/passive arrangement. In both cases the resulting configuration is similar to an analogous Ethernet configuration.


              I'm looking for a configuration similar to two Ethernet switches stacked together, with the hosts using LACP to manage the links. That allows the full bandwidth of the links to be used (link aggregation) and provides fail-over in case of a switch failure.
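
              On the RDMA side I'm wondering whether simply listing both HCA ports in the GPFS configuration would already give multi-rail behaviour, since GPFS can use all ports named in verbsPorts. A sketch, with assumed device/port names (check yours with ibstat):

              # enable the verbs (RDMA) transport in GPFS
              mmchconfig verbsRdma=enable
              # list both ports so GPFS can use them for RDMA
              mmchconfig verbsPorts="mlx4_0/1 mlx4_0/2"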