9 Replies Latest reply on May 3, 2016 3:08 AM by drolfe

    Trunks, pvlans, infiniband world


      Hi, I have some HP c7000 blade chassis with some 40G InfiniBand (unmanaged) switches.


      We want to connect an all-SSD array to the network.


      I don't really understand how InfiniBand switches are connected together.


      Can I add more than one "trunk" between the switches? Do I have to worry about loops etc. with InfiniBand?


      Also, how can I PVLAN off the chassis servers so they can't see each other, only the external storage array?


      We'd likely use opensm as a start, or are we better off getting a switch with a built-in subnet manager?


      Regards, Daniel

        • Re: Trunks, pvlans, infiniband world

          Hi Daniel,


          1. You can add links between the switches, but what is the current topology of the network?

          Is it just standalone C7000 blade chassis (each with an IB blade switch), or are there 1U IB switches somewhere as well?


          2. Using opensm from MLNX_OFED can do the trick; the isolation you describe is achieved with the partitions feature.
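          As a rough sketch of what that partitions configuration could look like (the partition name, PKey value, and the array's port GUID below are placeholders, not values from your fabric):

```
# /etc/opensm/partitions.conf (sketch; GUID and PKey are placeholders)
# Default partition: every port needs membership somewhere.
Default=0x7fff, ipoib : ALL=full;
# Storage partition: the array's port GUID is a full member, all
# other ports are limited members. Limited members can reach full
# members but not each other, which gives the PVLAN-like behaviour
# you asked about.
Storage=0x0001 : 0x0002c903000abcde=full, ALL=limited;
```

          Check the opensm documentation shipped with your MLNX_OFED version for the exact syntax it supports.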

          • Re: Trunks, pvlans, infiniband world

            Hi, I do have another question.

            If I have dual links connected to the same IB partition, will these load-balance traffic to the host?




            I'm running a Solaris SRP target and I'm trying to figure out how the clients will reach a dual-homed host.

              • Re: Trunks, pvlans, infiniband world

                Hi Daniel,

                Assuming that adaptive routing is not enabled, available paths are statically balanced between each pair of Local IDs (LIDs). If only one LID exists on each end of the two cables, only one cable will be used. If a server has two ports (two LIDs), there are two sets of paths to that server from every other LID in the subnet, and the routing algorithms will try to statically allocate half of the paths to each cable. But the existence of multiple available paths doesn't mean they'll be utilized. Load balancing across two ports on the same server is a function of the Upper Layer Protocol (IPoIB, SRP, etc.): IPoIB doesn't do load balancing; MPI stacks do.
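                To make the per-LID pinning concrete, here is a toy model. This is not opensm's actual routing engine; it only illustrates that static balancing happens per destination LID, not per packet or per flow:

```python
# Toy illustration of static path assignment across two parallel
# inter-switch links. NOT opensm's real algorithm: it just shows
# that each destination LID is pinned to exactly one link, so
# traffic to any single LID always uses the same cable.

def assign_links(dest_lids, num_links=2):
    """Pin each destination LID to one link, round-robin style."""
    return {lid: lid % num_links for lid in dest_lids}

routes = assign_links(range(10, 16))
print(routes)
# Traffic to any one LID always takes the same link; only traffic
# spread across many LIDs exercises both cables.
```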


                See some examples here as well regarding unused links.


                VPI Gateway Considerations



                  • Re: Trunks, pvlans, infiniband world

                    Hmm, interesting. I did notice that the Node GUID is different from the Port GUIDs.


                    So as long as the target address is the Node GUID, IB will load balance?


                    As an example:

                    # ibstat
                    CA 'mlx4_0'
                            CA type: MT25418
                            Number of ports: 2
                            Firmware version: 2.3.0
                            Hardware version: a0
                            Node GUID: 0x0002c9030002fb04                 !!!!!!!!!!!!!!!!!!!
                            System image GUID: 0x0002c9030002fb07
                            Port 1:
                                    State: Active
                                    Physical state: LinkUp
                                    Rate: 20
                                    Base lid: 2
                                    LMC: 0
                                    SM lid: 1
                                    Capability mask: 0x02510868
                                    Port GUID: 0x0002c9030002fb05         !!!!!!!!!!!!!!!!!!!
                            Port 2:
                                    State: Down
                                    Physical state: Polling
                                    Rate: 10
                                    Base lid: 0
                                    LMC: 0
                                    SM lid: 0
                                    Capability mask: 0x02510868
                                    Port GUID: 0x0002c9030002fb06         !!!!!!!!!!!!!!!!!!!