3 Replies · Latest reply on Apr 30, 2013 9:01 PM by justinclift

    Feasibility of using ConnectX-3 in HP SL230c

      I am trying to get some insight into how to maximize I/O performance beyond 2x10GbE per server for HP SL230c servers.

       

      Since one 10GbE leg will be tied to a storage network, and since I expect to run I/O-heavy workloads, I am aiming for additional I/O capacity beyond just a single extra 10GbE port.

       

      Thus I am considering either a Mellanox ConnectX-3 (mellanox.com/related-docs/p) or Interface Masters' low-profile version of the Niagara 32714L NIC (Niagara 32714L - Quad Port Fiber 10 Gigabit Ethernet NIC Low Profile PCI-e Server Adapter Card).

       

      However, several questions arise:

      a) Are any of the ConnectX-3 cards low-profile enough to fit and work in HP SL230c servers?

      b) If yes, is it only the single-port 1G/10G/40G card, or also the dual-port 1G/10G/40G card?

      c) If I make the servers deal mostly with I/O, how many 10GbE ports can an HP SL230c with dual 8-core CPUs saturate - or do I already hit PCIe bus limits with the second 10GbE port?

      d) The original HP 2x10GbE cards use approx. 11W - what power draw should I expect for the single- and dual-port Mellanox cards? I certainly want to avoid getting into any thermal trouble.

      e) What kind of performance have people achieved for VM-to-VM traffic within a data center under OpenStack/KVM using the Mellanox SR-IOV implementation? Is any such SR-IOV performance data available anywhere - even if it is the usual VMware material?

      f) How can SR-IOV-enabled ports and their bandwidth be discovered in OpenStack when creating networks between VMs?

        • Re: Feasibility of using ConnectX-3 in HP SL230c

          Let me see if I can help you here:

           

          a) Are any of the ConnectX-3 cards low-profile enough to fit and work in HP SL230c servers?

          • Yes - see the following adapters from the HP website:

          b) If yes, is it only the single-port 1G/10G/40G card, or also the dual-port 1G/10G/40G card?

          • HP InfiniBand FDR/EN 10/40Gb Dual Port 544FLR-QSFP Adapter
            • NOTE:
              • Dual port ConnectX-3 FDR/QDR IB or 40/10GbE. No IB Enablement Kit required.
              • 649282-B21
          • HP InfiniBand QDR/EN 10Gb Dual Port 544FLR-QSFP Adapter
            • NOTE:
              • Dual port ConnectX-3 QDR IB or 10GbE. No IB Enablement Kit required.
              • 649283-B21
          • HP InfiniBand FDR/EN 10/40Gb Dual Port 544QSFP Adapter
            • NOTE:
              • Dual port ConnectX-3 FDR/QDR IB or 40/10GbE. No IB Enablement Kit required.
              • 649281-B21

          c) If I make the servers deal mostly with I/O, how many 10GbE ports can an HP SL230c with dual 8-core CPUs saturate - or do I already hit PCIe bus limits with the second 10GbE port?

          • The PCIe bus limit on an x8 Gen 3 slot is roughly 54Gbps of usable throughput, so 2 x 10GbE will not come close. You could run one port at 10GbE and one port at 40GbE to fill that, or run one port at FDR InfiniBand - your choice; we support all of these configurations. We also have 40GbE switches (the SX1036, for example) that give you both 10GbE and 40GbE, with 36 ports of 40GbE or up to 64 ports of 10GbE. Also check our SX1024, which does the splitting for you already, with 48 ports of 10GbE and 12 ports of 40GbE - all on one piece of silicon!
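
          As a rough sanity check on that figure (the encoding and overhead numbers below are my own assumptions, not HP or Mellanox specs), the ~54Gbps of usable x8 Gen 3 bandwidth can be reproduced and compared against the port configurations above:

            # Back-of-the-envelope PCIe Gen3 x8 bandwidth vs. NIC port requirements (assumed figures).
            PER_LANE_GBPS = 8 * (128 / 130)   # 8 GT/s per lane with 128b/130b encoding ~= 7.88 Gbps
            LANES = 8
            OVERHEAD = 0.85                   # assumed factor for TLP headers / flow control

            usable_gbps = PER_LANE_GBPS * LANES * OVERHEAD
            print(f"Usable PCIe Gen3 x8 bandwidth: ~{usable_gbps:.0f} Gbps")   # ~54 Gbps

            for config, needed_gbps in [("2 x 10GbE", 20), ("1 x 10GbE + 1 x 40GbE", 50)]:
                verdict = "fits" if needed_gbps <= usable_gbps else "PCIe-limited"
                print(f"{config}: needs {needed_gbps} Gbps -> {verdict}")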

          d) The original HP 2x10GbE cards use approx. 11W - what power draw should I expect for the single- and dual-port Mellanox cards? I certainly want to avoid getting into any thermal trouble.

          • We have the lowest power in the industry - I believe under 9 or 10W.

          e) What kind of performance have people achieved for VM-to-VM traffic within a data center under OpenStack/KVM using the Mellanox SR-IOV implementation? Is any such SR-IOV performance data available anywhere - even if it is the usual VMware material?

          • We can show you very nice performance, near line rate. For now I'd have to email you that data under NDA, as our drivers will be GA soon but the numbers are not published quite yet.
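
          In the meantime, here is a minimal sketch of how VM-to-VM throughput is typically measured - plain iperf3 between two guests, driven from Python; the server address and test duration are placeholders:

            # Run "iperf3 -s" in one guest, then run this in the other guest.
            import json
            import subprocess

            SERVER = "10.0.0.2"   # placeholder: address of the guest running "iperf3 -s"
            DURATION = 30         # test length in seconds

            result = subprocess.run(
                ["iperf3", "-c", SERVER, "-t", str(DURATION), "-J"],  # -J = JSON report
                capture_output=True, text=True, check=True,
            )
            report = json.loads(result.stdout)
            gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9
            print(f"VM-to-VM TCP throughput: {gbps:.2f} Gbps")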

          f) How can SR-IOV-enabled ports and their bandwidth be discovered in OpenStack when creating networks between VMs?

          • I have a whole presentation about the Quantum plugin, the embedded switch, etc. Please email me at jeffm@mellanox.com to continue our conversation.
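
          In the meantime, SR-IOV capability and link speed can at least be sanity-checked on the hypervisor itself through the standard Linux sysfs paths - a rough host-side sketch (it simply walks whatever interfaces expose SR-IOV attributes, so interface names are not hard-coded):

            # Rough host-side check for SR-IOV capability and link speed via sysfs.
            import glob
            import os

            for dev_path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
                iface = dev_path.split("/")[4]
                with open(dev_path) as f:
                    total_vfs = f.read().strip()
                with open(os.path.join(os.path.dirname(dev_path), "sriov_numvfs")) as f:
                    active_vfs = f.read().strip()
                try:
                    with open(f"/sys/class/net/{iface}/speed") as f:
                        speed = f.read().strip() + " Mb/s"
                except OSError:
                    speed = "unknown (link down?)"
                print(f"{iface}: SR-IOV VFs {active_vfs}/{total_vfs}, link speed {speed}")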
          • Re: Feasibility of using ConnectX-3 in HP SL230c
            justinclift

            theo19 wrote:

             

            e) What kind of performance have people achieved for VM-to-VM traffic within a data center under OpenStack/KVM using the Mellanox SR-IOV implementation? Is any such SR-IOV performance data available anywhere - even if it is the usual VMware material?

             

            As a thought, are you a Red Hat customer or would you be looking to use RHEL as the OS?

             

            Asking because Red Hat has some extremely knowledgeable people in the Performance and Cloud team(s), who might already have performance numbers and configuration details for this kind of setup.

             

            So, if you have an existing relationship with Red Hat, hassle your contact person to go and ask the perf team for that info (if it exists, etc.).