Mellanox Adapters - Comparison Table

Version 15

This post shows the main differences in feature support across the latest Mellanox adapters.

This post is basic and is aimed at FAEs and IT managers.


| Class | Feature | ConnectX-3 | ConnectX-3 Pro | ConnectX-4 Lx | ConnectX-4 | Connect-IB | References and Notes |
|---|---|---|---|---|---|---|---|
| Interface | PCIe | x8 Gen3 | x8 Gen3 | x8 Gen3 | x8, x16 Gen3 | x8, x16 Gen3 | |
| Interface | Port/Speed options | 2 ports of 10/40/56GbE | 2 ports of 10/40/56GbE | 2 ports of 10/25GbE or 1 port of 40/50GbE | 2 ports of 100/56/50/40/25/10GbE | 2 ports of FDR InfiniBand | Note: there are various adapter card OPNs, and some are speed limited, e.g. a ConnectX-4 FDR OPN (which will not link up at EDR) or a Connect-IB OPN that is x8 only, not x16. |
| Interface | Coherent Accelerator Processor Interface (CAPI) | - | - | - | Supported | N/A | IBM - Coherent Accelerator Processor Interface (CAPI); http://openpowerfoundation.org/blogs/interconnect-your-future-mellanox-100gb-edr-capi-infiniband-and-interconnects/ |
| RDMA and RoCE | RDMA/RoCE | RDMA, RoCEv1 | RDMA, RoCEv1,2 | RoCEv1,2 | RDMA, RoCEv1,2 | RDMA | RDMA/RoCE Solutions. Note: ConnectX-4 Lx is Ethernet only (RoCE) while Connect-IB is InfiniBand only (RDMA); see the link-layer sketch after the table. |
| RDMA and RoCE | RoCE Congestion Control | - | Supported | Supported | Supported | N/A | RDMA/RoCE Solutions; Understanding RoCEv2 Congestion Management; RoCEv2 CNP Packet Format Example; HowTo Configure RoCE Congestion Control for Windows 2012 |
| HPC | CORE-Direct | Supported | Supported | Supported | Supported | Supported | Exploited by FCA and extended by SHARP. http://www.mellanox.com/related-docs/whitepapers/TB_CORE-Direct.pdf |
| HPC | PeerDirect | Supported | Supported | Supported | Supported | Supported | GPUDirect uses PeerDirect. http://www.mellanox.com/page/products_dyn?product_family=116 |
| HPC | Dynamically Connected Transport (DCT) | - | - | Supported | Supported | Supported | http://www.mellanox.com/related-docs/applications/SB_Connect-IB.pdf |
| CPU Offloads | Stateless Ethernet offloads | Supported | Supported | Supported (LRO, LSOv2) | Supported (LRO, LSOv2) | N/A | LRO stands for Large Receive Offload: it increases inbound throughput of high-bandwidth connections by reducing CPU overhead, aggregating multiple incoming packets from a single stream into a larger buffer before they are passed up the networking stack, so fewer packets have to be processed; LRO is available in kernel versions < 3.1 for untagged traffic. LSO stands for Large Send Offload. See the MLNX_OFED User Manual for more info, and the LRO query sketch after the table. |
| CPU Offloads | RSS (MAC, VLAN, 5-tuple) | Supported | Supported | Supported + | Supported + | N/A | HowTo Configure RSS on ConnectX-3 Pro for Windows 2012 server |
| Virtualization | SR-IOV | Supported | Supported | Supported | Supported | Supported | Virtualization Solutions |
| Virtualization | Multi-Host | - | - | - | Supported | N/A | http://www.mellanox.com/page/products_dyn?product_family=210&mtag=multihost |
| Overlay Network (VXLAN/NVGRE) | Hardware offload | - | Supported | Supported | Supported | N/A | Virtualization Solutions |
| Overlay Network (VXLAN/NVGRE) | Encap/Decap | - | - | Supported | Supported | N/A | The difference between ConnectX-3 Pro and ConnectX-4 is that encapsulation and decapsulation of the overlay protocol (e.g. VXLAN/NVGRE) is done in hardware on ConnectX-4; see the VXLAN header sketch after the table. |
| Storage | Erasure Coding Offload | - | - | Supported | Supported | N/A | Reed-Solomon erasure coding hardware offload is supported by these adapters; verbs APIs are available. Understanding Erasure Coding Offload |
| Storage | T10/DIF Signature Handover | - | - | - | Supported | Supported | HowTo Enable T10-PI (T10-DIF) Data Integrity Protection in iSER with LIO Target |
| Cloud Integration | Mirantis Fuel | Supported (Fuel 7.0/8.0) | Supported (Fuel 7.0/8.0) | Supported (Fuel 8.0) | Supported (Fuel 8.0) | | Cloud Solutions |
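
Related to the RDMA/RoCE row above: on a Linux host you can tell whether a given adapter port runs native InfiniBand (RDMA) or Ethernet (RoCE) from the kernel's sysfs entries. The following is a minimal sketch, assuming a standard Linux RDMA stack with the mlx4/mlx5 drivers loaded; device names such as mlx5_0 are only examples, and this is generic kernel tooling rather than a Mellanox-specific API.

```python
#!/usr/bin/env python3
"""List RDMA devices and report whether each port runs InfiniBand (native RDMA)
or Ethernet (RoCE), using the standard Linux sysfs entries."""
from pathlib import Path

SYSFS_IB = Path("/sys/class/infiniband")  # populated when an RDMA-capable driver (e.g. mlx5_core) is loaded


def list_rdma_ports() -> None:
    if not SYSFS_IB.is_dir():
        print("No RDMA devices found - is the mlx4/mlx5 driver loaded?")
        return
    for dev in sorted(SYSFS_IB.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()  # "InfiniBand" or "Ethernet"
            state = (port / "state").read_text().strip()            # e.g. "4: ACTIVE"
            print(f"{dev.name} port {port.name}: link_layer={link_layer}, state={state}")


if __name__ == "__main__":
    list_rdma_ports()
```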
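For the stateless Ethernet offload row: LRO can be inspected and toggled per interface with the standard ethtool utility (ethtool -k to query, ethtool -K <iface> lro on|off to change it). A small sketch, assuming ethtool is installed and the interface name is passed on the command line; again, this is generic Linux tooling, not an adapter-specific interface.

```python
#!/usr/bin/env python3
"""Query and (optionally) toggle Large Receive Offload on a NIC via ethtool."""
import subprocess
import sys


def lro_state(iface: str) -> str:
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if line.strip().startswith("large-receive-offload"):
            return line.split(":", 1)[1].strip()  # e.g. "on", "off", "off [fixed]"
    return "unknown"


def set_lro(iface: str, enable: bool) -> None:
    # Needs root; fails if the driver reports the feature as fixed.
    subprocess.run(["ethtool", "-K", iface, "lro", "on" if enable else "off"], check=True)


if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"
    print(f"{iface}: large-receive-offload is {lro_state(iface)}")
```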
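For the overlay network Encap/Decap row: for VXLAN, what the hardware adds or strips per packet is the outer Ethernet/IP/UDP headers plus the 8-byte VXLAN header defined in RFC 7348. The sketch below only builds that 8-byte header to illustrate the format; the VNI value and function names are illustrative, and this is not how the offload itself is programmed.

```python
#!/usr/bin/env python3
"""Build the 8-byte VXLAN header from RFC 7348 - the part (together with the outer
Ethernet/IP/UDP headers) that the NIC adds or strips when encap/decap is offloaded."""
import struct

VXLAN_UDP_PORT = 4789        # IANA destination port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field carries a valid network identifier


def vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    # Word 0: flags (8 bits) + 24 reserved bits; word 1: VNI (24 bits) + 8 reserved bits.
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)


def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    # On the wire this payload follows outer Ethernet/IP/UDP (dst port 4789),
    # which are omitted here for brevity.
    return vxlan_header(vni) + inner_frame


if __name__ == "__main__":
    print(vxlan_header(vni=5001).hex())  # -> 0800000000138900
```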