What is nbdX?


    This post is intended for InfiniBand/RDMA/Accelio/nbdX newcomers who wish to understand what nbdX is and what the advantages of using it are.

     

    Note: as the NVMe over Fabrics specification was published, nbdX has become obsolete. For more information about NVMe over Fabrics configuration, refer to HowTo Configure NVMe over Fabrics.

     

    What is nbdX?

    nbdX stands for "network block device over Accelio", i.e. a network block device built on top of the Accelio framework. nbdX exploits the advantages of the multi-queue implementation in the block layer, as well as Accelio's acceleration facilities, to provide fast IO to a remote device. nbdX translates IO operations into libaio submit operations on the remote device. Compared to iSER: iSER and its targets (TGT/LIO/SCST) are SCSI-based protocols/devices, while nbdX is a pure block device with no SCSI layer. nbdX presents a regular storage block device to the system that can be used like any other storage (format and mount, use directly, etc.).
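     

    To make the libaio-based data path concrete, below is a minimal C sketch. It is illustrative only (not nbdX source code): it submits a single asynchronous read with io_submit() and waits for its completion with io_getevents(), the same libaio primitives used to drive IO against a backing file or device. The file path and block size are placeholders.

    /* Minimal libaio sketch (illustrative only, not nbdX code).
     * Build: gcc aio_read.c -o aio_read -laio
     * Usage: ./aio_read <file-or-block-device>   (path is a placeholder) */
    #define _GNU_SOURCE
    #include <libaio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
            return 1;
        }

        /* O_DIRECT keeps the IO asynchronous end to end; it requires an aligned buffer. */
        int fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf = NULL;
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        io_context_t ctx = 0;
        if (io_setup(1, &ctx) < 0) {                  /* create an AIO context with 1 slot */
            fprintf(stderr, "io_setup failed\n");
            return 1;
        }

        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        io_prep_pread(&cb, fd, buf, BLOCK_SIZE, 0);   /* async read of 4 KiB at offset 0 */

        if (io_submit(ctx, 1, cbs) != 1) {            /* hand the request to the kernel */
            fprintf(stderr, "io_submit failed\n");
            return 1;
        }

        struct io_event ev;
        if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { /* wait for the completion */
            fprintf(stderr, "io_getevents failed\n");
            return 1;
        }
        printf("read completed, result=%ld bytes\n", (long)ev.res);

        io_destroy(ctx);
        free(buf);
        close(fd);
        return 0;
    }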

     

    What are the nbdX advantages?

    • Uses the RDMA protocol to provide higher bandwidth for block storage transfers (zero-copy behavior).
    • Multi-queue implementation for a high-performance block driver.
    • Reliable transport (RC connection).
    • Full system parallelism using multiple RDMA channels.
    • Thin IO path that bypasses the heavy SCSI/iSCSI stacks.
    • Simple and easy to use (see the usage sketch after this list).
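
    Because nbdX exposes an ordinary block device, it can be exercised with standard file IO. The sketch below is a minimal example under the assumption that the device was mapped as /dev/nbdx0 (substitute the device name actually created on your system): it opens the device with O_DIRECT and reads its first 4 KiB with pread(), with no SCSI layer involved.

    /* Minimal sketch of reading directly from an nbdX block device.
     * The device path /dev/nbdx0 is an assumption; use the device
     * actually created on your system.
     * Build: gcc read_nbdx.c -o read_nbdx */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        const char *dev = "/dev/nbdx0";       /* assumed device name */

        /* O_DIRECT bypasses the page cache; it needs an aligned buffer. */
        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf = NULL;
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        /* Synchronous read of the first 4 KiB of the device. */
        ssize_t n = pread(fd, buf, BLOCK_SIZE, 0);
        if (n < 0) { perror("pread"); return 1; }
        printf("read %zd bytes from %s\n", n, dev);

        free(buf);
        close(fd);
        return 0;
    }

    The same device can be formatted and mounted like any other disk, as noted above.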

    Block Diagram

    [Figure: nbdX block diagram (nbdx-nvme-diagram.png)]