HowTo Configure nbdX

Version 35

    This post shows how to install and set up the nbdX client and server.

     

    Note: As the NVMe over Fabrics specification has been published, nbdX is now obsolete. For more information about NVMe over Fabrics configuration, refer to HowTo Configure NVMe over Fabrics.

     


    Prerequisites

    • Install the latest version of Accelio (for_next branch - click here) on both the client and the server.
    • Install kernel 3.13.1 or higher on the client side (3.19 is recommended).
    • If you run an InfiniBand fabric, make sure that a Subnet Manager (SM) is running on the network.
    • Make sure that the client and server are on the same subnet.

     

    Follow HowTo Install Accelio for nbdX for the detailed procedure.

     

    Overview

    The order of the configuration steps below is very important. This is the high-level configuration flow:

    1. Run the raio_server (on the nbdx server), listening on a specific IP address and port.

    2. Create a host on the nbdx client that matches the running server's IP address and port.

    3. Create a device on the nbdx client that matches the host you created (and therefore the server's IP address and port).

    4. Run fio with the configured device.

     

    For details, follow the configuration below.
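    The flow above maps onto the commands used later in this guide. A condensed sketch, using this example's addresses (server 11.1.1.1, port 5555, backing file /tmp/bs1):

```shell
# Step 1 - on the server: listen on 11.1.1.1:5555 over the RDMA transport
raio_server -a 11.1.1.1 -p 5555 -c ff -t rdma -f 0

# Steps 2-4 - on the client: create host, create device, then run I/O
nbdxadm -o create_host -i 0 -p "11.1.1.1:5555"
nbdxadm -o create_device -i 0 -d 0 -f "/tmp/bs1"
fio --name=task1 --filename=/dev/nbdx0 --rw=randread --bs=1k \
    --ioengine=libaio --direct=1 --runtime=60 --time_based
```

    These commands require a running nbdx setup (RDMA-capable NICs and the nbdx module loaded); they are shown here only to illustrate the ordering.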

     

    Configuration

    In this example, we use the following IP configuration:

    • Server IP: 11.1.1.1/24 (eth5)
    • Client IP: 11.1.1.2/24 (eth5)

     

    Server example

    1. Create a 100 MB file that will be exposed as a block device to the nbdx client:

    # touch /tmp/bs1

    # truncate -s +100M /tmp/bs1
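    The two commands above create a sparse file: its apparent size is 100 MB, but no disk blocks are allocated until data is written. A quick verification sketch:

```shell
# Create the sparse 100 MB backing file and verify it
touch /tmp/bs1
truncate -s +100M /tmp/bs1
stat -c '%s' /tmp/bs1     # apparent size in bytes: 104857600
du -k /tmp/bs1            # allocated size is near 0 KB (sparse file)
```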

     

    2. The server application will be started on IP address 11.1.1.1, port 5555, over the eth5 interface (see step 4 below).

     

    3. Tune the server:

     

    3.1 Stop the IRQ balancer:

    # service irqbalance stop

    3.2 Assign the IRQ affinity to the nearest NUMA node:

     

    Find the NUMA node connected to the port:

    # cat /sys/class/net/eth5/device/numa_node                      ; Check the NUMA Node number

    0

    Set the IRQ affinity for this NUMA node on the specific port:

    # set_irq_affinity_bynode.sh 0 eth5                             ;  we are assuming that eth5 is close to NUMA 0 and the cpumask for eth5 is ff

    Discovered irqs for eth5: 45 46 47 48 49 50 51 52

    -------------------------------------

    Optimizing IRQs for Single port traffic

    -------------------------------------

    Assign irq 45 core_id 0

    Assign irq 46 core_id 1

    Assign irq 47 core_id 2

    Assign irq 48 core_id 3

    Assign irq 49 core_id 0

    Assign irq 50 core_id 1

    Assign irq 51 core_id 2

    Assign irq 52 core_id 3

    done.
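    If the set_irq_affinity_bynode.sh helper script is not available on your distribution, the same round-robin pinning shown in the output above can be done manually through /proc. A sketch, assuming the IRQ numbers (45-52) and cores (0-3 on NUMA node 0) from this example; adjust them for your system:

```shell
# Pin each eth5 IRQ to a core on NUMA node 0, round-robin over cores 0-3
# (IRQ numbers are the ones discovered above; adjust for your system)
for irq in 45 46 47 48 49 50 51 52; do
    core=$(( (irq - 45) % 4 ))
    echo "Assign irq $irq core_id $core"
    echo $core > /proc/irq/$irq/smp_affinity_list
done
```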

     

    4. Run the raio_server:

    Note:

    1. You can add the "-e 1" flag to run the server in polling mode instead of interrupt mode. This is recommended if you wish to improve latency.

    2. You can add "-n $(nproc)" to open one thread per CPU (the default is 6 threads).

    3. "-p" is the port number.

    # raio_server -a 11.1.1.1 -p 5555 -c ff -t rdma -f 0       

     

    [0] affinity:0/8

    [1] affinity:1/8

    [2] affinity:2/8

    [3] affinity:3/8

    [4] affinity:4/8

    [5] affinity:5/8

     

    Client example

    1. Mount configfs:

    # mount -t configfs none /sys/kernel/config
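    On most modern distributions configfs is already mounted at boot, in which case the mount command above fails harmlessly. An idempotent sketch that mounts it only when needed:

```shell
# Mount configfs only if it is not already mounted
if ! mountpoint -q /sys/kernel/config; then
    mount -t configfs none /sys/kernel/config
fi
mount -t configfs     # list configfs mounts to confirm
```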

     

    2. Load the nbdx kernel module:

    # modprobe nbdx

     

    3. Log in to the nbdx server on the configured port. To do that, use the command below to create a host.

    Creating a host means that the client will be connected to the nbdx server on the given IP address and port.

    Note: The "-i" flag is the host ID; you will need to use the same host ID in the next step.

    # nbdxadm -o create_host -i 0 -p "11.1.1.1:5555"

     

    Note: Do not try to create multiple hosts with the same ID (reboot might be needed).

     

    It is possible to create multiple hosts from each client to the same server using different ports (same IP).

     

    4. Create a block device. Use the "-f" flag with the file or block device as it exists on the server.

    In this example, we created the file: /tmp/bs1.

    The "-d" is the device ID while the "-i" is the host ID used earlier.

    # nbdxadm -o create_device -i 0 -d 0 -f "/tmp/bs1"

    The device ID will be used in the fio command below.

     

    Note: Do not try to create multiple devices with the same ID (reboot might be needed).

     

    5. Verify that the device was created:

    # nbdxadm -o show_all_devices

    DEVICE: nbdx0, FILE: /tmp/bs1, HOST: nbdxhost0, PORTAL: 11.1.1.1:5555

     

    # ls /dev/nbdx0

    /dev/nbdx0

     

    In this example:

    • The device id configured is 0 (nbdx0)
    • The host id configured is 0 (nbdxhost0)

     

    At this stage, an nbdx-type block device (/dev/nbdx0) is available and ready for data transfer.
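    Before running fio, you can confirm the device node is present. A sketch using this example's device ID:

```shell
# Confirm the nbdx block device node exists before running fio
dev=/dev/nbdx0
if [ -b "$dev" ]; then
    echo "$dev is ready for I/O"
else
    echo "$dev not found - re-check create_host/create_device" >&2
fi
```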

     

    6. Run FIO

     

    Make sure you use the /dev/nbdx0 device in the fio command:

     

    # taskset -c 1 fio --rw=randread --bs=1k --numjobs=1 --iodepth=128 --runtime=300 --time_based --loops=1 --ioengine=libaio --direct=1 --invalidate=1 --fsync_on_close=1 --randrepeat=1 --norandommap --exitall --name task1 --filename=/dev/nbdx0

     

    For more information about fio, refer to the fio man page.

     

     

    Tear down the setup

    1. Delete the device on the client, and then delete the host (use the same IDs):

    # nbdxadm -o delete_device -d 0

    # nbdxadm -o delete_host -i 0

     

    2. Stop the raio_server on the nbdx server (Ctrl-C).
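    The client-side cleanup in step 1 can be extended to also unload the nbdx module once no devices remain. A sketch; note that modprobe -r will fail if any nbdx device is still in use, which acts as a useful safety check:

```shell
# Client-side cleanup (sketch): delete device, delete host, unload module
nbdxadm -o delete_device -d 0
nbdxadm -o delete_host -i 0
modprobe -r nbdx
```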

     

    If you wish to restart the service with different parameters, make the desired changes and then:

    1. Start the raio_server on the nbdx server.

    2. Create a host.

    3. Create a device.

    4. Run fio.