HowTo Configure iSER Block Storage for OpenStack Cloud with Mellanox ConnectX-3 Adapters

Version 23

    The focus of this document is to show an example of a storage configuration (SSD and SATA) running over iSER transport in an OpenStack Cinder environment.

    In this example we'll use a Supermicro server with internal SSD drives and JBODs with SATA drives.

    We'll create software RAID for the SSD drives and use an Adaptec RAID controller for the SATA drives.

     

    Note: This post is outdated. New instructions can be found here:

     

     


    Storage Server: BOM

    2U Storage Server:

    • System SSG-2027R-AR24NV
    • 64GB SATADOM for the OS installation
    • 128 GB RAM @ 1600 MHz
    • CPU: 2 x E5-2637 v2 (total of 8 cores @ 3.5GHz each)
    • 24 x 480 GB Intel SSD DC S3500 series drives (SATA 6 Gb/s)
    • 3 x LSI Logic SAS3008 SAS adapters to connect the SSD drives
    • Adaptec RAID 78165 6G SAS/PCIe RAID controller to connect the JBODs
    • Mellanox ConnectX-3 Pro dual-port network card


    JBODs

    • 2 x JBOD (CSE-847E26-RJBOD1)
    • 90 x 2TB 7.2k RPM SATA drives (JDD-A2000-HUS724020ALS64)
    • 4 x 1m external Mini SAS HD cables (CBL-SAST-0548)


     

    Hardware Configuration:

    There are two types of drives in this storage setup:

    • SSD (onboard)
    • SATA (via JBOD externally)


    SSD

    The 24 SSD drives should be installed in the front drive bays of the 2U server.

    The 2 rear bays should be left free.

    SATA

    Each JBOD should be connected to the Adaptec RAID controller with 2 SAS cables: one to the front backplane and one to the rear backplane.


    Once everything is connected, you can boot the server and configure a RAID from the available SATA drives.

    During server boot you'll be prompted to press CTRL+A to enter the Adaptec controller configuration.

    It is recommended to configure the 90 SATA disks as follows:

      • 86 drives configured in RAID-10
      • 4 drives configured as hot spares

    With 86 x 2TB drives in RAID-10 (43 mirrored pairs), the usable capacity is roughly 86 TB, which matches the ~85.9 TB /dev/sdz device shown later.

    Other configuration options are possible based on actual customer needs.

     

    Software

    OS: In this setup we used RHEL 6.4 as the OS on the storage server. The OS will be installed on the 64GB SATADOM so that all SSD and SATA drives can be used for storage.

    OpenStack distribution: In this setup we used Mirantis Fuel 4.1.

     

    Network Connectivity for Mirantis Fuel 4.1

    The server is connected to the following networks:

    • Admin (PXE) Network (eth0 - 1GbE)
    • Public Network (eth1 - 1GbE)
    • Storage Network (eth2 - 40/56GbE) - via ConnectX-3 Pro adapter
    • Management Network (eth3 - 40/56GbE) - via ConnectX-3 Pro adapter

     

    Note: When using Fuel 5.1, the Management and Storage networks can run over the same network port (e.g. eth2).

    OpenStack Installation:

    Install the OpenStack environment via Mirantis Fuel and select this server as a storage node.

    Note: During the installation, no SSD or SATA drive pool will be visible, as the driver is not yet installed on the server.

    The OS will be installed on part of the 64GB SATADOM; the rest will be configured as space for the Cinder volume.

     

    SSH Access to Storage node (via Fuel):

    To access the storage node via SSH, follow these steps:

      1. Log in to the Fuel master via SSH (public network)

      2. Run the fuel node command, which lists all nodes:

    [root@fuel ~]# fuel node

    id | status | name             | cluster | mac               | roles           | pending_roles | online

    ---|--------|------------------|---------|-------------------|-----------------|---------------|-------

    3  | ready  | Untitled (44:58) | 1       | 0e:22:72:85:70:49 | [u'compute']    | []            | True

    1  | ready  | Untitled (43:68) | 1       | 66:db:a9:37:a6:47 | [u'controller'] | []            | True

    2  | ready  | Untitled (43:F8) | 1       | 36:9d:e9:9a:f9:42 | [u'compute']    | []            | True

    6  | ready  | Untitled (6F:1C) | 1       | f6:cb:16:e2:fe:4f | [u'compute']    | []            | True

    5  | ready  | Untitled (44:40) | 1       | ae:a9:05:2e:b1:45 | [u'compute']    | []            | True

    7  | ready  | Untitled (DB:E2) | 1       | 3e:90:cf:fd:bf:42 | [u'cinder']     | []            | True

    4  | ready  | Untitled (44:84) | 1       | 86:9a:85:07:35:40 | [u'compute']    | []            | True

            


      3. Look for the storage node id by the role 'cinder'. Log in from the Fuel master to the storage node with

        the command ssh node-<id>

    [root@fuel ~]# ssh node-7

    [root@node-7 ~]#

            

     

    Install missing packages:

    Now let's install some packages required to complete the installation:

    [root@node-7 ~]# yum install parted mdadm unzip

           

     

    LSI Driver Installation:

    After the OpenStack installation has completed, download and install the LSI HBA driver package. Install the x86_64 driver for CentOS 6.4.

     

    NOTE: If you already have the driver downloaded, you can transfer it to the storage node through the Fuel Master server, as sketched below:

    1. Use scp/WinSCP to transfer the driver package to the Fuel server
    2. Log in to the Fuel Master node
    3. Use scp to transfer the driver package to the storage node
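
    For example, a minimal sketch of steps 1 and 3, assuming the driver package was saved on the Fuel master as /root/lsi_driver.zip (a hypothetical file name) and that the storage node is node-7 as in this setup:

    [root@fuel ~]# scp /root/lsi_driver.zip node-7:/root/
    [root@node-7 ~]# unzip /root/lsi_driver.zip -d /root/lsi_driver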

     

    After the driver installation, reboot the server. After the reboot you should see all 24 drives available. The example below uses the 480.1 GB disk size to filter the SSD drives.

    [root@node-7 ~]# fdisk -l  | grep 480.1 | sort

    Disk /dev/sda: 480.1 GB, 480103981056 bytes

    Disk /dev/sdb: 480.1 GB, 480103981056 bytes

    Disk /dev/sdc: 480.1 GB, 480103981056 bytes

    Disk /dev/sdd: 480.1 GB, 480103981056 bytes

    Disk /dev/sde: 480.1 GB, 480103981056 bytes

    Disk /dev/sdf: 480.1 GB, 480103981056 bytes

    Disk /dev/sdg: 480.1 GB, 480103981056 bytes

    Disk /dev/sdh: 480.1 GB, 480103981056 bytes

    Disk /dev/sdi: 480.1 GB, 480103981056 bytes

    Disk /dev/sdj: 480.1 GB, 480103981056 bytes

    Disk /dev/sdk: 480.1 GB, 480103981056 bytes

    Disk /dev/sdl: 480.1 GB, 480103981056 bytes

    Disk /dev/sdm: 480.1 GB, 480103981056 bytes

    Disk /dev/sdn: 480.1 GB, 480103981056 bytes

    Disk /dev/sdo: 480.1 GB, 480103981056 bytes

    Disk /dev/sdp: 480.1 GB, 480103981056 bytes

    Disk /dev/sdq: 480.1 GB, 480103981056 bytes

    Disk /dev/sdr: 480.1 GB, 480103981056 bytes

    Disk /dev/sds: 480.1 GB, 480103981056 bytes

    Disk /dev/sdt: 480.1 GB, 480103981056 bytes

    Disk /dev/sdu: 480.1 GB, 480103981056 bytes

    Disk /dev/sdv: 480.1 GB, 480103981056 bytes

    Disk /dev/sdw: 480.1 GB, 480103981056 bytes

    Disk /dev/sdx: 480.1 GB, 480103981056 bytes

                        


    Adaptec Driver Installation:

    The driver for the Adaptec RAID controller is part of the Mirantis Fuel operating system and is available after the OpenStack installation has completed. No additional steps are required.
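
    If you want to double-check that the controller was detected, you can verify that its kernel module is loaded. This is a hedged check, assuming the ASR-78165 is handled by the in-kernel aacraid driver:

    [root@node-7 ~]# lsmod | grep aacraid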

     

    Block Diagram - Storage Node

    The following diagram shows an overview of the storage node disk configuration (SSD + SATA) designed for OpenStack Cinder.

    [Block diagram: storage node disk configuration (SSD + SATA) for OpenStack Cinder]

     

    SSD Drives Configuration:

    For the OpenStack Cinder volume, all SSD drives will eventually be grouped together into one volume group managed on top of a RAID-10 MD (software RAID) device. The Cinder volume will see only one device, /dev/md0, while the RAID manages all 24 physical disks, /dev/sda through /dev/sdx (see the figure above).

    To set up the OpenStack Cinder SSD volume, follow these steps:

     

      1. Build a GPT partition table on every SSD drive and verify that it is set properly. Below is an example for one drive, followed by a sketch that labels all drives in one pass.

    [root@node-7 ~]# parted /dev/sda

         GNU Parted 2.1

         Using /dev/sda

         Welcome to GNU Parted! Type 'help' to view a list of commands.

    (parted) mklabel gpt

         Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?

         Yes/No? Y

    (parted) p

         Model: ATA INTEL SSDSC2BB48 (scsi)

         Disk /dev/sda: 480GB

         Sector size (logical/physical): 512B/512B

         Partition Table: gpt
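
    To label all 24 SSD drives in one pass instead of repeating the interactive session per drive, parted can be run in script mode from a short loop. This is a sketch, assuming the SSDs enumerate as /dev/sda through /dev/sdx as shown by the fdisk output above:

    [root@node-7 ~]# for d in /dev/sd{a..x}; do parted -s "$d" mklabel gpt; done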

           

     

      2. Create a RAID-10 /dev/md0 device from all 24 SSD drives

    [root@node-7 ~]# mdadm -v --create /dev/md0 --name=0 --level=raid10 --raid-devices=24 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx
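
    The same command can be written more compactly with bash brace expansion, since /dev/sd{a..x} expands to the 24 device names listed above:

    [root@node-7 ~]# mdadm -v --create /dev/md0 --name=0 --level=raid10 --raid-devices=24 /dev/sd{a..x}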

           

     

     

      3. Create the /dev/md0 configuration file

      [root@node-7 ~]# mkdir /etc/mdadm

      [root@node-7 ~]# mdadm -Db /dev/md0 >> /etc/mdadm/mdadm.conf

                                    

     

      4. Verify that the configuration succeeded.

    [root@node-7 ~]# cat /etc/mdadm/mdadm.conf

    ARRAY /dev/md0 metadata=1.2 name=node-8.domain.tld:0 UUID=c7af1a9a:800fe06e:f022a2e6:23e86c16

                                    

     

      5. Verify the status of the RAID-10 build process.

    [root@node-7 ~]# cat /proc/mdstat

        md0 : active raid10 sdx[23] sdw[22] sdv[21] sdu[20] sdt[19] sds[18] sdr[17] sdq[16]
         sdp[15] sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6]

         sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]

         5624641536 blocks super 1.2 512K chunks 2 near-copies [24/24] [UUUUUUUUUUUUUUUUUUUUUUUU]

         [>....................]  resync =  0.2% (14248576/5624641536) finish=467.5min speed=200001K/sec

                                    

    NOTE: The /dev/md0 device is fully operational while the resync is still running, but you will encounter degraded performance until it completes.
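
    If you want more detail than /proc/mdstat provides, mdadm can report the array state directly; while the resync is running the State line typically shows "resyncing":

    [root@node-7 ~]# mdadm --detail /dev/md0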

      6. Create an LVM physical volume (PV) from /dev/md0

    [root@node-7 ~]# pvcreate /dev/md0

                                    

     

      7. Create a volume group (VG) for the Cinder SSD storage

    [root@node-7 ~]# vgcreate cinder-volumes-ssd /dev/md0

                                    

     

    At this stage, the cinder-volumes-ssd volume group has been created and mapped to the /dev/md0 device (see the block diagram above).

     

    SATA Drives Configuration:

    Similar to the SSD drives, we need to aggregate all SATA drives into one Cinder volume group called cinder-volumes-sata.

    The SATA drives are already configured in RAID-10 by the Adaptec controller.

    In this example the resulting RAID device appears as /dev/sdz.

     

    To set up the OpenStack Cinder SATA volume, follow these steps:

      1. Create a GPT partition table on /dev/sdz

    [root@node-7 ~]# parted /dev/sdz

         GNU Parted 2.1

         Using /dev/sdz

         Welcome to GNU Parted! Type 'help' to view a list of commands.

    (parted) mklabel gpt

         Warning: The existing disk label on /dev/sdz will be destroyed and all data on this disk will be lost. Do you want to continue?

         Yes/No? Y

    (parted) print

             Model: ASR7816 Adaptec (scsi)

          Disk /dev/sdz: 85.9TB

          Sector size (logical/physical): 512B/512B

          Partition Table: gpt

          Number  Start   End     Size    File system  Name  Flags

     

    (parted) quit

                                  

      2. Create a physical volume (PV)

    [root@node-7 ~]# pvcreate /dev/sdz

                                  

      3. Create a volume group (VG) for the Cinder SATA storage

    [root@node-7 ~]# vgcreate cinder-volumes-sata /dev/sdz

                                  

    At this stage, the cinder-volumes-sata volume group has been created and mapped to the /dev/sdz device (see the block diagram above).
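
    As a quick sanity check, both volume groups should now be listed by the LVM tools (alongside the default cinder-volumes group that Fuel may have created on the SATADOM):

    [root@node-7 ~]# vgs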

     

    Storage node - OpenStack Cinder Configuration

    At this point, we have two volume groups (cinder-volumes-sata, cinder-volumes-ssd) ready to be attached to Cinder.

    Follow these steps to finalize the OpenStack Cinder configuration on the storage node.

    We will configure two backend types on the storage node in iSER mode.

     

      1. Modify /etc/cinder/cinder.conf as shown below. Add the records to the bottom of the file if they do not already exist.

    #  a list of backends that will be served by this machine

    enabled_backends=cinder-volumes-sata,cinder-volumes-ssd

     

    [cinder-volumes-sata]

    volume_group=cinder-volumes-sata

    volume_driver=cinder.volume.drivers.lvm.LVMISERDriver

    volume_backend_name=iser-sata

     

    [cinder-volumes-ssd]

    volume_group=cinder-volumes-ssd

    volume_driver=cinder.volume.drivers.lvm.LVMISERDriver

    volume_backend_name=iser-ssd

                                  

     

      2. Configure IP addresses. Modify /etc/cinder/cinder.conf

              Ensure that my_ip, iscsi_ip_address and iser_ip_address are set to the IP of the storage network interface.
              In this example it is 192.168.1.9

    my_ip=192.168.1.9

    iscsi_ip_address=192.168.1.9

    iser_ip_address=192.168.1.9
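
    To double-check which address is assigned to the storage interface (eth2 in this setup), you can run, for example:

    [root@node-7 ~]# ip addr show eth2 | grep "inet "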

                                  

     

      3. Restart TGT service

    [root@node-7 ~]# service tgtd restart

    Stopping SCSI target daemon:                               [  OK  ]

    Starting SCSI target daemon:                               [  OK  ]

                                  

     

      4. Restart Cinder service

    [root@node-7 ~]# service openstack-cinder-volume restart

    Stopping openstack-cinder-volume:                               [  OK  ]

    Starting openstack-cinder-volume:                               [  OK  ]
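
    After the restart it is worth confirming that both backends came up without errors in the Cinder volume log. The path below is the usual location for openstack-cinder on RHEL 6 and may differ in other setups:

    [root@node-7 ~]# grep -i error /var/log/cinder/volume.log | tail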

                                  

     

     

     

    Controller node - OpenStack Cinder Configuration

      1. SSH Access to Controller node (via Fuel):

          a. Login to the Fuel master via SSH (public network)

          b. Run the fuel node command, which lists all nodes:

    [root@fuel ~]# fuel node

    id | status | name             | cluster | mac               | roles           | pending_roles | online

    ---|--------|------------------|---------|-------------------|-----------------|---------------|-------

    3  | ready  | Untitled (44:58) | 1       | 0e:22:72:85:70:49 | [u'compute']    | []            | True

    1  | ready  | Untitled (43:68) | 1       | 66:db:a9:37:a6:47 | [u'controller'] | []            | True

    2  | ready  | Untitled (43:F8) | 1       | 36:9d:e9:9a:f9:42 | [u'compute']    | []            | True

    6  | ready  | Untitled (6F:1C) | 1       | f6:cb:16:e2:fe:4f | [u'compute']    | []            | True

    5  | ready  | Untitled (44:40) | 1       | ae:a9:05:2e:b1:45 | [u'compute']    | []            | True

    7  | ready  | Untitled (DB:E2) | 1       | 3e:90:cf:fd:bf:42 | [u'cinder']     | []            | True

    4  | ready  | Untitled (44:84) | 1       | 86:9a:85:07:35:40 | [u'compute']    | []            | True

           

     


          c. Look for the controller node id by the role 'controller'.

              Log in from the Fuel master to the controller node with the command ssh node-<id>

    [root@fuel ~]# ssh node-1

    [root@node-1 ~]#

           

     

      2. Create Cinder volume types

    [root@node-1 ~]# source ~/openrc

    [root@node-1 ~]# cinder type-create ssd

    [root@node-1 ~]# cinder type-create sata

    [root@node-1 ~]# cinder type-list

    +--------------------------------------+--------+

    |                   ID                 |  Name  |
    +--------------------------------------+--------+

    | 513e81f8-1370-405b-91b2-1cc8d61f410e |  ssd   |

    | cd3fea7c-a842-4d8f-a345-2b84c3c86391 |  sata  |

    +--------------------------------------+--------+

                                 

     

      3. Connect the Cinder volume types to the proper Cinder backends configured on the storage node

    [root@node-1 ~]# cinder type-key sata set volume_backend_name=iser-sata

    [root@node-1 ~]# cinder type-key ssd set volume_backend_name=iser-ssd

    [root@node-1 ~]# cinder extra-specs-list

    +--------------------------------------+---------+----------------------------------------+

    |           ID                         |   Name  |              extra_specs               |

    +--------------------------------------+---------+----------------------------------------+

    | 4d9c67e9-b636-4761-b11b-6d82e489ebb7 | vms_vol |                   {}                   |

    | 513e81f8-1370-405b-91b2-1cc8d61f410e |   ssd   | {u'volume_backend_name': u'iser-ssd'}  | 

    | cd3fea7c-a842-4d8f-a345-2b84c3c86391 |   sata  | {u'volume_backend_name': u'iser-sata'} |

    +--------------------------------------+---------+----------------------------------------+

                                 

     

      At this point, you can start using the two volume types, ssd and sata (backed by the iser-ssd and iser-sata backends), via the OpenStack GUI.

     

    Controller node - Cinder Volume Verification:

    We'll verify that volumes of the ssd and sata types can be created.

      1. Connect to the controller node. See SSH Access to Controller node (via Fuel) above.

      2. Verify that the ssd and sata cinder-volume services are enabled and up

    [root@node-1 ~]# source ~/openrc

    [root@node-1 ~]# cinder service-list

    +------------------+---------------------------------------+------+---------+-------+-------------------------+
    |      Binary      |                  Host                 | Zone |  Status | State |        Updated_at       |
    +------------------+---------------------------------------+------+---------+-------+-------------------------+
    | cinder-scheduler | node-1.domain.tld                     | nova | enabled |   up  | 2014-06-13:59:41.000000 |
    |  cinder-volume   | node-1.domain.tld                     | nova | enabled |   up  | 2014-06-13:59:41.000000 |
    |  cinder-volume   | node-7.domain.tld                     | nova | enabled |  down | 2014-06-13:34:02.000000 |
    |  cinder-volume   | node-7.domain.tld@cinder-volumes-sata | nova | enabled |   up  | 2014-06-13:59:41.000000 |
    |  cinder-volume   | node-7.domain.tld@cinder-volumes-ssd  | nova | enabled |   up  | 2014-06-13:59:41.000000 |
    +------------------+---------------------------------------+------+---------+-------+-------------------------+

    Note: The plain node-7.domain.tld cinder-volume entry is reported as down; this is expected once that host serves its volumes through the two per-backend services listed below it.

     

      3. Create SATA and SSD volumes

    [root@node-1 ~]# cinder create --volume_type ssd --display_name test.ssd 1

    +---------------------+--------------------------------------+

    |       Property      |                Value                 |

    +---------------------+--------------------------------------+

    |     attachments     |                  []                  |

    |  availability_zone  |                 nova                 |

    |       bootable      |                false                 |

    |      created_at     |     2014-03-31T11:45:46.170118       |

    | display_description |                 None                 |

    | display_name        |               test.ssd               |

    |          id         | 724de1ce-50c9-4385-be9e-4021bdf9f32f |

    |       metadata      |                  {}                  |

    |         size        |                  1                   |

    |     snapshot_id     |                 None                 |

    |     source_volid    |                 None                 |

    |        status       |               creating               |

    |     volume_type     |                 ssd                  |

    +---------------------+--------------------------------------+

    [root@node-1 ~]# cinder create --volume_type sata --display_name test.sata 1

    +---------------------+--------------------------------------+

    |       Property      |                Value                 |

    +---------------------+--------------------------------------+

    |     attachments     |                  []                  |

    |  availability_zone  |                 nova                 |

    |       bootable      |                false                 |

    |      created_at     |     2014-03-31T11:45:57.386321       |

    | display_description |                 None                 |

    |     display_name    |              test.sata               |

    |          id         | caaeab34-5672-4bdf-ba84-ab4ef1669dca |

    |       metadata      |                  {}                  |

    |         size        |                  1                   |

    |     snapshot_id     |                 None                 |

    |     source_volid    |                 None                 |

    |        status       |               creating               |

    |     volume_type     |                 sata                 |

    +---------------------+--------------------------------------+

                                

      4. Check the volume list

    [root@node-1 ~]# cinder list

    +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
    |                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
    +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
    | 724de1ce-50c9-4385-be9e-4021bdf9f32f | available |   test.ssd   |  1   |     ssd     |  false   |             |
    | caaeab34-5672-4bdf-ba84-ab4ef1669dca | available |  test.sata   |  1   |     sata    |  false   |             |
    +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

                                

      5. Delete the volumes

    [root@node-1 ~]# cinder delete test.ssd

    [root@node-1 ~]# cinder delete test.sata

                                

     

    Known Issues

    Issue #1

    Description: The pvcreate command may fail with a "Device <device> not found (or ignored by filtering)." message.

    For example:

    # pvcreate /dev/sdz

    Device /dev/sdz not found (or ignored by filtering).

    Workaround:

    • Use fdisk to ensure that no partitions exist on the physical drive. Delete any partitions found (see the sketch below).
    • Try pvcreate again.
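
    A minimal sketch of such an fdisk session, assuming the affected device is /dev/sdz as in the example above:

    [root@node-7 ~]# fdisk /dev/sdz
    Command (m for help): d        <- delete a partition (repeat for each partition present)
    Command (m for help): w        <- write the changes and exit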