Configuring the disk drives for services

Set up the drives that are required for your Cloud App Management server components.

IBM® Cloud App Management requires 5 persistent volumes. For performance and scalability, we recommend using local storage. The steps below provide examples of how to partition and format drives for use. It is recommended to use a separate drive for Cassandra, because that drive handles the majority of the disk I/O and requires separate tuning to optimize I/O (see Disk Performance Optimization For Cassandra). In this example, the system has been provisioned with a 2 TB drive (/dev/sdc) for Cassandra and a 500 GB drive (/dev/sdd) for the volumes of the other 4 services (Zookeeper, Kafka, CouchDB, and Datalayer).

Procedure

Complete these steps to format the blank drives using logical volumes:

  1. Identify the disks: fdisk -l
    The output in this example shows that the system has been provisioned with a 2 TB /dev/sdc for Cassandra and a 500 GB /dev/sdd for the other services:
    fdisk -l
    ...
    Disk /dev/sdc: 2148.6 GB, 2148557389824 bytes, 4196401152 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    Disk /dev/sdd: 536.9 GB, 536870912000 bytes, 1048576000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
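    Optionally, before creating any volumes, you can confirm that the two drives are blank (no existing partitions or filesystem signatures). The device names here are from this example; substitute your own:
    lsblk -f /dev/sdc /dev/sdd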
  2. Create a physical volume for each drive: pvcreate path_to_new_volume -f
    In this example, the commands initialize the two drives /dev/sdc and /dev/sdd as LVM physical volumes:
    pvcreate /dev/sdc -f
      Physical volume "/dev/sdc" successfully created.
    pvcreate /dev/sdd -f
      Physical volume "/dev/sdd" successfully created.
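    Optionally, verify the new physical volumes with the pvs command:
    pvs /dev/sdc /dev/sdd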
  3. Create a volume group for each physical volume: vgcreate vg_name pv_path
    In this example, the command creates volume groups vg_sdc for the physical volume /dev/sdc and vg_sdd for the physical volume /dev/sdd:
    vgcreate vg_sdc /dev/sdc
      Volume group "vg_sdc" successfully created
    vgcreate vg_sdd /dev/sdd
      Volume group "vg_sdd" successfully created
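    Optionally, verify the new volume groups and their free space with the vgs command:
    vgs vg_sdc vg_sdd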
  4. Create a logical volume for Cassandra on the vg_sdc volume group: lvcreate --name lv_name --size 1999G vg_name -y --readahead 8
    In this example, the command creates a logical volume named lv_cassandra, sized at 1999G, in the volume group named vg_sdc. Note that specifying a size of 2000G could result in the error Volume group "vg_sdc" has insufficient free space (511999 extents): 512000 required.
    lvcreate --name lv_cassandra  --size 1999G vg_sdc -y --readahead 8
      Logical volume "lv_cassandra" created.
    Create the other logical volumes for the other 4 services using the lvcreate command on the volume group vg_sdd:
    lvcreate --name lv_kafka      --size 200G  vg_sdd -y
    lvcreate --name lv_zookeeper  --size 1G    vg_sdd -y
    lvcreate --name lv_couchdb    --size 50G   vg_sdd -y
    lvcreate --name lv_datalayer  --size 5G    vg_sdd -y
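    Alternatively, instead of calculating a size such as 1999G that fits the drive, you can allocate all remaining free space in a volume group by extent percentage; for example, for the Cassandra volume:
    lvcreate --name lv_cassandra -l 100%FREE vg_sdc -y --readahead 8
    Optionally, verify the logical volumes with the lvs command:
    lvs vg_sdc vg_sdd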
  5. Format the new logical volumes using the XFS format:
    In this example, the command formats the lv_cassandra logical volume in the vg_sdc volume group with an XFS filesystem labeled cassandra:
    mkfs.xfs -L cassandra /dev/vg_sdc/lv_cassandra
    meta-data=/dev/vg_sdc/lv_cassandra isize=512    agcount=4, agsize=131006464 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0, sparse=0
    data     =                       bsize=4096   blocks=524025856, imaxpct=5
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=255872, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    Format the remaining logical volumes for each of the services:
    mkfs.xfs -L kafka     /dev/vg_sdd/lv_kafka
    mkfs.xfs -L zk        /dev/vg_sdd/lv_zookeeper
    mkfs.xfs -L couchdb   /dev/vg_sdd/lv_couchdb
    mkfs.xfs -L datal     /dev/vg_sdd/lv_datalayer
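    Optionally, confirm the filesystem type and label on each new volume with the blkid command; for example:
    blkid /dev/vg_sdc/lv_cassandra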
  6. Create the mount point directories for each filesystem:
    mkdir -p /k8s/data/cassandra
    mkdir -p /k8s/data/kafka
    mkdir -p /k8s/data/zookeeper
    mkdir -p /k8s/data/datalayer
    mkdir -p /k8s/data/couchdb
  7. Add entries for the new filesystems to /etc/fstab:
    echo "/dev/vg_sdc/lv_cassandra  /k8s/data/cassandra     xfs    defaults        0 0" >> /etc/fstab
    echo "/dev/vg_sdd/lv_kafka      /k8s/data/kafka         xfs    defaults        0 0" >> /etc/fstab
    echo "/dev/vg_sdd/lv_zookeeper  /k8s/data/zookeeper     xfs    defaults        0 0" >> /etc/fstab
    echo "/dev/vg_sdd/lv_datalayer  /k8s/data/datalayer     xfs    defaults        0 0" >> /etc/fstab
    echo "/dev/vg_sdd/lv_couchdb    /k8s/data/couchdb       xfs    defaults        0 0" >> /etc/fstab
  8. Mount the new filesystems on the new directories:
    mount /k8s/data/cassandra
    mount /k8s/data/kafka
    mount /k8s/data/zookeeper
    mount /k8s/data/datalayer
    mount /k8s/data/couchdb
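    Optionally, confirm that each filesystem is mounted with the expected size:
    df -h /k8s/data/cassandra /k8s/data/kafka /k8s/data/zookeeper /k8s/data/datalayer /k8s/data/couchdb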
  9. Verify the mount point and readahead settings with the lsblk command: lsblk --output NAME,KNAME,TYPE,MAJ:MIN,FSTYPE,SIZE,RA,MOUNTPOINT,LABEL
    In this example, the characteristics of the mount points are displayed:
    lsblk --output NAME,KNAME,TYPE,MAJ:MIN,FSTYPE,SIZE,RA,MOUNTPOINT,LABEL
    NAME                  KNAME TYPE MAJ:MIN FSTYPE       SIZE   RA MOUNTPOINT          LABEL
    fd0                   fd0   disk   2:0                  4K  128
    sda                   sda   disk   8:0                 80G 4096
    ├─sda1                sda1  part   8:1   xfs            1G 4096 /boot
    └─sda2                sda2  part   8:2   LVM2_member   79G 4096
      ├─rhel-root         dm-0  lvm  253:0   xfs           75G 4096 /
      └─rhel-swap         dm-1  lvm  253:1   swap           4G 4096 [SWAP]
    sdb                   sdb   disk   8:16  xfs          100G 4096 /docker
    sdc                   sdc   disk   8:32  LVM2_member    2T  128
    └─vg_sdc-lv_cassandra dm-2  lvm  253:2   xfs            2T    4 /k8s/data/cassandra cassandra
    sdd                   sdd   disk   8:48  LVM2_member  500G  128
    ├─vg_sdd-lv_kafka     dm-3  lvm  253:3   xfs          200G  128 /k8s/data/kafka     kafka
    ├─vg_sdd-lv_zookeeper dm-4  lvm  253:4   xfs            1G  128 /k8s/data/zookeeper zk
    ├─vg_sdd-lv_couchdb   dm-5  lvm  253:5   xfs           50G  128 /k8s/data/couchdb   couchdb
    └─vg_sdd-lv_datalayer dm-6  lvm  253:6   xfs            5G  128 /k8s/data/datalayer datal
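    In this output, the RA value of 4 for the Cassandra volume corresponds to the --readahead 8 setting used earlier (lsblk reports readahead in kilobytes, while lvcreate --readahead is specified in 512-byte sectors). You can also query the readahead of a single volume directly with the blockdev command:
    blockdev --getra /dev/vg_sdc/lv_cassandra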
    
  10. To increase the size of a logical volume later, use the lvextend command, for example:
    lvextend -L 20G /dev/mapper/vg_sdd-lv_couchdb
  11. After extending the logical volume, grow the XFS filesystem so that it uses the new space, for example:
    xfs_growfs /k8s/data/couchdb
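    Alternatively, the two operations can be combined by passing the --resizefs option to lvextend, which grows the filesystem after extending the volume; for example:
    lvextend -L 20G --resizefs /dev/mapper/vg_sdd-lv_couchdb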
    Note: The capacity value in the Kubernetes persistent volume definitions is not a hard limit. The persistent volumes use whatever storage is available to them inside the backing directory. However, after increasing a logical volume's size, it is recommended that you update the persistent volume's capacity to match, for consistency.
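    For example, assuming the persistent volume backing CouchDB is named couchdb (the actual name depends on how the persistent volumes were defined in your cluster), its capacity could be updated with a command similar to:
    kubectl patch pv couchdb -p '{"spec":{"capacity":{"storage":"20Gi"}}}'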