Configuring the disk drives for services
Set up the drives that are required for your Cloud App Management server components. This example provides the steps to format and partition the drives.
About this task
IBM® Cloud App Management requires six persistent volumes. A separate drive is recommended for Cassandra. The Cassandra drive handles most of the disk IO, and requires separate tuning to optimize IO. For more information, see Optimizing disk performance for Cassandra.
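Because the procedure below lowers readahead on the Cassandra volume (with the lvcreate --readahead option), it can help to note the current readahead values first. A minimal check, assuming the example device name /dev/sdc used in this task:

```shell
# Show all block devices with their current readahead (RA column,
# in 512-byte sectors); device names depend on your provisioning.
lsblk --output NAME,TYPE,SIZE,RA

# Query a single device's readahead directly (in 512-byte sectors);
# /dev/sdc is the example Cassandra drive from this task.
blockdev --getra /dev/sdc
```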
In this example, the system is provisioned with:
- A 2 TB drive, /dev/sdc, for Cassandra
- A 500 GB drive, /dev/sdd, for the other five services (ZooKeeper, Kafka, CouchDB, Datalayer, and Elasticsearch)
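Before formatting, you can confirm that the two drives are present and carry no existing partitions or file-system signatures. A quick read-only check, assuming the example device names /dev/sdc and /dev/sdd:

```shell
# List only the two target drives; blank drives show no child
# partitions and an empty FSTYPE column.
lsblk /dev/sdc /dev/sdd --output NAME,FSTYPE,SIZE,MOUNTPOINT

# Dry run: report any existing signatures without erasing them
# (-n / --no-act keeps wipefs read-only).
wipefs -n /dev/sdc /dev/sdd
```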
Procedure
Complete these steps to format the blank drives using logical volumes:
- Identify the disks:

  ```
  fdisk -l
  ```

  The output in this example shows that the system is provisioned with a 2000 GB /dev/sdc for Cassandra and a 500 GB /dev/sdd for the other services:

  ```
  fdisk -l
  ...
  Disk /dev/sdc: 2148.6 GB, 2148557389824 bytes, 4196401152 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes

  Disk /dev/sdd: 536.9 GB, 536870912000 bytes, 1048576000 sectors
  Units = sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  ```

- Create physical volumes for each drive:
  ```
  pvcreate path_to_new_volume -f
  ```

  In this example, the commands create two new physical volumes on the devices /dev/sdc and /dev/sdd:

  ```
  pvcreate /dev/sdc -f
    Physical volume "/dev/sdc" successfully created.
  pvcreate /dev/sdd -f
    Physical volume "/dev/sdd" successfully created.
  ```

- Create a volume group for each physical volume:
  ```
  vgcreate vg_name pv_path
  ```

  In this example, the commands create volume group vg_sdc for the physical volume /dev/sdc and volume group vg_sdd for the physical volume /dev/sdd:

  ```
  vgcreate vg_sdc /dev/sdc
    Volume group "vg_sdc" successfully created
  vgcreate vg_sdd /dev/sdd
    Volume group "vg_sdd" successfully created
  ```

- Create a logical volume for Cassandra on volume group vg_sdc. Note the volume group used:
  ```
  lvcreate --name lv_name --size 1999G vg_name -y --readahead 8
  ```

  In this example, the command creates a logical volume named lv_cassandra, sized at 1999G, in the volume group named vg_sdc. Using size 2000G might result in the error: Volume group "vg_sdc" has insufficient free space (511999 extents): 512000 required.

  ```
  lvcreate --name lv_cassandra --size 1999G vg_sdc -y --readahead 8
    Logical volume "lv_cassandra" created.
  ```

  Create the logical volumes for the other five services using the lvcreate command on the volume group vg_sdd:

  ```
  lvcreate --name lv_kafka --size 200G vg_sdd -y
  lvcreate --name lv_zookeeper --size 1G vg_sdd -y
  lvcreate --name lv_couchdb --size 50G vg_sdd -y
  lvcreate --name lv_datalayer --size 5G vg_sdd -y
  lvcreate --name lv_elasticsearch --size 1G vg_sdd -y
  ```

- Format the new logical volumes using the XFS format. In this example, the command formats the lv_cassandra logical volume in the vg_sdc volume group in XFS format:

  ```
  mkfs.xfs -L cassandra /dev/vg_sdc/lv_cassandra
  meta-data=/dev/vg_sdc/lv_cassandra isize=512    agcount=4, agsize=131006464 blks
           =                         sectsz=512   attr=2, projid32bit=1
           =                         crc=1        finobt=0, sparse=0
  data     =                         bsize=4096   blocks=524025856, imaxpct=5
           =                         sunit=0      swidth=0 blks
  naming   =version 2                bsize=4096   ascii-ci=0 ftype=1
  log      =internal log             bsize=4096   blocks=255872, version=2
           =                         sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                     extsz=4096   blocks=0, rtextents=0
  ```

  Format the remaining logical volumes for each of the services:

  ```
  mkfs.xfs -L kafka /dev/vg_sdd/lv_kafka
  mkfs.xfs -L zk /dev/vg_sdd/lv_zookeeper
  mkfs.xfs -L couchdb /dev/vg_sdd/lv_couchdb
  mkfs.xfs -L datal /dev/vg_sdd/lv_datalayer
  mkfs.xfs -L elastic /dev/vg_sdd/lv_elasticsearch
  ```

- Create the directories for each file system:
  ```
  mkdir -p /k8s/data/cassandra
  mkdir -p /k8s/data/kafka
  mkdir -p /k8s/data/zookeeper
  mkdir -p /k8s/data/datalayer
  mkdir -p /k8s/data/couchdb
  mkdir -p /k8s/data/elasticsearch
  ```

- Add the new file system directories to /etc/fstab:

  ```
  echo "/dev/vg_sdc/lv_cassandra /k8s/data/cassandra xfs defaults 0 0" >> /etc/fstab
  echo "/dev/vg_sdd/lv_kafka /k8s/data/kafka xfs defaults 0 0" >> /etc/fstab
  echo "/dev/vg_sdd/lv_zookeeper /k8s/data/zookeeper xfs defaults 0 0" >> /etc/fstab
  echo "/dev/vg_sdd/lv_datalayer /k8s/data/datalayer xfs defaults 0 0" >> /etc/fstab
  echo "/dev/vg_sdd/lv_couchdb /k8s/data/couchdb xfs defaults 0 0" >> /etc/fstab
  echo "/dev/vg_sdd/lv_elasticsearch /k8s/data/elasticsearch xfs defaults 0 0" >> /etc/fstab
  ```

- Mount the new file systems on the new directories:
  ```
  mount /k8s/data/cassandra
  mount /k8s/data/kafka
  mount /k8s/data/zookeeper
  mount /k8s/data/datalayer
  mount /k8s/data/couchdb
  mount /k8s/data/elasticsearch
  ```

- Verify the mount point and readahead settings with the lsblk command:

  ```
  lsblk --output NAME,KNAME,TYPE,MAJ:MIN,FSTYPE,SIZE,RA,MOUNTPOINT,LABEL
  ```

  In this example, the characteristics of the mount points are displayed:

  ```
  lsblk --output NAME,KNAME,TYPE,MAJ:MIN,FSTYPE,SIZE,RA,MOUNTPOINT,LABEL
  NAME                        KNAME TYPE MAJ:MIN FSTYPE      SIZE RA   MOUNTPOINT              LABEL
  fd0                         fd0   disk 2:0                 4K   128
  sda                         sda   disk 8:0                 80G  4096
  ├─sda1                      sda1  part 8:1     xfs         1G   4096 /boot
  └─sda2                      sda2  part 8:2     LVM2_member 79G  4096
    ├─rhel-root               dm-0  lvm  253:0   xfs         75G  4096 /
    └─rhel-swap               dm-1  lvm  253:1   swap        4G   4096 [SWAP]
  sdb                         sdb   disk 8:16    xfs         100G 4096 /docker
  sdc                         sdc   disk 8:32    LVM2_member 2T   128
  └─vg_sdc-lv_cassandra       dm-2  lvm  253:2   xfs         2T   4    /k8s/data/cassandra     cassandra
  sdd                         sdd   disk 8:48    LVM2_member 500G 128
  ├─vg_sdd-lv_kafka           dm-3  lvm  253:3   xfs         200G 128  /k8s/data/kafka         kafka
  ├─vg_sdd-lv_zookeeper       dm-4  lvm  253:4   xfs         1G   128  /k8s/data/zookeeper     zk
  ├─vg_sdd-lv_couchdb         dm-5  lvm  253:5   xfs         50G  128  /k8s/data/couchdb       couchdb
  ├─vg_sdd-lv_datalayer       dm-6  lvm  253:6   xfs         5G   128  /k8s/data/datalayer     datal
  └─vg_sdd-lv_elasticsearch   dm-7  lvm  253:7   xfs         1G   128  /k8s/data/elasticsearch elastic
  ```

- To increase the size of a volume, use the lvextend command, for example:
  ```
  lvextend -L 20G /dev/mapper/vg_sdd-lv_couchdb
  ```

- After you extend the volume, resize the XFS file system, for example:

  ```
  xfs_growfs /k8s/data/couchdb
  ```

  Note: The capacity in a Kubernetes persistent volume definition is not a hard limit; a persistent volume uses whatever storage is available inside its directory. However, after you increase the capacity of a volume, it is recommended that you update the persistent volume capacity to match, for consistency.
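As a sketch of that follow-up, you might first verify the grown file system and then align the persistent volume's declared capacity. The persistent volume name couchdb-pv below is a placeholder for this example; substitute the name reported by kubectl get pv:

```shell
# Confirm the file system now reports the extended size
df -h /k8s/data/couchdb

# Update the declared capacity on the matching persistent volume.
# "couchdb-pv" is a placeholder name; check 'kubectl get pv' for yours.
kubectl patch pv couchdb-pv -p '{"spec":{"capacity":{"storage":"20Gi"}}}'
```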