Preparing your nodes for a reinstallation of GlusterFS or IBM Cloud Private
If you configured GlusterFS in your cluster, and you want to reinstall GlusterFS or IBM® Cloud Private on the same cluster, you must first prepare your nodes for the reinstallation.
Delete the Helm chart
- Ensure that Helm CLI is set up. For more information, see Installing the Helm CLI (helm).
- Get the release name.
helm list --tls | grep gluster
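The release name appears in the first column of the output. For example, the output might resemble the following line, where gluster-fs is a hypothetical release name:
gluster-fs   1   Tue Oct 23 14:31:08 2018   DEPLOYED   glusterfs-1.0.0   kube-system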
- Delete the chart.
helm delete --purge <release_name> --tls
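For example, if the previous step returned the hypothetical release name gluster-fs, the command is:
helm delete --purge gluster-fs --tls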
Delete the backup Heketi database
- Ensure that kubectl CLI is set up. For more information, see Accessing your cluster from the Kubernetes CLI (kubectl).
- Get the Heketi secret.
kubectl -n kube-system get secret | grep heketi
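For example, the output might resemble the following line, where heketi-db-backup is a hypothetical secret name that can differ in your cluster:
heketi-db-backup   Opaque   1   2d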
- Delete the backup Heketi database.
kubectl -n kube-system delete secret <heketi-db-backup-name>
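For example, with the hypothetical secret name from the previous step:
kubectl -n kube-system delete secret heketi-db-backup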
Remove the configuration directories
Remove the Heketi and Gluster daemon configuration directories from each storage node that is used for reinstallation. Run these commands:
rm -rf /var/lib/heketi
rm -rf /var/lib/glusterd
rm -rf /var/log/glusterfs
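If your cluster has several storage nodes, you can run the same cleanup from a workstation that has SSH access to all of them. The following sketch assumes passwordless root SSH and hypothetical host names; substitute the addresses of your storage nodes:
for node in storage-node-1 storage-node-2 storage-node-3; do
  ssh root@${node} 'rm -rf /var/lib/heketi /var/lib/glusterd /var/log/glusterfs'
done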
Prepare the disks to be used for GlusterFS installation
You can reuse the disks or add new disks for a reinstallation of GlusterFS.
- If you are adding new disks, see Hardware requirements for information about the disk requirements. After you add the disks, you must restart the nodes so that the system identifies the disks.
- If you are reusing the disks, complete these steps:
Note: The disk cleanup process might not work in some environments. If that happens, you might need to use fresh disks.
- Back up the data on the disks that were used in an earlier installation. The steps that follow might cause a loss of data on the old disks.
- Run these commands to remove the GlusterFS volumes:
- Remove the logical volumes and the volume group.
lvscan | grep 'vg_' | awk '{print $2}' | xargs -n 1 lvremove -y
vgscan | grep 'vg_' | awk '{print $4}' | xargs -n 1 vgremove -y
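As an optional check, you can rerun the scans; neither command should produce any output if all GlusterFS logical volumes and volume groups were removed:
lvscan | grep 'vg_'
vgscan | grep 'vg_'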
- Scan for the physical volumes. Make a note of the physical volume name, which is the string that follows PV.
pvscan
Following is a sample output of the command:
PV /dev/sdb    VG vg_7a08667f373f2d6fe9977de3b29a754e   lvm2 [249.87 GiB / 249.87 GiB free]
PV /dev/sda5   VG ubuntuicp-vg                          lvm2 [299.52 GiB / 8.00 MiB free]
Total: 2 [549.39 GiB] / in use: 2 [549.39 GiB] / in no VG: 0 [0   ]
For example, in the sample output, /dev/sdb is the physical volume name.
- Remove all physical volumes.
pvremove <physical_volume_name>
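For example, using the physical volume name from the sample output:
pvremove /dev/sdb
Remove only the physical volumes that belong to the GlusterFS volume groups (the vg_ prefix). In the sample output, /dev/sda5 belongs to the operating system volume group ubuntuicp-vg and must not be removed.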
- Get the device name for the disk that is used for GlusterFS. The device name is the string that follows Disk.
fdisk -l
Following is a sample output of the command:
Disk /dev/sda: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfa3b0bf6

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048    999423    997376   487M 83 Linux
/dev/sda2       1001470 629143551 628142082 299.5G  5 Extended
/dev/sda5       1001472 629143551 628142080 299.5G 8e Linux LVM

Disk /dev/sdb: 250 GiB, 268435456000 bytes, 524288000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ubuntuicp--vg-root: 298.6 GiB, 320574849024 bytes, 626122752 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ubuntuicp--vg-swap_1: 976 MiB, 1023410176 bytes, 1998848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

For example, in the sample output, /dev/sdb is the disk name.
- Erase all filesystem, RAID, and partition-table signatures from the disk.
wipefs --all --force <device_name>
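For example, using the device name from the sample output:
wipefs --all --force /dev/sdb
Running wipefs /dev/sdb again without options prints any remaining signatures; empty output indicates that the disk is clean and ready for reuse.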
Next, complete the tasks in Configuring GlusterFS after IBM Cloud Private installation.