New in 7.4.2: If the QRadar 7.4.2 upgrade detects stand-alone or clustered event collectors with GlusterFS in your deployment, the upgrade fails. You must run a migration script separately on QRadar 7.3.2 Fix Pack 3 or later before you upgrade to QRadar 7.4.2. If your event collectors were deployed on QRadar 7.1 or earlier and then upgraded to a later version, you must update the file systems table (fstab) before you migrate GlusterFS to Distributed Replicated Block Device.
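If you're not sure whether an event collector still uses the older ext4 /store layout, one quick check is to read the filesystem type that fstab records for /store. This is a sketch, not an IBM-provided command:

```shell
# Sketch: print the filesystem type recorded for /store in /etc/fstab.
# Output of "ext4" means the file systems table must be updated
# before you migrate GlusterFS to Distributed Replicated Block Device.
awk '$2 == "/store" { print $3 }' /etc/fstab
```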
Before you begin
Ensure that no terminal sessions have open files or working directories in the /store partition on the event collectors before you run the script.
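One way to confirm that nothing still holds files under /store is to scan open file descriptors. This is a hedged sketch that assumes a Linux /proc layout; it is not an IBM-provided check:

```shell
# Sketch: count open file descriptors that resolve to paths under the
# given directory. A result of 0 means no process (including a shell
# session) still holds files there.
open_handles_under() {
    dir=$1
    count=0
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case $target in
            "$dir"/*) count=$((count + 1)) ;;
        esac
    done
    echo "$count"
}

open_handles_under /store
```

If the lsof or fuser utilities are installed, `lsof +D /store` or `fuser -vm /store` report the same information.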
About this task
You can migrate the event collectors from GlusterFS to Distributed Replicated Block Device without upgrading to QRadar 7.4.2. However, your event collectors must be migrated to Distributed Replicated Block Device before you upgrade to QRadar 7.4.2 or later. The migration can be started only from the QRadar Console and runs sequentially on the event collectors. A backup check runs to ensure that enough space is available to back up the /store partition.
Important: If you have a large /store partition, for example 50 TB, creating the high-availability Distributed Replicated Block Device might take a few days to complete. You must wait until the synchronization completes before you upgrade QRadar.
Deployments of QRadar 7.1 or earlier still have an ext4 /store partition after you upgrade to later versions. The script that converts GlusterFS to Distributed Replicated Block Device doesn't convert /store from ext4 to xfs. You must update the file systems table (fstab) before you migrate GlusterFS to Distributed Replicated Block Device.
Procedure
- If you are upgrading from a QRadar version that still has an ext4
/store partition, follow these steps:
- Run the blkid command on the device to obtain the new UUID.
For example: # blkid /dev/sda8
The result might look
similar to this example:
/dev/sda8: UUID="9f7b2450-0873-45b9-be9f-dcc3f534acf2" TYPE="xfs" PARTLABEL="/store" PARTUUID="5b9b130a-76bc-449d-ac6d-76eab755d6df"
- Open the file /etc/fstab in a text editor and locate the line
that is similar to the following example:
UUID=102a8849-1d93-4650-9de2-59a8ce0e8a77 /store ext4 defaults 1 2
- Edit the line to make the following changes and then save the file.
Table 1. UUID and parameters
Parameter | Value
UUID | New UUID that is generated by the blkid command
Type | xfs
Backup operation value | 0
File system check order | 0
For example:
UUID=9f7b2450-0873-45b9-be9f-dcc3f534acf2 /store xfs defaults 0 0
- Mount the /store partition manually by running the mount
-a command.
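The fstab edit in the previous steps can also be scripted. This is a minimal sketch that assumes a single /store entry in fstab; it substitutes the new UUID, the xfs type, and the zeroed dump and fsck fields in one pass, and prints the result to stdout so you can review it before overwriting /etc/fstab:

```shell
# Sketch: rewrite the /store entry to xfs with the new UUID and zeroed
# backup/check fields. Prints the updated table for review; redirect to
# a file and replace /etc/fstab only after verifying the output.
NEW_UUID="9f7b2450-0873-45b9-be9f-dcc3f534acf2"   # value reported by blkid
awk -v uuid="$NEW_UUID" '
    $2 == "/store" { $1 = "UUID=" uuid; $3 = "xfs"; $5 = 0; $6 = 0 }
    { print }
' /etc/fstab
```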
- Download the latest version of the migration script from the Script section of Fix Central
(https://www.ibm.com/support/fixcentral/swg/selectFixes?parent=IBM%20Security&product=ibm/Other+software/IBM+Security+QRadar+SIEM&release=7.4.0&platform=Linux&function=all).
- Copy the migration script that you downloaded to the QRadar Console by typing the
following command:
scp <filename> <user>@<IP_address>:/opt/qradar/ha/bin/<filename>
Important: This script must be copied to the directory
/opt/qradar/ha/bin or it doesn't run.
- In the /opt/qradar/ha/bin/ directory, enter the following command to set the permissions on the script:
chmod +x glusterfs_migration_manager-<script_version>.bin
- To verify the script, run the following command:
ls -ltrh glusterfs_migration_manager-<script_version>.bin
The result might look similar to this example:
-rwxr-xr-x 1 root root 9.8M Feb 9 12:14 glusterfs_migration_manager-<script_version>.bin
- For all versions of QRadar, run the migration script from the
QRadar Console by typing the following command:
/opt/qradar/ha/bin/glusterfs_migration_manager-<script_version>.bin -m
Important: If you get an error that there is not enough storage space during the migration from GlusterFS to Distributed Replicated Block Device, do not point the backup to the /store directory. Pointing to the /store directory interferes with the stability of the system. For more information, see https://www.ibm.com/support/pages/node/6413281.
The following table describes the migration
parameters that you can use in the command.
Table 2. GlusterFS migration parameters
Parameter | Description
-h | Shows the help information for GlusterFS migration.
-p | Copies this executable file and runs the precheck on all hosts that might require a migration.
-m | Starts the migration process on all applicable hosts. By default, the /storetmp/backup partition is used to back up the /store partition, but you can provide a different backup partition with the migrate option.
-s | Provides details about the migration status of applicable hosts in the deployment.
--debug | Runs with another option to enable debug output.
The time to complete the migration of a single HA event collector host is
approximately 20 - 25 minutes. The time depends on how much data is backed up before the
/store partition is wiped to make space for Distributed Replicated Block
Device.
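Because the backup check compares /store usage against the backup partition, it can help to look at the numbers yourself before you start. A minimal sketch, assuming standard POSIX `df` output columns and the default /storetmp backup location:

```shell
# Sketch: report used KiB on /store and available KiB on the backup
# partition, so you can judge whether the pre-migration backup fits.
used()  { df -Pk "$1" 2>/dev/null | awk 'NR == 2 { print $3 }'; }
avail() { df -Pk "$1" 2>/dev/null | awk 'NR == 2 { print $4 }'; }

echo "/store used (KiB):     $(used /store)"
echo "/storetmp avail (KiB): $(avail /storetmp)"
```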
Results
All services are stopped on the event collectors during the migration from GlusterFS to Distributed Replicated Block Device. After the migration, the event collectors work the same way as any other host that uses Distributed Replicated Block Device.