IBM Support

QRadar: All hosts in your deployment must be at the same version

Troubleshooting


Problem

The QRadar console and all managed hosts in your deployment must run the same software version to avoid replication failures, deployment failures, and other negative side effects.

Symptom

You might find that deployments and replication are failing because a managed host is at a different software version than the console. If so, the console's qradar.log file contains messages similar to the following, indicating that replication is failing:
Jul 28 12:22:17 hostname-con replication[17372]: Version mismatch.  Console is at version 7.3.2.20190705120852 and managedhost is at version 7.3.1.
Jul 28 12:22:17 hostname-con replication[17372]: Not providing dumps for host 198.51.100.50.

For deployments, when the deployment times out, you see messages similar to the following:

Jul 28 12:20:16 ::ffff:198.51.100.50 [hostcontext.hostcontext] [ConfigChangeObserver Timer[1]] com.q1labs.hostcontext.configuration.ConfigSetUpdater: [ERROR] [NOT:0000003000][198.51.100.50/- -] [-/- -]Failed to download and process global set 
Jul 28 12:20:16 ::ffff:198.51.100.50 [hostcontext.hostcontext] [ConfigChangeObserver Timer[1]] com.q1labs.hostcontext.exception.HostContextConfigException: Failed to download new configuration set
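If you want to check for these signatures directly, a simple grep over the log works. The following is a minimal sketch, not an IBM-supplied utility; it demonstrates the search against a sample extract, and the /var/log/qradar.log path mentioned in the comment is an assumption about the usual log location on a console:

```shell
# Sketch: search log text for the two failure signatures shown above.
# On a console you would point this at the live log (typically
# /var/log/qradar.log -- an assumption, verify on your system).
check_log() {
  grep -E "Version mismatch|Failed to download new configuration set" "$1"
}

# Demo against a sample extract standing in for qradar.log.
cat > /tmp/qradar_sample.log <<'EOF'
Jul 28 12:22:17 hostname-con replication[17372]: Version mismatch.  Console is at version 7.3.2.20190705120852 and managedhost is at version 7.3.1.
Jul 28 12:22:18 hostname-con replication[17372]: healthy status message
EOF
check_log /tmp/qradar_sample.log
```

Only the mismatch line is printed; routine log lines are filtered out.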

Cause

The QRadar console is responsible for replicating its database and for pushing deployment configuration (by using Deploy Changes) to the managed hosts in the deployment. Between QRadar versions, configuration templates and database schemas can change, so a version mismatch between the console and a managed host causes both deployments and database replication to fail.

Diagnosing The Problem

There are two easy ways to verify that all hosts in the deployment are at the same level. The first is the deployment_info.sh script. Run it with the -OH flags to display the Build version for each host. An example of this output is below:
[root@hostname-con ~]# /opt/qradar/support/deployment_info.sh -OH
INFO: Gathering deployment information. This may take a while...
Hostname              IP             HA Status  Appliance   Hardware                 Build                 Serial #                                                CPUs                                          Disks
hostname-apphost      198.51.100.20  N/A        4000        VMware Virtual Platform  7.3.2.20190705120852  VMware-42 24 79 51 37 75 7d d1-61 e7 d3 3f 03 30 f6 29  4 x Intel(R) Xeon(R) CPU E5-4650 0 @ 2.70GHz  0 x 
hostname-con-primary  198.51.100.10  active     3199        VMware Virtual Platform  7.3.2.20190705120852  VMware-42 24 ed b2 b0 59 fd e2-10 bd 93 f0 0d 6f 93 97  8 x Intel(R) Xeon(R) CPU E5-4650 0 @ 2.70GHz  0 x 
hostname-ep-primary   198.51.100.50  active     1699        VMware Virtual Platform  7.3.1.20180507202600  VMware-42 24 88 3c b6 52 62 4e-cf 8b a8 33 3d a0 83 50  4 x Intel(R) Xeon(R) CPU E5-4650 0 @ 2.70GHz  0 x 

Here you can easily see that the console is running 7.3.2.20190705120852, while the Event Processor (EP) is running an older 7.3.1 version.

One pitfall of the deployment_info.sh script is that, for HA pairs, it queries only the active node. To see all hosts in the deployment, including secondary hosts that are in standby, use the all_servers.sh script (use -C to include the console and -k to include all standby systems) with the myver utility: /opt/qradar/support/all_servers.sh -Ck "/opt/qradar/bin/myver -tf". An example of this output is below:

[root@hostname-732con-primary ~]# /opt/qradar/support/all_servers.sh -Ck "/opt/qradar/bin/myver -tf"
198.51.100.11 -> hostname-con-primary.example.com
Appliance Type: 3199 Product Version: 7.3.2.20190705120852
 14:45:52 up 2 days, 15:48,  1 user,  load average: 1.49, 1.13, 1.00
------------------------------------------------------------------------
7.3.2.20190705120852

198.51.100.12 -> hostname-con-secondary.example.com
Appliance Type: 500 Product Version: 7.3.2.20190705120852
 14:45:52 up 2 days, 15:49,  0 users,  load average: 0.08, 0.18, 0.18
------------------------------------------------------------------------
7.3.2.20190705120852

198.51.100.20 -> hostname-apphost.example.com
Appliance Type: 4000 Product Version: 7.3.2.20190705120852
 14:45:53 up 1 day,  2:25,  0 users,  load average: 0.52, 0.62, 0.60
------------------------------------------------------------------------
7.3.2.20190705120852

198.51.100.51 -> hostname-ep-primary.example.com
Appliance Type: 1699 Product Version: 7.3.1.20180507202600
 14:45:53 up 22:31,  0 users,  load average: 1.45, 0.94, 0.60
------------------------------------------------------------------------
7.3.1.20180507202600

198.51.100.52 -> hostname-ep-secondary.example.com
Appliance Type: 500 Product Version: 7.3.1.20180507202600
 14:45:53 up 23:19,  0 users,  load average: 0.03, 0.10, 0.10
------------------------------------------------------------------------
7.3.1.20180507202600
From this output, you can see every host in the deployment, including both nodes of each HA cluster.
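Once each host's version is collected (for example, from the output above), spotting the odd host out is a simple string comparison of each build against the console's build. The helper below is an illustrative sketch, not a QRadar-supplied tool; the host names and versions are taken from the example output:

```shell
# Sketch: report any host whose build differs from the console's build.
# Reads "hostname version" pairs from stdin; illustrative only.
report_mismatches() {
  console_ver="$1"
  while read -r host ver; do
    [ "$ver" = "$console_ver" ] || echo "MISMATCH: $host is at $ver"
  done
}

# Demo with the versions from the example output above.
report_mismatches "7.3.2.20190705120852" <<'EOF'
hostname-con-primary.example.com 7.3.2.20190705120852
hostname-apphost.example.com 7.3.2.20190705120852
hostname-ep-primary.example.com 7.3.1.20180507202600
hostname-ep-secondary.example.com 7.3.1.20180507202600
EOF
```

In this demo, the two Event Processor nodes are reported because their 7.3.1 build differs from the console's 7.3.2 build.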

Resolving The Problem

Important: Running mixed software versions in your deployment is unsupported and can have adverse effects on the environment.
All hosts in your deployment must be at the same software version and patch level. Use the utilities described in Diagnosing the Problem to identify any hosts running a different version, then patch those hosts to the same version as the console.


Document Location

Worldwide


Document Information

Modified date:
07 August 2019

UID

ibm10960936