If you are running a hybrid system and have Event Analytics installed as part of your on-premises installation, you can configure the system to display event groups from on-premises Event Analytics together with event groups from cloud native analytics.
About this task
Once set up, the configuration works in the following way:
- Related event groups and patterns on the on-premises Event Analytics continue to
run against live event data and identify which events need to be correlated.
- Based on the correlations identified, triggers are run and store the runtime correlation
information in two new columns created within the ObjectServer.
- Runtime correlation data is sent across to the cloud native analytics cluster by means of a
gateway and is used to configure the ObjectServer values on the cluster.
- Two new scope-based policies are loaded onto the cluster. These policies are triggered by
updates from the triggers. Once an update is received, the scope-based policies create scope-based
groupings based on the correlation data, and these scope-based groups are presented in the Alerts
page, as described in Displaying analytics details for an alert group.
The configuration is made up of the following components:
- Extra columns in the ObjectServer to store data related to the event groups discovered by
on-premises Event Analytics.
- Netcool®/OMNIbus
triggers that load the new columns with data related to the event groups discovered by on-premises
Event Analytics.
- The cloud native analytics cluster gateway that reads the content of the columns and sends this data to the cloud side of the hybrid system, for display in the GUI, along with the event groups from cloud native analytics.
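For illustration only, the following sketch shows how the correlation data can be inspected once the configuration is complete. The column names are taken from the gateway mapping in part B of the procedure; the assumption that they are created in the alerts.status table, and the OSSERVER, OSUSER, and OSPASSWORD placeholders, are explained in the Procedure section.
# Sketch: show events that carry on-premises correlation data (assumes the
# columns are added to alerts.status with the names used by the gateway mapping).
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD << 'EOF'
select Serial, CEAImpactREGroupScopeId, CEAImpactPatternScopeId
from alerts.status
where CEAImpactREGroupScopeId != '';
go
EOF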
The configuration procedure itself is made up of three parts:
- Set up ObjectServer triggers to send correlation data to the cloud native analytics cluster
- Configure the relevant cloud native analytics cluster gateway to read
the correlation data
- Add two new scope-based policies to the cloud native analytics cluster to create
scope-based groupings based on the correlation data
Procedure
A. Set up ObjectServer triggers: the following SQL introduces the relevant triggers into the ObjectServer
- Navigate to the following directory:
$IMPACT_HOME/add-ons/RelatedEvents/db/
Where $IMPACT_HOME is the Netcool/Impact home directory; for example, /opt/IBM/tivoli/impact.
- Check that the ObjectServer trigger group cea_triggers_gw exists and is enabled. This trigger group was created when your hybrid system was set up.
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping_trigger_group_check.sql
Where:
OSSERVER is the name of the ObjectServer instance.
OSUSER is the userid of the ObjectServer administrator user.
OSPASSWORD is the password of the ObjectServer administrator user.
The following output shows that the trigger group is defined and
enabled.
GroupName IsEnabled
---------------------------------------- ---------
cea_triggers_gw 1
(1 row affected)
Warning:
If the trigger group cea_triggers_gw does not exist or is disabled, this indicates either that you are not running a hybrid system, or that there is a serious issue with the system; do not proceed any further until the situation is corrected. Check with the system administrator that your system is part of a hybrid system.
- Create new columns in the ObjectServer to store data related to the event groups discovered by on-premises Event Analytics, by running the following SQL update when the ObjectServers are running.
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping.sql
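Optionally, you can confirm that the new columns now exist before you continue. The following check is a sketch: it assumes the columns are created in the alerts.status table with the names used by the gateway mapping in part B.
# Sketch: list the alerts.status columns and look for the two new entries.
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD << 'EOF'
describe alerts.status;
go
EOF
Check that CEAImpactREGroupScopeId and CEAImpactPatternScopeId appear in the output.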
- Add and activate the new triggers by running the following SQL update when the
ObjectServers are running.
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping_triggers.sql
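Optionally, you can confirm that the new triggers are now defined. This sketch queries the ObjectServer trigger catalog and assumes the triggers are created in the cea_triggers_gw group that you checked earlier; adjust or remove the filter if the SQL file places them in a different group.
# Sketch: list the triggers in the cea_triggers_gw group.
netcool/omnibus/bin/nco_sql -server OSSERVER -user OSUSER -password OSPASSWORD << 'EOF'
select TriggerName, GroupName from catalog.triggers where GroupName = 'cea_triggers_gw';
go
EOF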
- Repeat steps 1 to 3 for the secondary ObjectServer. Ensure that you use appropriate values for OSSERVER, OSUSER, and OSPASSWORD.
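If you prefer to script the repeat, the following sketch runs all three SQL files against both ObjectServers in one pass. AGG_P and AGG_B are placeholder names for your primary and secondary ObjectServers; substitute your own server names and credentials.
# Sketch: apply the trigger group check, column creation, and trigger SQL to both ObjectServers.
for OS in AGG_P AGG_B; do
  netcool/omnibus/bin/nco_sql -server ${OS} -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping_trigger_group_check.sql
  netcool/omnibus/bin/nco_sql -server ${OS} -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping.sql
  netcool/omnibus/bin/nco_sql -server ${OS} -user OSUSER -password OSPASSWORD < ./relatedevents_objectserver_cea_scope_based_grouping_triggers.sql
done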
B. Configure the cloud native analytics cluster gateway to read the correlation data
- Log in to your cluster. From a command window, run the oc login command and provide the OpenShift® Container Platform server URL (and optionally a token). For example, without a token:
oc login https://Mycluster_hostName:8443
With a token:
oc login --token=AbcdE1fgHijKLM2moP3q4rs5TU6vw7xYz --server=https://Mycluster_hostName:6443
- Save a backup copy of the cloud native analytics cluster gateway configmap
by issuing the following commands:
export EANOIGATEWAY=$(oc get configmap | grep eanoigateway | awk '{print $1}')
oc get configmap ${EANOIGATEWAY} -o yaml > ${EANOIGATEWAY}.bak
- Edit the gateway configmap.
oc edit configmap ${EANOIGATEWAY}
- Add the two new columns to correspond to the new columns added to the ObjectServer in step 3. Add the two columns after the entry for ScopeID. Before you make the change, the relevant section of the config map looks like this:
\t =\t'@ServerSerial',\n 'CEAEventScore' = '@CEAEventScore',\n 'CEAEventClassification'
= '@CEAEventClassification',\n 'ScopeID' = \t'@ScopeID'\n);"
After you make the change, that part of the config map should look like this. The new entries to add are the CEAImpactREGroupScopeId and CEAImpactPatternScopeId lines.
\t =\t'@ServerSerial',\n 'CEAEventScore' = '@CEAEventScore',\n 'CEAEventClassification'
= '@CEAEventClassification',\n 'ScopeID' = \t'@ScopeID',\n 'CEAImpactREGroupScopeId'
= \t'@CEAImpactREGroupScopeId',\n 'CEAImpactPatternScopeId'
= \t'@CEAImpactPatternScopeId'\n);"
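To confirm that your edit was saved, you can read the configmap back and check for the two new entries.
# Sketch: the output should include both the CEAImpactREGroupScopeId and CEAImpactPatternScopeId mappings.
oc get configmap ${EANOIGATEWAY} -o yaml | grep CEAImpact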
- Restart the gateway to put the changes into effect. Do this by deleting the associated gateway pod; a new gateway pod automatically starts up using the updated configmap parameters.
- First determine the name of the existing gateway pod. Do this by running the following command to store gateway pod information in an environment variable.
export EANOIGATEWAY_POD=$(oc get pod | grep ${EANOIGATEWAY})
Then display information about the pod.
echo ${EANOIGATEWAY_POD}
This produces a result similar to the following:
cluster_release_name-hybrid-ea-noi-layer-eanoigateway-unique_pod_reference 1/1 Running 0 9d
Where:
cluster_release_name is the release name of the cluster.
unique_pod_reference is a combination of characters and numbers that uniquely identifies the pod.
9d in the preceding example shows that the pod has been running for 9 days. This value varies depending on the circumstances of the pod.
For example:
MyRelease-hybrid-ea-noi-layer-eanoigateway-6b8fcffdc4-5q2rm 1/1 Running 0 9d
- Delete the pod by running the following command:
oc delete pod cluster_release_name-hybrid-ea-noi-layer-eanoigateway-unique_pod_reference
For example:
oc delete pod MyRelease-hybrid-ea-noi-layer-eanoigateway-6b8fcffdc4-5q2rm
- Wait a short time and then run the following command to check that the new pod has restarted.
oc get pod | grep ${EANOIGATEWAY}
You should see a result similar to the following. In this example response, the new pod is shown as having been running for 30 seconds.
cluster_release_name-hybrid-ea-noi-layer-eanoigateway-unique_pod_reference 1/1 Running 0 30s
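Optionally, you can check the new pod's log to confirm that the gateway started cleanly with the updated mapping. This is a general sketch; the exact log content varies by release.
# Sketch: capture just the pod name, then show the last few log lines.
export EANOIGATEWAY_POD=$(oc get pod | grep ${EANOIGATEWAY} | awk '{print $1}')
oc logs ${EANOIGATEWAY_POD} --tail=20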
C. Add two new scope-based policies to the cloud native analytics cluster
- First, for each scope-based policy, copy the policy code from the following code blocks into a JSON file.
ceaimpactregroupscopeid_policy.json
- This code creates a scope-based group within the cloud native analytics system based on the
correlations passed across by the triggers from the related event group configurations on the
on-premises Event Analytics
side.
{
"type": "correlation",
"groupid": "scope",
"dynamic": true,
"configuration": {
"deployed": true,
"properties": [
"/details/CEAImpactREGroupScopeId"
]
},
"metadata": {
"model": {
"analytic": "scope",
"type": "analytic"
}
},
"resolver": {
"stub": "com.ibm.itsm.inference.resolver.ScopeResolver",
"version": "1.0.1"
}
}
ceaimpactpatternscopeid_policy.json
- This code creates a scope-based group within the cloud native analytics system based on the
correlations passed across by the triggers from the pattern processing on the on-premises Event Analytics
side.
{
"type": "correlation",
"groupid": "scope",
"dynamic": true,
"configuration": {
"deployed": true,
"properties": [
"/details/CEAImpactPatternScopeId"
]
},
"metadata": {
"model": {
"analytic": "scope",
"type": "analytic"
}
},
"resolver": {
"stub": "com.ibm.itsm.inference.resolver.ScopeResolver",
"version": "1.0.1"
}
}
- Save the JSON files in a convenient location on your filesystem. For example, create a temporary directory tmp, and then create a json directory inside of it. Save the two files in tmp/json.
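For example, from the directory where you plan to work:
# Sketch: create the directory layout and confirm that both policy files are in place.
mkdir -p tmp/json
# Save ceaimpactregroupscopeid_policy.json and ceaimpactpatternscopeid_policy.json into tmp/json, then:
ls tmp/json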
- Create two environment variables, ADMIN_PASSWORD and POLICY_URL, by running the following commands:
- ADMIN_PASSWORD
export ADMIN_PASSWORD=$(oc get secret cluster_release_name-systemauth-secret -o jsonpath --template '{.data.password}' | base64 --decode)
Where cluster_release_name is the release name of the cluster.
- POLICY_URL
export POLICY_URL=$(echo "https://$(oc get route | grep policies | awk '{print $2}')$(oc get route | grep policies | awk '{print $3}')"/system)/v1/cfd95b7e-3bc7-4006-a4a8-a73a79c71255/policies/system
- Check that the two environment variables have been assigned values.
env | grep ADMIN_PASSWORD
env | grep POLICY_URL
- Navigate to the tmp directory.
- For each JSON file, issue a POST request to apply the policies to the policy registry.
This will install them into the system.
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --user system:${ADMIN_PASSWORD} -T ./json/ceaimpactregroupscopeid_policy.json -vk ${POLICY_URL}
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --user system:${ADMIN_PASSWORD} -T ./json/ceaimpactpatternscopeid_policy.json -vk ${POLICY_URL}
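If you want the command to report only whether the request succeeded, the following variant (same endpoint and credentials as above) prints the HTTP status code instead of the full response; a 2xx code indicates that the policy was accepted. Use it in place of the corresponding command above rather than running both.
# Sketch: POST the policy file and print only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --user system:${ADMIN_PASSWORD} -T ./json/ceaimpactregroupscopeid_policy.json -k ${POLICY_URL}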