Training with topology and event data
To learn about cloud native analytics, you can install sample topology and event data sets. Learn how to install and load the topology and event data, train the system, and see the results.
Before you begin
- Ensure that topology management is enabled and deployed on the cloud.
- Ensure that the file observer is enabled under the topology section of the deployment configuration.
- The Kubernetes observer is also desirable and recommended under the topology section of the deployment configuration (a quick check for running observers is sketched after this list).
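If you want to confirm that the required observers are deployed before you continue, a minimal check is to list the observer pods. This is a sketch only; the exact pod names depend on your release name and namespace:
kubectl get pods | grep -i observer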
About this task
Procedure
- Load the topology sample data: you need to generate an instance of the events tooling pod to load demo topology data. You need to provide the topology system username, password, and external API routes to the container. To configure and generate this pod:
- Obtain the topology base route, system username, password, and events tooling image by running the following commands:
export ASM_USERNAME=$(kubectl get secret $(kubectl get secret | grep topology-asm-credentials | awk '{ print $1 }') -o jsonpath --template '{.data.username}' | base64 --decode)
export ASM_PASSWORD=$(kubectl get secret $(kubectl get secret | grep topology-asm-credentials | awk '{ print $1 }') -o jsonpath --template '{.data.password}' | base64 --decode)
export ASM_HOSTNAME="https://$(oc get routes | grep topology | awk '{ print $2 }' | awk 'NR==1{print $1}')"
export ASM_IMAGE=$(kubectl get noi -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g')
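Optionally, verify that the variables resolved to non-empty values before you continue. This is a quick sanity check rather than part of the documented procedure:
echo "$ASM_USERNAME" "$ASM_HOSTNAME" "$ASM_IMAGE"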
- Generate the events tooling pod by running the loadTopologyData.sh script. The usage of the script is as follows:
/app/bin/loadTopologyData.sh
Description: Load data into the topology, including resources and templates that correspond to the loadSampleData.sh script. Note that this script requires a file observer to be active in the ASM system.
Usage: /app/bin/loadTopologyData.sh -l <hostname> -u <asm username> -p <asm password>
Required Parameters:
-l : The ASM core route or hostname, including the protocol definition.
     Eg: /app/bin/loadTopologyData.sh -l https://my-env-topology.namespace.apps.clustername.os.clusterdomain.com
-u : The asm system user name, typically found in the secret releasename-topology-asm-credentials.
     Eg: /app/bin/loadTopologyData.sh -l ... -u asm -p asm
-p : The asm system user password, typically found in the secret releasename-topology-asm-credentials.
     Eg: /app/bin/loadTopologyData.sh -l ... -u asm -p asm
Where the parameters -l, -u, and -p correspond to the values currently in the environment variables $ASM_HOSTNAME, $ASM_USERNAME, and $ASM_PASSWORD. You can construct the events tooling pod with these environment variables, and run the script by using these environment variables as arguments. The image must be set to your ea-events-tooling image from your registry. To get the correct image name, run the following commands:
Cloud deployment:
kubectl get noi -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g'
Hybrid deployment:
kubectl get noihybrid -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g'
IBM® Netcool® for AIOps deployment:
oc get csv <noi-operator> -o yaml | grep olm.relatedImage.NOI_ea-events-tooling: | awk -F ': ' '{print $2}'
Where <noi-operator> is the noi-operator CSV file name.
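For example, on a hybrid deployment you could capture the image reference into the $ASM_IMAGE environment variable that the later kubectl run examples use; this simply wraps the command above, and on a cloud deployment you would use the kubectl get noi form instead:
export ASM_IMAGE=$(kubectl get noihybrid -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g')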
- Run the script by using the following command:
kubectl run load-topology-sample-data \
  -it --restart=Never --env=LICENSE=accept --command=true \
  --env=ASM_USERNAME=$ASM_USERNAME \
  --env=ASM_PASSWORD=$ASM_PASSWORD \
  --env=ASM_HOSTNAME=$ASM_HOSTNAME \
  --image=$ASM_IMAGE -- ./entrypoint.sh loadTopologyData.sh -l $ASM_HOSTNAME -p $ASM_PASSWORD -u $ASM_USERNAME
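Because the pod is created with --restart=Never, it runs once and then stops. To inspect its output, or to re-run the load (for example with the registry secret override in the next step), you can use standard kubectl commands; the pod name comes from the kubectl run example above:
kubectl logs load-topology-sample-data
kubectl delete pod load-topology-sample-data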
- Override the script to use the registry secret to pull the image. Set the overrides section to contain the correct imagePullSecret as follows:
kubectl run load-topology-sample-data \
  -it --restart=Never --env=LICENSE=accept --command=true \
  --env=ASM_USERNAME=$ASM_USERNAME \
  --env=ASM_PASSWORD=$ASM_PASSWORD \
  --env=ASM_HOSTNAME=$ASM_HOSTNAME \
  --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }' \
  --image=$ASM_IMAGE -- ./entrypoint.sh loadTopologyData.sh -l $ASM_HOSTNAME -p $ASM_PASSWORD -u $ASM_USERNAME
The script will:
- Upload sample topology data through the File Observer REST API, called topologySampleData.
- Generate a file observer job to upload that data into the topology system.
- Generate a dynamic Service to Host sample topology template, which will be used to demonstrate probable root cause and topology correlation based on the events that are uploaded in the next stage.
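As an optional check that the sample resources reached the topology service, you can query the topology REST API through the route captured in $ASM_HOSTNAME. The API path and tenant header shown here are assumptions based on the default Agile Service Manager topology API and the default tenant ID mentioned later in this task; adjust them for your environment:
curl -k -u "$ASM_USERNAME:$ASM_PASSWORD" \
  -H 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' \
  "$ASM_HOSTNAME/1.0/topology/resources?_limit=5"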
- Load sample events: upload historical events to the EA ingestion service, run training for seasonality and temporal patterns analytics, and insert corresponding live events to display the analytics in the Alert Viewer.
- Configure the environment variables that the ea-events-tooling pod uses. Update the releasename value and run the following commands:
export RELEASE_NAME=yourreleasename
export HTTP_USERNAME=$(kubectl get secret "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.username}' | base64 --decode)
export HTTP_PASSWORD=$(kubectl get secret "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.password}' | base64 --decode)
export ADMIN_PASSWORD=$(kubectl get secret "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.password}' | base64 --decode)
export OMNIBUS_OS_PASSWORD=$(kubectl get secret "$RELEASE_NAME-omni-secret" -o jsonpath --template '{.data.OMNIBUS_ROOT_PASSWORD}' | base64 --decode)
These commands set the following environment variables, which are used as arguments in the next step: HTTP_USERNAME, HTTP_PASSWORD, ADMIN_PASSWORD, and OMNIBUS_OS_PASSWORD.
- Run the loadSampleData.sh script to start a new pod by using the ea-events-tooling image that loads the events. This uses the same loadSampleData.sh script that is usually used to load sample events. An extra argument must be supplied to load live events that correspond to the topology sample data. The usage of the script is:
/app/bin/loadSampleData.sh -r <releasename> [-t tenant ID] [-h] [-j] [-k] [-o primaryOSHostname] \
    [-p primaryOSPort] [-x primaryOSUsername] [-z primaryOSPassword] \
    [-s dockerRegistrySecret] \
    [-a serviceAccountName] \
    [-e secretrelease] [-i sourceids] [-d]
Where:
    Training is run on both the related-events and seasonal-events algorithms.
    The tenantid option controls the tenantid that data is trained against. It should only be specified if the tenantid has been changed from the derived default in the noiusers credentials section of the values.yaml. The default tenantid is 'cfd95b7e-3bc7-4006-a4a8-a73a79c71255' if not specified.
    -j generates YAML to define a job to invoke this script instead of running it directly.
       If specifying -j, set --env=CONTAINER_IMAGE= to the same value you specified for --image in the kubectl run command so that the image location is correctly specified in the job.
    -k stops all training for patterns / temporal grouping.
    -d enables live events suitable for the topology based data scenario to be inserted.
       Note that this option cannot be used with option -i, as topology does not support the use of multiple sourceIds.
You need to supply the -d argument to enable the topology data to be loaded. It is suggested that you do not use the -k option in this scenario, unless you have already run training for the sample data. An example usage is:
kubectl run load-sample-events \
  -it --restart=Never --env=LICENSE=accept \
  --command=true \
  --env=HTTP_USERNAME=$HTTP_USERNAME \
  --env=HTTP_PASSWORD=$HTTP_PASSWORD \
  --env=ADMIN_PASSWORD=$ADMIN_PASSWORD \
  --env=JDBC_PASSWORD=$OMNIBUS_OS_PASSWORD \
  --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }' \
  --image=$ASM_IMAGE -- ./entrypoint.sh loadSampleData.sh -r $RELEASE_NAME -z $OMNIBUS_OS_PASSWORD -t cfd95b7e-3bc7-4006-a4a8-a73a79c71255 -o primaryOSHostname -p primaryOSPort -d
The --overrides section is optional, and can be removed in certain circumstances.
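As with the topology load, the load-sample-events pod runs to completion and then stops. You can follow the training and event insertion progress in its logs, and delete the pod afterwards if you want to re-run the load; these are standard kubectl commands and the pod name comes from the example above:
kubectl logs -f load-sample-events
kubectl delete pod load-sample-events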