Training with topology and event data

To learn about cloud native analytics, you can install topology and event data sets. Learn how to install and load topology and event data, train the system, and view the results.

Before you begin

The events can be either on the cloud or on-premises. Persistence is recommended for the demo scenario. Before you complete these steps, complete the following prerequisites:
  • Ensure that topology management is enabled and deployed on the cloud.
  • Ensure that the file observer is enabled under the topology section of the deployment configuration.
  • Ensure that the Kubernetes (Kube) observer is enabled under the topology section of the deployment configuration; it is recommended for this scenario.
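These prerequisite checks can be scripted before you start. The following is a minimal sketch that scans captured pod-list output for a file observer pod; the "file-observer" naming pattern is an assumption and may differ between releases (capture the list with, for example, kubectl get pods --no-headers):

```shell
# Minimal sketch: confirm that a file observer pod appears in pod-list text
# captured from the cluster. The "file-observer" pod naming is an assumption.
# pods=$(kubectl get pods --no-headers)
has_observer() {
  if echo "$1" | grep -q "file-observer"; then
    echo "file observer found"
  else
    echo "file observer missing"
  fi
}

pods="demo-topology-file-observer-7f9c 1/1 Running
demo-kafka-0 1/1 Running"
has_observer "$pods"   # prints: file observer found
```

The same pattern can be applied to the Kube observer by changing the search string.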

About this task

Insert data into topology management to trigger status mapping and subtopology correlation.


  1. Load the topology sample data: generate an instance of the ea-events-tooling pod to load demo topology data. You must provide the topology system username, password, and external API route to the container. To configure and generate this pod:
    1. Obtain the topology base route, system username and password, and the tooling image, by running the following commands:
      export ASM_USERNAME=$(kubectl get secret $(kubectl get secret | grep topology-asm-credentials | awk '{ print $1 }') -o jsonpath --template '{.data.username}'  | base64 --decode)
      export ASM_PASSWORD=$(kubectl get secret $(kubectl get secret | grep topology-asm-credentials | awk '{ print $1 }') -o jsonpath --template '{.data.password}'  | base64 --decode)
      export ASM_HOSTNAME="https://$(oc get routes | grep topology | awk '{ print $2 }' | awk 'NR==1{print $1}')"
      export ASM_IMAGE=$(kubectl get noi -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g')
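      Before generating the pod, it is worth confirming that all four exports produced non-empty values; an empty value usually means the secret or route lookup failed. A minimal sketch, with a hypothetical helper name:

```shell
# Hypothetical helper: report any of the named shell variables that are empty.
check_env() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name}"
    if [ -z "$value" ]; then
      echo "ERROR: $name is empty" >&2
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all required variables are set"
}

check_env ASM_USERNAME ASM_PASSWORD ASM_HOSTNAME ASM_IMAGE \
  || echo "re-run the exports above before continuing"
```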
    2. Generate the events tooling pod by running the script. The usage of the script is as follows:
      Description: Load data into the topology, including resources and templates that
      correspond to the sample events. Note that this script requires a file observer to be
      active in the ASM system.
        /app/bin/ -l <hostname> -u <asm username> -p <asm password>
      Required Parameters: 
       -l  : The ASM core route or hostname including the protocol definition. Eg:
              /app/bin/  -l
       -u  : The asm system user name, typically found in the secret releasename-topology-asm-credentials
              /app/bin/  -l ... -u asm -p asm
       -p  : The asm system user password, typically found in the secret releasename-topology-asm-credentials
              /app/bin/  -l ... -u asm -p asm
      Where the parameters -l, -u, and -p correspond to the values currently in the environment variables $ASM_HOSTNAME, $ASM_USERNAME, and $ASM_PASSWORD. You can construct the events tooling pod with these environment variables, and run the script by using them as arguments. The image must be set to your ea-events-tooling image from your registry. To get the correct image name, run the following commands:
      Cloud deployment:
      kubectl get noi -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g'

      Hybrid deployment:
      kubectl get noihybrid -o yaml | grep ea-events-tooling | grep "image=" | sed 's/\\//g'

      IBM® Netcool® for AIOps deployment:
      oc get csv <noi-operator> -o yaml | grep olm.relatedImage.NOI_ea-events-tooling: | awk -F ': ' '{print $2}'
      Where <noi-operator> is the name of the noi-operator ClusterServiceVersion (CSV).
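      The awk step in the last command splits the matched line on the first ': ' and keeps the right-hand side. Illustrated on a sample line (the image reference shown is made up for demonstration):

```shell
# Demonstration of the extraction used above, on a fabricated CSV line.
line='  olm.relatedImage.NOI_ea-events-tooling: cp.icr.io/cp/noi/ea-events-tooling:1.0'
echo "$line" | awk -F ': ' '{print $2}'   # prints: cp.icr.io/cp/noi/ea-events-tooling:1.0
```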
    3. Run the script by using the following command:
      kubectl run load-topology-sample-data \
         -it --restart=Never --env=LICENSE=accept --command=true \
         --env=ASM_USERNAME=$ASM_USERNAME  \
         --env=ASM_PASSWORD=$ASM_PASSWORD  \
         --env=ASM_HOSTNAME=$ASM_HOSTNAME  \
         --image=$ASM_IMAGE
    4. Override the pod definition to use the registry secret to pull the image. Set the overrides section to contain the correct imagePullSecrets value, as follows:
      kubectl run load-topology-sample-data \
         -it --restart=Never --env=LICENSE=accept --command=true \
         --env=ASM_USERNAME=$ASM_USERNAME  \
         --env=ASM_PASSWORD=$ASM_PASSWORD  \
         --env=ASM_HOSTNAME=$ASM_HOSTNAME  \
         --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }' \
         --image=$ASM_IMAGE
      The script will:
      • Upload a sample topology data file, called topologySampleData, through the File Observer REST API.
      • Generate a file observer job to load that data into the topology system.
      • Generate a dynamic Service-to-Host sample topology template, which is used to demonstrate probable root cause and topology correlation based on the events that are uploaded in the next stage.
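After the loader pod completes, it helps to check its logs before moving on to the events stage. A hedged sketch: the kubectl commands are shown as comments, and the error-scan helper and its wording are assumptions, not output guaranteed by the script:

```shell
# Capture and inspect the loader pod's logs, then clean up the completed pod:
#   logs=$(kubectl logs load-topology-sample-data)
#   kubectl delete pod load-topology-sample-data

# Hypothetical helper: scan captured log text for error lines.
load_succeeded() {
  if echo "$1" | grep -qi "error"; then
    echo "load failed - inspect the full logs"
  else
    echo "load completed without errors"
  fi
}

load_succeeded "resources uploaded; template registered"
```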
  2. Load sample events: upload historical events to the EA ingestion service, run training for the seasonality and temporal patterns analytics, and insert corresponding live events to display the analytics in the Alert Viewer.
    1. Configure the environment variables that the ea-events-tooling pod uses. Update the yourreleasename value and run the following commands:
      export RELEASE_NAME=yourreleasename
      export HTTP_USERNAME=$(kubectl get secret  "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.username}'  | base64 --decode)
      export HTTP_PASSWORD=$(kubectl get secret  "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.password}'  | base64 --decode)
      export ADMIN_PASSWORD=$(kubectl get secret  "$RELEASE_NAME-systemauth-secret" -o jsonpath --template '{.data.password}'  | base64 --decode)
      export OMNIBUS_OS_PASSWORD=$(kubectl get secret  "$RELEASE_NAME-omni-secret" -o jsonpath --template '{.data.OMNIBUS_ROOT_PASSWORD}' | base64 --decode)
      These commands set the following environment variables, which are used as arguments in the next step: HTTP_USERNAME, HTTP_PASSWORD, ADMIN_PASSWORD, and OMNIBUS_OS_PASSWORD.
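      Each export above decodes a base64-encoded secret field. The decode step in isolation, using a throwaway value rather than a real secret:

```shell
# The same encode/decode round trip that the exports rely on, on a dummy value.
encoded=$(printf 'netcool' | base64)
printf '%s' "$encoded" | base64 --decode   # prints: netcool
```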
    2. Run the script to start a new pod, based on the ea-events-tooling image, that loads the events. This step uses the same loadSampleData script that is normally used to load sample events, but an extra argument must be supplied to load live events that correspond to the topology sample data. The usage of the script is:
      /app/bin/ -r <releasename> [-t tenant ID] [-h] [-j] [-k] [-o primaryOSHostname] \ 
                            [-p primaryOSPort] [-x primaryOSUsername] [-z primaryOSPassword] \
                            [-s dockerRegistrySecret] \
                            [-a serviceAccountName] [-s dockerRegistrySecret] \
                            [-e secretrelease] [-i sourceids] [-d]
        Training is run on both the related-events and seasonal-events algorithms
        The tenantid option controls the tenantid that data is trained against. It should only 
        be specified if the tenantid has been changed from the derived default in the 
        noiusers credentials section of the values.yaml. 
        The default tenantid is 'cfd95b7e-3bc7-4006-a4a8-a73a79c71255' if not specified
        -j generates YAML to define a job to invoke this script instead of running it directly
         If specifying -j, set --env=CONTAINER_IMAGE= to the same value you specified for --image 
         in the kubectl run command so that the image location is correctly specified in the job.
        -k stops all training for patterns / temporal grouping
        -d enables live events suitable for the topology based data scenario to be inserted.
           note that this option cannot be used with option -i, as topology does not support
           the use of multiple sourceIds
      You must supply the -d argument to enable the topology data to be loaded. Do not use the -k option in this scenario, unless you have already run training for the sample data. An example usage is:
      kubectl run load-sample-events \
         -it --restart=Never --env=LICENSE=accept  \
         --command=true     \
          --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "noi-registry-secret"}] } }' \
          --image=$ASM_IMAGE -- ./ -r $RELEASE_NAME -z $OMNIBUS_OS_PASSWORD -t cfd95b7e-3bc7-4006-a4a8-a73a79c71255 -o <primaryOSHostname> -p <primaryOSPort> -d
      The --overrides section is optional, and can be removed if your cluster does not need an image pull secret to pull the ea-events-tooling image.
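The usage text notes that -d cannot be combined with -i. That documented constraint can be sketched as a small validation function (a hypothetical helper, not the actual script's code):

```shell
# Hypothetical validation mirroring the documented -d / -i incompatibility:
# topology does not support multiple sourceIds.
# $1: "yes" if -d (topology live events) is requested; $2: "yes" if -i is.
validate_flags() {
  if [ "$1" = "yes" ] && [ "$2" = "yes" ]; then
    echo "error: -d cannot be used with -i"
    return 1
  fi
  echo "flags ok"
}

validate_flags yes no   # prints: flags ok
```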


You successfully loaded demo scenarios that include topology and event data.