Online starter installation of IBM Cloud Pak for AIOps (CLI)
If you are a trial user of IBM Cloud Pak for AIOps, or you want a proof-of-concept deployment that does not require a sustained workload, consider a starter installation to get a smaller, nonproduction deployment up and running quickly.
If you are installing on Azure Red Hat OpenShift (ARO), Google Cloud Platform (GCP), Red Hat OpenShift on IBM Cloud service (ROKS), or Red Hat OpenShift Service on AWS (ROSA), then review the information in Installing on cloud platforms.
If you require a production-grade deployment, or an offline air-gapped deployment, see Installing IBM Cloud Pak for AIOps.
Before you begin
- You must know whether you are deploying a base deployment or an extended deployment of IBM Cloud Pak for AIOps. For more information, see Incremental adoption.
- Review the Planning section, and ensure that your cluster meets the system requirements for a starter deployment.
- Online installations of IBM Cloud Pak for AIOps can be run entirely as a non-root user, and do not require that user to have sudo access.
- Ensure that you are logged in to your Red Hat OpenShift cluster with `oc login` for any steps that use the Red Hat OpenShift command-line interface (CLI).
- If you require details about the permissions that the IBM Cloud Pak for AIOps operators need, see Permissions (IBM Cloud Pak for AIOps).
- A user with `cluster-admin` privileges is required for some of the installation operations.
- After installation, you cannot increase the size of your IBM Cloud Pak for AIOps deployment from a starter deployment to a larger production deployment.
- High availability is not supported for starter deployments.
- This starter installation uses the `OwnNamespace` installation mode, which is the recommended installation mode for IBM Cloud Pak for AIOps, to provide better resilience and performance. For more information, see Operator installation mode.
- If IBM Sales representatives and Business Partners supplied you with a custom profile ConfigMap to customize your deployment, then you must follow their instructions to apply it during installation. The custom profile cannot be applied after installation, and attempting to do so can break your IBM Cloud Pak for AIOps deployment. For more information about custom sizing, see Custom sizing.
- Ensure that Instana AutoTrace is disabled for the IBM Cloud Pak for AIOps namespace. For more information, see Instana AutoTrace causes pod eviction and prevents install and upgrade.
- Increase the ephemeral storage for the Flink pods. For more information, see Monitoring with Instana can cause pod eviction if there is insufficient ephemeral storage.
Prerequisites
Allow access to the following sites and ports:
| Site | Description |
|---|---|
| `icr.io` | Allow access to these hosts on port 443 to enable access to the IBM Cloud Container Registry and IBM Cloud Pak foundational services catalog source. |
| `dd1-icr.ibm-zh.com` | If you are located in China, also allow access to these hosts on port 443. |
| `github.com` | GitHub houses IBM Cloud Pak tools and scripts. |
| `redhat.com` | Red Hat OpenShift registries that are required for Red Hat OpenShift, and for Red Hat OpenShift upgrades. |
For more information, see Configuring your firewall for OpenShift Container Platform.
You must be able to download content from GitHub. If you are not able to, verify that your network or proxy settings permit access to GitHub's file server domain and if needed contact your network administrator to allow it.
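The firewall requirements above can be spot-checked from a shell before you start. The following probe is illustrative only (it is not part of the product tooling) and assumes `curl` is available on your workstation:

```shell
# Illustrative probe: report whether each required host answers HTTPS on
# port 443 within a 5-second timeout. It never fails the shell; it only
# prints a per-host status so you can spot firewall or proxy problems early.
probe() {
  if curl -sI --connect-timeout 5 "https://$1" >/dev/null 2>&1; then
    echo "$1: reachable"
  else
    echo "$1: NOT reachable - check your firewall or proxy settings"
  fi
}

for host in icr.io github.com redhat.com; do
  probe "$host"
done
```

If any host is reported as not reachable, contact your network administrator before continuing.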
1. Install and configure Red Hat OpenShift Container Platform
IBM Cloud Pak for AIOps requires Red Hat OpenShift to be installed and running. You must have administrative access to your Red Hat OpenShift cluster.
For more information about the supported versions of Red Hat OpenShift, see Supported Red Hat OpenShift Container Platform versions.
- Before installing Red Hat OpenShift, work with your system administrator to verify that the nodes that are intended for the installation have their system clocks synchronized with an NTP server, or are at least manually set to be within a few seconds of one another. If you are installing on a cloud platform, this is usually already configured.
- Install Red Hat OpenShift by using the instructions in the Red Hat OpenShift documentation.
- Install the Red Hat OpenShift (`oc`) command-line interface (CLI) on your cluster's boot node and run `oc login`. For more information, see Getting started with the Red Hat OpenShift CLI.
- To function properly, distributed platforms and applications such as Red Hat OpenShift and IBM Cloud Pak for AIOps require the system clocks of all of their nodes to be highly synchronized with one another. Discrepancies between the clocks can cause IBM Cloud Pak for AIOps to experience operational issues. All Red Hat OpenShift nodes in the cluster must have access to an NTP server to synchronize their clocks. For more information, see the Red Hat OpenShift documentation.
- Optionally, configure a custom certificate for IBM Cloud Pak for AIOps to use. You can use either of the following methods:
  - Configure a custom certificate for the Red Hat OpenShift cluster. Follow the instructions in the Red Hat OpenShift documentation Replacing the default ingress certificate.
  - If you want to use a custom certificate for the IBM Cloud Pak for AIOps console only, then after installation is complete follow the instructions in Using a custom certificate.
2. Configure storage
Configure your own storage for use with IBM Cloud Pak for AIOps. For more information, see Storage.
You specify your storage classes, such as the `RWX-storage-class-name`, in the installation instance CR YAML file. This configuration cannot be changed after IBM Cloud Pak for AIOps is installed.

Configure environment variables for storage, depending on whether you have a deployment that uses hybrid storage.
Hybrid storage:
export IR_CORE_POSTGRES_LOCAL_STORAGE_CLASS=<ir_core_postgres_local_storage_class>
export TOPOLOGY_POSTGRES_LOCAL_STORAGE_CLASS=<topology_postgres_local_storage_class>
export KAFKA_LOCAL_STORAGE_CLASS=<kafka_local_storage_class>
- `<ir_core_postgres_local_storage_class>` is the LVM storage class that you configured for IR Core Postgres in hybrid storage, for example `lvms-vg-ir-core-postgres-1`.
- `<topology_postgres_local_storage_class>` is the LVM storage class that you configured for Topology Postgres in hybrid storage, for example `lvms-vg-topology-postgres-1`.
- `<kafka_local_storage_class>` is the LVM storage class that you configured for Kafka in hybrid storage, for example `lvms-vg-kafka-1`.
Non-hybrid storage:

```shell
export IR_CORE_POSTGRES_LOCAL_STORAGE_CLASS=''
export TOPOLOGY_POSTGRES_LOCAL_STORAGE_CLASS=''
export KAFKA_LOCAL_STORAGE_CLASS=''
```
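As an optional sanity check, the three variables should be either all set (hybrid storage) or all empty strings (non-hybrid storage). A minimal sketch:

```shell
# Optional sanity check (sketch): count how many of the three storage
# variables are non-empty. For hybrid storage all three must name a storage
# class; for non-hybrid storage all three must be empty strings.
count=0
for v in "${IR_CORE_POSTGRES_LOCAL_STORAGE_CLASS:-}" \
         "${TOPOLOGY_POSTGRES_LOCAL_STORAGE_CLASS:-}" \
         "${KAFKA_LOCAL_STORAGE_CLASS:-}"; do
  [ -n "$v" ] && count=$((count + 1))
done

if [ "$count" -eq 0 ] || [ "$count" -eq 3 ]; then
  echo "storage variables look consistent"
else
  echo "WARNING: set all three variables (hybrid), or all three to '' (non-hybrid)"
fi
```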
3. Create a custom project (namespace)
- Export an environment variable for your namespace.

  ```shell
  export PROJECT_CP4AIOPS=cp4aiops
  ```

  If you are deploying more than one instance of IBM Cloud Pak for AIOps in the same cluster, then each instance must have a different value for `PROJECT_CP4AIOPS`. If you already have a deployment of IBM Cloud Pak for AIOps in the namespace `cp4aiops`, then export `PROJECT_CP4AIOPS` with a different value, such as `cp4aiops2`. For more information, see Deploying multiple instances on a single cluster.

- Run the following command to create a project (namespace) to deploy IBM Cloud Pak for AIOps into.

  ```shell
  oc create namespace "${PROJECT_CP4AIOPS}"
  ```

- Add a node-selector annotation to the IBM Cloud Pak for AIOps namespace.

  The annotation ensures that on a multi-architecture Red Hat OpenShift cluster, IBM Cloud Pak for AIOps workloads are only scheduled on nodes that have an architecture that IBM Cloud Pak for AIOps supports. Failure to do so might result in the scheduling and subsequent failure of IBM Cloud Pak for AIOps workloads on Red Hat OpenShift nodes that have a nonsupported architecture. For more information about supported architectures, see Supported platforms.

  Run one of the following commands.

  If you want to use amd64 architecture:

  ```shell
  oc annotate namespace "${PROJECT_CP4AIOPS}" openshift.io/node-selector="kubernetes.io/arch=amd64"
  ```

  If you want to use s390x architecture:

  ```shell
  oc annotate namespace "${PROJECT_CP4AIOPS}" openshift.io/node-selector="kubernetes.io/arch=s390x"
  ```
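You can optionally read the annotation back to confirm that it was applied. This is a hedged sketch that assumes the `oc` CLI is installed and logged in; it prints a hint and skips cleanly otherwise:

```shell
# Optional verification (sketch): print the node-selector annotation back
# from the namespace. Dots in the annotation key must be escaped in the
# JSONPath expression.
ns="${PROJECT_CP4AIOPS:-cp4aiops}"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc get namespace "$ns" \
    -o jsonpath='{.metadata.annotations.openshift\.io/node-selector}'
  echo
else
  echo "oc CLI not available or not logged in; run this check from your boot node"
fi
```

The command prints a value such as `kubernetes.io/arch=amd64` when the annotation is present.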
4. Create an OperatorGroup in your custom project (namespace)
Create the Operator group by running the following command:
```shell
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cp4aiops-operator-group
  namespace: ${PROJECT_CP4AIOPS}
spec:
  targetNamespaces:
    - "${PROJECT_CP4AIOPS}"
EOF
```
5. Create the entitlement key pull secret
- Log in to MyIBM Container Software Library with the IBMid and password details that are associated with the entitled software.
- In the Entitlement keys section, select Copy to copy your entitlement key to the clipboard.
- From the Red Hat OpenShift CLI, run the following command:

  ```shell
  oc create secret docker-registry ibm-entitlement-key \
    --docker-username=cp \
    --docker-password=<entitlement-key> \
    --docker-server=cp.icr.io \
    --namespace=${PROJECT_CP4AIOPS}
  ```

  Where `<entitlement-key>` is the entitlement key that you copied in the previous step.
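Optionally confirm that the pull secret was created with the expected type. A sketch, assuming `oc` is installed and logged in; it skips cleanly otherwise:

```shell
# Optional verification (sketch): confirm the entitlement pull secret exists
# and is of type kubernetes.io/dockerconfigjson.
ns="${PROJECT_CP4AIOPS:-cp4aiops}"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc get secret ibm-entitlement-key -n "$ns" -o jsonpath='{.type}'
  echo
else
  echo "oc CLI not available or not logged in; skipping secret check"
fi
```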
6. Configure usage data collection
To help the development of IBM Cloud Pak for AIOps, daily aggregated usage data is collected to analyze how IBM Cloud Pak for AIOps is used. The usage data is collected by the cp4waiops-metricsprocessor pod, and is sent to and stored in IBM controlled GDPR-compliant systems. The collection of usage data is enabled by default, but can be disabled. For transparency, the cp4waiops-metricsprocessor pod's logs contain all the information that is collected. The usage data that is collected is numeric, or is about the deployment type and platform. It does not include email addresses, passwords, or specific details. Only the following data is collected:
- Current number of applications
- Current number of alerts (all severities aggregated)
- Current number of incidents (all priorities aggregated)
- Current number of policies (includes predefined and user created)
- Current number of runbooks run since installation
- Current number of integrations of each type (for example, ServiceNow, Instana, Falcon LogScale)
- Secure tunnel enablement: whether connection (which controls whether you can create a secure tunnel) is enabled in the Installation custom resource
- Deployment type: base deployment or extended deployment
- Deployment platform: Red Hat OpenShift Container Platform or Linux
Configuring the collection of usage data
If you do not want to disable the collection of usage data, complete the following steps.
- Set environment variables.

  ```shell
  export CUSTOMER_NAME=<your company name>
  export CUSTOMER_ICN=<your IBM Customer Number>
  export CUSTOMER_ENVIRONMENT=<trial or poc>
  ```

- Configure the usage data with your customer details.

  ```shell
  oc create secret generic aiops-metrics-processor -n ${PROJECT_CP4AIOPS} \
    --from-literal=customerName=${CUSTOMER_NAME} \
    --from-literal=customerICN=${CUSTOMER_ICN} \
    --from-literal=environment=${CUSTOMER_ENVIRONMENT}
  ```

- If you have a firewall enabled, ensure that outbound traffic to https://api.segment.io is allowed.
Usage data without your customer details is still collected even if you do not create this secret. If you do not want any usage data collected, then you must run the command given in Disabling the collection of usage data.
Disabling the collection of usage data
If you want to disable the collection of usage data, run the following commands.
- Set environment variables.

  ```shell
  export CUSTOMER_NAME=<your company name>
  export CUSTOMER_ICN=<your IBM Customer Number>
  export CUSTOMER_ENVIRONMENT=<trial or poc>
  ```

- Disable usage data collection.

  ```shell
  oc create secret generic aiops-metrics-processor -n ${PROJECT_CP4AIOPS} \
    --from-literal=customerName=${CUSTOMER_NAME} \
    --from-literal=customerICN=${CUSTOMER_ICN} \
    --from-literal=environment=${CUSTOMER_ENVIRONMENT} \
    --from-literal=enableCollection=false
  ```
You can update your usage data collection preferences after installation. For more information, see Updating usage data collection preferences.
7. Create the catalog sources
- Run the following command to create the catalog sources for IBM Cloud Pak for AIOps and IBM Cloud Pak foundational services Cert Manager.

  ```shell
  cat << EOF | oc apply -f -
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: ibm-aiops-catalog
    namespace: ${PROJECT_CP4AIOPS}
  spec:
    displayName: ibm-aiops-catalog
    publisher: IBM Content
    sourceType: grpc
    image: icr.io/cpopen/ibm-aiops-catalog@sha256:294adebdcbfb1dec82d598b4b8439c40ea51e308548207a537ff69bfdca75701
    grpcPodConfig:
      securityContextConfig: restricted
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: ibm-cert-manager-catalog
    namespace: openshift-marketplace
  spec:
    displayName: ibm-cert-manager
    publisher: IBM
    sourceType: grpc
    image: icr.io/cpopen/ibm-cert-manager-operator-catalog
  EOF
  ```

- Verify that the ibm-aiops-catalog and ibm-cert-manager-catalog `CatalogSource` objects are in the output that is returned by the following commands:

  ```shell
  oc get CatalogSources -n openshift-marketplace
  oc get CatalogSource -n ${PROJECT_CP4AIOPS}
  ```

  Example output:

  ```
  $ oc get CatalogSources -n openshift-marketplace
  NAME                       DISPLAY            TYPE   PUBLISHER   AGE
  ibm-cert-manager-catalog   ibm-cert-manager   grpc   IBM         2m

  $ oc get CatalogSource -n cp4aiops
  NAME                DISPLAY             TYPE   PUBLISHER   AGE
  ibm-aiops-catalog   ibm-aiops-catalog   grpc   IBM         2m
  ```
8. Install Cert Manager
Skip this step if you already have a certificate manager installed on the Red Hat OpenShift cluster that you are installing IBM Cloud Pak for AIOps on. If you do not have a certificate manager then you must install one.
The IBM Cloud Pak foundational services Cert Manager is recommended. For more information about IBM Cloud Pak foundational services Cert Manager hardware requirements, see IBM Certificate Manager (cert-manager) hardware requirements in the IBM Cloud Pak foundational services documentation.
The Red Hat OpenShift Cert Manager is also supported. For more information, see cert-manager Operator for Red Hat OpenShift in the Red Hat OpenShift documentation.
The IBM Cloud Pak foundational services Cert Manager can be installed with the following steps.
- Run the following command to create the resource definitions that you need:

  ```shell
  cat << EOF | oc apply -f -
  apiVersion: v1
  kind: Namespace
  metadata:
    name: ibm-cert-manager
  ---
  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: ibm-cert-manager-operator-group
    namespace: ibm-cert-manager
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: ibm-cert-manager-operator
    namespace: ibm-cert-manager
  spec:
    channel: v4.2
    installPlanApproval: Automatic
    name: ibm-cert-manager-operator
    source: ibm-cert-manager-catalog
    sourceNamespace: openshift-marketplace
  EOF
  ```

- Run the following command to ensure that the IBM Cloud Pak foundational services Cert Manager pods have a STATUS of Running before proceeding to the next step.

  ```shell
  oc -n ibm-cert-manager get pods
  ```

  Example output for a successful IBM Cloud Pak foundational services Cert Manager installation:

  ```
  NAME                                        READY   STATUS    RESTARTS   AGE
  cert-manager-cainjector-674854c49d-vstq4    1/1     Running   0          8d
  cert-manager-controller-646d4bd6fd-zwmqm    1/1     Running   0          8d
  cert-manager-webhook-8598787c8-s4lkt        1/1     Running   0          8d
  ibm-cert-manager-operator-c96957695-dkxnm   1/1     Running   0          8d
  ```
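Instead of re-running `oc get pods` by hand, you can block until the pods are Ready. A sketch that uses `oc wait` with a 5-minute timeout, assuming `oc` is installed and logged in; it skips cleanly otherwise:

```shell
# Optional (sketch): block until every Cert Manager pod reports the Ready
# condition, with a 5-minute timeout, instead of polling `oc get pods`.
cm_ns="ibm-cert-manager"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc wait --for=condition=Ready pod --all -n "$cm_ns" --timeout=300s
else
  echo "oc CLI not available or not logged in; skipping wait"
fi
```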
9. Verify cluster readiness
Run the following procedure to verify whether your environment is correctly set up for an IBM Cloud Pak for AIOps installation.
- Download the `aiopsctl` command line interface tool.

  ```shell
  AIOPSCTL_TAR=<aiopsctl_tar>
  AIOPSCTL_INSTALL_URL="https://github.com/IBM/aiopsctl/releases/download/v4.13.0/${AIOPSCTL_TAR}"
  curl -LO "${AIOPSCTL_INSTALL_URL}"
  tar xf "${AIOPSCTL_TAR}"
  mv aiopsctl /usr/local/bin/aiopsctl
  ```

  Where `<aiopsctl_tar>` is the operating system-specific file that you require from the following set: `aiopsctl-linux_s390x.tar.gz`, `aiopsctl-linux_arm64.tar.gz`, `aiopsctl-linux_amd64.tar.gz`, `aiopsctl-darwin_amd64.tar.gz`, `aiopsctl-darwin_arm64.tar.gz`.
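If you are unsure which tarball to pick, the file name can be derived from `uname`. This helper is hypothetical (it is not part of the product tooling) and maps common hardware names onto the release naming scheme listed above:

```shell
# Hypothetical helper: derive the <aiopsctl_tar> file name for this machine.
# uname -s gives Linux/Darwin; uname -m gives x86_64, aarch64, arm64, or s390x.
OS=$(uname -s | tr '[:upper:]' '[:lower:]')   # linux or darwin
ARCH=$(uname -m)
case "$ARCH" in
  x86_64)  ARCH=amd64 ;;   # release naming uses amd64
  aarch64) ARCH=arm64 ;;   # Linux arm64 machines report aarch64
esac
AIOPSCTL_TAR="aiopsctl-${OS}_${ARCH}.tar.gz"
echo "$AIOPSCTL_TAR"
```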
- Run the following command to run the precheck:

  ```shell
  aiopsctl server precheck -n ${PROJECT_CP4AIOPS}
  ```

  - If you have a multi-zone cluster, then also specify the `-m` flag. This flag enables extra checks to help ensure that the cluster has sufficient resources to withstand a zone outage, and that the zones are well balanced for memory and CPU. For example, `aiopsctl server precheck -n cp4aiops -m`.
  - If you are using hybrid storage, then also specify the `--hybrid-storage` flag. This flag enables extra checks to help ensure that sufficient local storage is configured. For example, `aiopsctl server precheck -n cp4aiops --hybrid-storage`.

  Example output:

  ```
  # aiopsctl server precheck -n cp4aiops
  o- [25 Mar 26 16:08 GMT] Running precheck tool
  o- [25 Mar 26 16:08 GMT] Checking hardware resources...
     Total Node Count (Available Schedulable / Required): 6/6
     Production (HA) Base CPU (vCPU): 93 / 143
     Production (HA) Base Memory (GB): 191 / 331
     Production (HA) Extended CPU (vCPU): 93 / 170
     Production (HA) Extended Memory (GB): 191 / 391
     Total Node Count (Available Schedulable / Required): 6/3
     Starter (Non-HA) Base CPU (vCPU): 93 / 47
     Starter (Non-HA) Base Memory (GB): 191 / 123
     Starter (Non-HA) Extended CPU (vCPU): 93 / 55
     Starter (Non-HA) Extended Memory (GB): 191 / 136
     You have enough resources for 1 instance(s) of small Base install
     You have enough resources for 1 instance(s) of small Extended install
     minimum hardware requirements met for starter size, but not production size
  o- [25 Mar 26 16:08 GMT] Checking storage...
     Required StorageClasses found for provider Red Hat OpenShift Data Foundation: ocs-storagecluster-cephfs, ocs-storagecluster-ceph-rbd
     Checking if PVC can bind to supported storage class
     Verifying storage provider functionality: elapsed time [2s], estimated time [1m0s]
     Verifying storage provider functionality: elapsed time [4s], estimated time [1m0s]
     Storage check passed
  o- [25 Mar 26 16:08 GMT] Checking OCP Version...
     Cluster meets OCP version requirements
  o- [25 Mar 26 16:08 GMT] Checking if Cert Manager is present...
     Certificate CustomResourceDefinition Found
  o- [25 Mar 26 16:08 GMT] Checking if certs will expire within 4 days...
  o- [25 Mar 26 16:08 GMT] Precheck Summary Results

  Check                           Result
  Meets Hardware Requirements     Passed
  No Storage Issues               Passed
  Meets OCP Version Requirement   Passed
  Cert Mgr Operator Exists        Passed
  Certificates Valid              Passed
  ```

  - The `"You have enough resources for <...>"` statements denote the number of instances that the cluster can support, and include any existing instances. The number that is given is for the number of base instances or extended instances. It does not mean that the cluster can support the stated number of base instances and the stated number of extended instances.
  - If you are not using IBM Cloud Pak foundational services Cert Manager, then ignore any errors that are returned by the Cert Manager check.
10. Install the IBM Cloud Pak for AIOps operator
For more information about the operators that are installed with IBM Cloud Pak for AIOps, see Operator Details.
Run the following command:
```shell
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-aiops-orchestrator
  namespace: ${PROJECT_CP4AIOPS}
spec:
  channel: v4.13
  installPlanApproval: Automatic
  name: ibm-aiops-orchestrator
  source: ibm-aiops-catalog
  sourceNamespace: ${PROJECT_CP4AIOPS}
EOF
```
`installPlanApproval` must not be changed to `Manual`. Manual approval, which requires the manual review and approval of the generated InstallPlans, is not supported. Incorrect timing or ordering of manual approvals of InstallPlans can result in a failed installation.
After a few minutes, the IBM Cloud Pak for AIOps operator is installed. Verify that all of the components have a state of Succeeded by running the following command:
```shell
oc get csv -n ${PROJECT_CP4AIOPS} | egrep "ibm-aiops-orchestrator"
```

Example output:

```
$ oc get csv -n ${PROJECT_CP4AIOPS} | egrep "ibm-aiops-orchestrator"
ibm-aiops-orchestrator.v4.13.0   IBM Cloud Pak for AIOps   4.13.0   Succeeded
```
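Rather than re-running the check by hand, you can poll until the CSV reports Succeeded. A sketch under the assumption that the phase is the last column of the `oc get csv` row; it skips cleanly when no cluster is available:

```shell
# Optional sketch: poll the orchestrator CSV every 20 seconds for up to
# 10 minutes, stopping as soon as its phase is Succeeded.
ns="${PROJECT_CP4AIOPS:-cp4aiops}"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  for attempt in $(seq 1 30); do
    phase=$(oc get csv -n "$ns" 2>/dev/null \
      | awk '/ibm-aiops-orchestrator/ {print $NF}')
    if [ "$phase" = "Succeeded" ]; then
      echo "operator ready"
      break
    fi
    echo "attempt ${attempt}: phase is '${phase:-pending}', retrying..."
    sleep 20
  done
else
  echo "oc CLI not available or not logged in; skipping poll"
fi
```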
11. Install IBM Cloud Pak for AIOps
Run the following command to create an instance of the IBM Cloud Pak for AIOps custom resource called ibm-cp-aiops.
The pakModules `aiopsFoundation`, `applicationManager`, and `aiManager` must be enabled as in the following YAML. Do not change these values to `false`.
```shell
cat << EOF | oc apply -f -
apiVersion: orchestrator.aiops.ibm.com/v1alpha1
kind: Installation
metadata:
  name: ibm-cp-aiops
  namespace: ${PROJECT_CP4AIOPS}
spec:
  imagePullSecret: ibm-entitlement-key
  license:
    accept: <license_acceptance>
  pakModules:
    - name: aiopsFoundation
      enabled: true
    - name: applicationManager
      enabled: true
    - name: aiManager
      enabled: true
    - name: connection
      enabled: false
    - name: logAnomalyDetection
      enabled: <enable_log_anomaly_detection>
  size: small
  storage:
    aiops-ir-core-postgres:
      storageClass: ${IR_CORE_POSTGRES_LOCAL_STORAGE_CLASS}
    aiops-ir-core-postgres-wal:
      storageClass: ${IR_CORE_POSTGRES_LOCAL_STORAGE_CLASS}
    aiops-topology-postgres:
      storageClass: ${TOPOLOGY_POSTGRES_LOCAL_STORAGE_CLASS}
    aiops-topology-postgres-wal:
      storageClass: ${TOPOLOGY_POSTGRES_LOCAL_STORAGE_CLASS}
    data-iaf-system-kafka:
      storageClass: ${KAFKA_LOCAL_STORAGE_CLASS}
  storageClass: <storage_class_name>
  storageClassLargeBlock: <large_block_storage_class_name>
  topologyModel: application
EOF
```
- `<license_acceptance>` is set to `true` to agree to the license terms.
- `<enable_log_anomaly_detection>` is set to `true` to install an extended deployment with log anomaly detection and ticket analysis capabilities enabled. Set it to `false` to install a base deployment without log anomaly detection and ticket analysis capabilities. For more information, see Incremental adoption.
- `<storage_class_name>` and `<large_block_storage_class_name>` are the storage classes that you want to use. For more information about storage, see Storage class summary table.
12. Verify your installation
Run the following command to check that the PHASE of your installation is Updating.

```shell
oc get installations.orchestrator.aiops.ibm.com -n ${PROJECT_CP4AIOPS}
```

Example output:

```
NAME           PHASE      LICENSE    STORAGECLASS   STORAGECLASSLARGEBLOCK   AGE
ibm-cp-aiops   Updating   Accepted   rook-cephfs    rook-ceph-block          3m
```
It takes around 60-90 minutes for the installation to complete (subject to the speed with which images can be pulled). When installation is complete and successful, the PHASE of your installation changes to Running. If your installation phase does not change to Running, then use the following command to find out which components are not ready:
```shell
oc get installation.orchestrator.aiops.ibm.com -o yaml -n ${PROJECT_CP4AIOPS} | grep 'Not Ready'
```

Example output:

```
lifecycleservice: Not Ready
zenservice: Not Ready
```
To see details about why a component is Not Ready, run the following command, where `<component>` is the component that is not ready, for example `zenservice`.

```shell
oc get <component> -o yaml -n ${PROJECT_CP4AIOPS}
```
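Because the installation can take 60-90 minutes, you may prefer to poll rather than check manually. This is a hedged sketch: it assumes the PHASE column maps to `.status.phase` on the Installation resource, and it skips cleanly when no cluster is available:

```shell
# Hedged sketch: poll the Installation resource once a minute for up to
# 90 minutes, stopping as soon as its phase reaches Running.
ns="${PROJECT_CP4AIOPS:-cp4aiops}"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  for attempt in $(seq 1 90); do
    phase=$(oc get installations.orchestrator.aiops.ibm.com -n "$ns" \
      -o jsonpath='{.items[0].status.phase}' 2>/dev/null)
    if [ "$phase" = "Running" ]; then
      echo "installation is Running"
      break
    fi
    echo "attempt ${attempt}: phase is '${phase:-unknown}', waiting 60s..."
    sleep 60
  done
else
  echo "oc CLI not available or not logged in; skipping poll"
fi
```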
(Optional) You can also download and run a status checker script to see information about the status of your deployment. For more information about how to download and run the script, see github.com/IBM.
If the installation fails, or is not complete and is not progressing, then see Troubleshooting installation and upgrade and Known Issues to help you identify any installation problems.
13. Verify local storage
If your IBM Cloud Pak for AIOps deployment does not use hybrid storage then skip this step.
If your IBM Cloud Pak for AIOps deployment uses hybrid storage, verify that local storage has been correctly created for Postgres and Kafka.
- Verify IR Core Postgres local storage.
  - Run the following command to verify that IR Core Postgres pods are scheduled on the nodes that you configured for Postgres local storage.

    ```shell
    oc get pod -l "k8s.enterprisedb.io/cluster=aiops-ir-core-postgres" -o wide -n ${PROJECT_CP4AIOPS}
    ```

    Example output, where Postgres pods are scheduled on nodes named worker9, worker10, and worker11:

    ```
    NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE                   NOMINATED NODE   READINESS GATES
    aiops-ir-core-postgres-1   1/1     Running   0          25m   10.254.68.38   worker11.example.com   <none>           <none>
    aiops-ir-core-postgres-2   1/1     Running   0          23m   10.254.60.30   worker9.example.com    <none>           <none>
    aiops-ir-core-postgres-3   1/1     Running   0          22m   10.254.36.87   worker10.example.com   <none>           <none>
    ```

    Keep a copy of this output that maps PVCs to nodes so that you have this information available if you need to restore your deployment. Add this information to the file that you created for step 2.4 of Hybrid storage.

  - Run the following command to verify that logical volumes have been created for IR Core Postgres.

    ```shell
    oc get pvc -l "k8s.enterprisedb.io/cluster=aiops-ir-core-postgres" -o wide -n ${PROJECT_CP4AIOPS}
    ```

    Example output:

    ```
    NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                 VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
    aiops-ir-core-postgres-1       Bound    pvc-d4e70a18-d791-4d6f-9c52-af8235417265   50Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 38m   Filesystem
    aiops-ir-core-postgres-1-wal   Bound    pvc-271f246d-8726-4823-9ab7-26fd0622f50b   10Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 38m   Filesystem
    aiops-ir-core-postgres-2       Bound    pvc-f3e0c1f1-7e18-4d60-895e-623b9e9b4cc1   50Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 39m   Filesystem
    aiops-ir-core-postgres-2-wal   Bound    pvc-362e47cd-4124-4f95-995f-55e216b804b4   10Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 39m   Filesystem
    aiops-ir-core-postgres-3       Bound    pvc-b08d6313-d262-4e9a-baf7-09b6fdbacf1c   50Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 39m   Filesystem
    aiops-ir-core-postgres-3-wal   Bound    pvc-6bbb154e-1845-44b5-a92c-00f22b41e1c5   10Gi       RWO            lvms-vg-ir-core-postgres-1   <unset>                 39m   Filesystem
    ```
- Verify Topology Postgres local storage.
  - Run the following command to verify that the Topology Postgres pods are scheduled on the nodes that you configured for Postgres local storage.

    ```shell
    oc get pod -l "k8s.enterprisedb.io/cluster=aiops-topology-postgres" -o wide -n ${PROJECT_CP4AIOPS}
    ```

    Example output, where Postgres pods are scheduled on nodes named worker12, worker13, and worker14:

    ```
    NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE                   NOMINATED NODE   READINESS GATES
    aiops-topology-postgres-1   1/1     Running   0          83m   10.254.52.70   worker12.example.com   <none>           <none>
    aiops-topology-postgres-2   1/1     Running   0          93m   10.254.56.20   worker13.example.com   <none>           <none>
    aiops-topology-postgres-3   1/1     Running   0          86m   10.254.12.52   worker14.example.com   <none>           <none>
    ```

    Keep a copy of this output that maps PVCs to nodes so that you have this information available if you need to restore your deployment. Add this information to the file that you created for step 2.4 of Hybrid storage.

  - Run the following command to verify that logical volumes have been created for Topology Postgres.

    ```shell
    oc get pvc -l "k8s.enterprisedb.io/cluster=aiops-topology-postgres" -o wide -n ${PROJECT_CP4AIOPS}
    ```

    Example output:

    ```
    NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
    aiops-topology-postgres-1       Bound    pvc-b2b29420-a0b8-469a-99e4-94b359487417   50Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    aiops-topology-postgres-1-wal   Bound    pvc-6bbb154e-1845-44b5-a92c-00f22b41e1c5   10Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    aiops-topology-postgres-2       Bound    pvc-3bb7fca3-f722-4bba-9ed9-ec2d46279223   50Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    aiops-topology-postgres-2-wal   Bound    pvc-5cbbe8e9-7472-4e70-826b-89e22bea4250   10Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    aiops-topology-postgres-3       Bound    pvc-d6de7642-dc24-42b9-bd87-11d7b882cfc6   50Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    aiops-topology-postgres-3-wal   Bound    pvc-5a76bd34-c570-4118-9316-33398cb83a16   10Gi       RWO            lvms-vg-topology-postgres-1   <unset>                 95m   Filesystem
    ```
- Verify Kafka local storage.
  - Run the following command to verify that Kafka pods are scheduled on the nodes that you configured for Kafka local storage.

    ```shell
    oc get pod -l ibmevents.ibm.com/name=iaf-system-kafka -o wide -n ${PROJECT_CP4AIOPS}
    ```

    Example output, where Kafka pods are scheduled on nodes named worker6, worker7, and worker8:

    ```
    NAME                 READY   STATUS    RESTARTS   AGE    IP             NODE                  NOMINATED NODE   READINESS GATES
    iaf-system-kafka-0   1/1     Running   0          118m   10.123.24.14   worker6.example.com   <none>           <none>
    iaf-system-kafka-1   1/1     Running   0          118m   10.123.28.12   worker7.example.com   <none>           <none>
    iaf-system-kafka-2   1/1     Running   0          118m   10.123.56.27   worker8.example.com   <none>           <none>
    ```

    Keep a copy of this output that maps PVCs to nodes so that you have this information available if you need to restore your deployment. Add this information to the file that you created for step 2.4 of Hybrid storage.

  - Run the following command to verify that logical volumes have been created for Kafka.

    ```shell
    oc get pvc -l ibmevents.ibm.com/name=iaf-system-kafka -n ${PROJECT_CP4AIOPS}
    ```

    Example output:

    ```
    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
    data-iaf-system-kafka-0   Bound    pvc-fc36a8c8-7b73-473b-86a4-d4686228d521   100Gi      RWO            lvms-vg-kafka-1   124m
    data-iaf-system-kafka-1   Bound    pvc-2f653468-95a0-4d5b-b698-64023e25f643   100Gi      RWO            lvms-vg-kafka-1   124m
    data-iaf-system-kafka-2   Bound    pvc-c477e794-c825-4b58-af68-8a40b290126d   100Gi      RWO            lvms-vg-kafka-1   124m
    ```
14. Log in to the IBM Cloud Pak for AIOps console
- Find the password for the `cpadmin` username by running the following command:

  ```shell
  oc -n ${PROJECT_CP4AIOPS} get secret platform-auth-idp-credentials -o jsonpath='{.data.admin_password}' | base64 -d
  ```

- Find the URL to access the IBM Cloud Pak for AIOps console with the following command.

  ```shell
  oc get route -n ${PROJECT_CP4AIOPS} cpd -o jsonpath='{.spec.host}'
  ```

  The following output is a sample output:

  ```
  cpd-cp4aiops.apps.mycluster.mydomain
  ```

  Based on the sample output, your console URL would be `https://cpd-cp4aiops.apps.mycluster.mydomain`.

- Enter the URL in your browser to open the IBM Cloud Pak for AIOps console and log in with a username of `cpadmin` and the password that you found in the previous step.
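The steps above can be combined into a single convenience sketch that prints the console URL and the `cpadmin` password together. It assumes `oc` is installed and logged in, and skips cleanly otherwise:

```shell
# Convenience sketch: print the console URL and cpadmin password in one go,
# using the same route and secret as the individual steps above.
ns="${PROJECT_CP4AIOPS:-cp4aiops}"
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  host=$(oc get route -n "$ns" cpd -o jsonpath='{.spec.host}')
  pass=$(oc -n "$ns" get secret platform-auth-idp-credentials \
    -o jsonpath='{.data.admin_password}' | base64 -d)
  echo "Console URL: https://${host}"
  echo "Password for cpadmin: ${pass}"
else
  echo "oc CLI not available or not logged in; skipping"
fi
```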
What to do next
- Define integrations and applications. For more information, see Defining.
- You can integrate with IBM Cognos Analytics. For more information, see Integrating IBM Cognos Analytics with IBM Cloud Pak for AIOps.
- If you have an existing on-premises IBM Tivoli Netcool/OMNIbus deployment, then you can connect it to IBM Cloud Pak for AIOps through an integration. For more information, see Creating IBM Tivoli Netcool/OMNIbus integrations.
- If you have an existing on-premises IBM Tivoli Netcool/Impact deployment, then you can connect it to IBM Cloud Pak for AIOps through an integration. For more information, see Creating IBM Tivoli Netcool/Impact integrations.
- For more information about health checks and monitoring, see Health checks and monitoring. It is recommended that you implement self-monitoring checks and self-protection to improve the stability of your deployment. For more information, see Configuring and enabling OpenShift Container Platform monitoring.