Configuring the Cloud system

  1. Run the following commands to expose the ObjectServer services as node ports.

    In the following manifests, the app.kubernetes.io/instance label is set to aiops. If the name of your IBM Cloud Pak® for AIOps instance is different, replace aiops with that name.

    For more information about the configured IBM Cloud Pak for AIOps ports, see Ports.

    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: ncoprimary
        app.kubernetes.io/instance: aiops
        app.kubernetes.io/managed-by: ir-core-operator
        app.kubernetes.io/name: IssueResolutionCore
      name: aiops-ir-core-ncoprimary-np
    spec:
      ports:
      - name: tds
        port: 4100
        protocol: TCP
        targetPort: 4100
      selector:
        app.kubernetes.io/component: ncoprimary
        app.kubernetes.io/instance: aiops
        app.kubernetes.io/managed-by: ir-core-operator
        app.kubernetes.io/name: IssueResolutionCore
      sessionAffinity: None
      type: NodePort
    EOF
    
    cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: ncobackup
        app.kubernetes.io/instance: aiops
        app.kubernetes.io/managed-by: ir-core-operator
        app.kubernetes.io/name: IssueResolutionCore
      name: aiops-ir-core-ncobackup-np
    spec:
      ports:
      - name: tds
        port: 4100
        protocol: TCP
        targetPort: 4100
      selector:
        app.kubernetes.io/component: ncobackup
        app.kubernetes.io/instance: aiops
        app.kubernetes.io/managed-by: ir-core-operator
        app.kubernetes.io/name: IssueResolutionCore
      sessionAffinity: None
      type: NodePort
    EOF
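
    To confirm that both services were created and to see their assigned node ports at a glance, you can list them by name:

```shell
# List the two NodePort services created above; the PORT(S) column shows
# the internal port and the assigned node port (for example, 4100:31234/TCP)
oc get svc aiops-ir-core-ncoprimary-np aiops-ir-core-ncobackup-np
```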
    
  2. Get the service node port for the primary:

    NCOPRIMARY_NODEPORT=$(oc get svc aiops-ir-core-ncoprimary-np -o jsonpath='{.spec.ports[?(@.port==4100)].nodePort}')
    
  3. Get the service node port for the backup:

    NCOBACKUP_NODEPORT=$(oc get svc aiops-ir-core-ncobackup-np -o jsonpath='{.spec.ports[?(@.port==4100)].nodePort}')
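
    A quick sanity check that both variables were populated can save debugging later; empty output usually means that the service name or port in the jsonpath query did not match:

```shell
# Print the retrieved node ports; "<not set>" indicates that the lookup failed
echo "primary node port: ${NCOPRIMARY_NODEPORT:-<not set>}"
echo "backup node port:  ${NCOBACKUP_NODEPORT:-<not set>}"
```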
    
  4. Get the node internal IP addresses:

    oc get node -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
    

    The command returns the master node and worker node internal IP addresses. To confirm which IP address belongs to which node, run the following command.

    oc get nodes -o wide | grep <NAME_OF_MASTER_OR_WORKER_NODE>
    

    Where <NAME_OF_MASTER_OR_WORKER_NODE> is the name of the master node or worker node. For example, the name can be master1 or worker1.
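
    Alternatively, a single jsonpath query prints every node name next to its internal IP address, which avoids running grep once per node:

```shell
# Print "<node-name> <internal-ip>" for every node in the cluster
oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```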

  5. Depending on the configuration of your cluster, you might need to add network policies to allow network traffic from outside the cluster. To test connectivity, use the following commands. If you are using a proxy such as HAProxy (see step 7), run the test on the host that runs the proxy. Otherwise, run the test on the host that runs your on-premises system. Each command should output CONNECTED, followed by a certificate chain whose first subject contains ncoprimary or ncobackup.

    openssl s_client -connect <WORKER0_IP>:${NCOPRIMARY_NODEPORT} < /dev/null
    openssl s_client -connect <WORKER0_IP>:${NCOBACKUP_NODEPORT} < /dev/null
    

    Where

    • <WORKER0_IP> is a worker node IP address from the output of the node internal IP addresses command in step 4.
    • ${NCOPRIMARY_NODEPORT} is the service node port for the primary ObjectServer.
    • ${NCOBACKUP_NODEPORT} is the service node port for the backup ObjectServer.
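
    The two checks can also be combined into a short loop that prints only the certificate subject, which makes the expected ncoprimary or ncobackup value easier to spot. This sketch assumes that WORKER0_IP has been exported with the worker IP address from step 4:

```shell
# For each ObjectServer node port, extract the subject of the served
# certificate; a healthy endpoint shows ncoprimary or ncobackup here
for port in "${NCOPRIMARY_NODEPORT}" "${NCOBACKUP_NODEPORT}"; do
  openssl s_client -connect "${WORKER0_IP}:${port}" < /dev/null 2>/dev/null \
    | openssl x509 -noout -subject
done
```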
  6. If the test in step 5 fails, you might need to add network policies. The ingress "from" rules are specific to your cluster and to the clients or proxies that must connect to the ObjectServers. The remainder of each policy looks as follows:

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ncoprimary-external-ingress
      namespace: <aiops_namespace>
    spec:
      ingress:
      - from:
           # Add ingress rules required for your cluster and on-premises system here
        ports:
        - port: 4100
          protocol: TCP
      podSelector:
        matchLabels:
          app.kubernetes.io/component: ncoprimary
          app.kubernetes.io/instance: aiops
      policyTypes:
      - Ingress
    ...
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ncobackup-external-ingress
      namespace: <aiops_namespace>
    spec:
      ingress:
      - from:
           # Add ingress rules required for your cluster and on-premises system here
        ports:
        - port: 4100
          protocol: TCP
      podSelector:
        matchLabels:
          app.kubernetes.io/component: ncobackup
          app.kubernetes.io/instance: aiops
      policyTypes:
      - Ingress
    ...
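
    As an illustration only, a "from" rule that admits a single external proxy or on-premises host can use an ipBlock selector. The CIDR below is a documentation placeholder; substitute the address of your own proxy or on-premises system:

```yaml
      ingress:
      - from:
        # Example only: replace 203.0.113.10/32 with the address of your
        # proxy or on-premises system
        - ipBlock:
            cidr: 203.0.113.10/32
        ports:
        - port: 4100
          protocol: TCP
```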
    
  7. Next, update any proxies and firewalls to ensure that the exposed node ports are reachable. For example, if you have an HAProxy instance in front of your Red Hat OpenShift Container Platform cluster, you need to update only the HAProxy configuration that is running on the main control node. The proxy then routes traffic on those ports to the master and worker nodes in the cluster. When you are logged in to that host, complete the following steps:

    1. Run the command:

      systemctl stop haproxy.service
      
    2. Add the following entries to /etc/haproxy/haproxy.cfg:

      frontend ingress-ncoprimary
          bind *:<NCOPRIMARY_NODEPORT>
          default_backend ingress-ncoprimary
          mode tcp
          option tcplog
      
      backend ingress-ncoprimary
          balance source
          mode tcp
          server master0 <MASTER0_IP>:<NCOPRIMARY_NODEPORT> check
          server master1 <MASTER1_IP>:<NCOPRIMARY_NODEPORT> check
          server master2 <MASTER2_IP>:<NCOPRIMARY_NODEPORT> check
          server worker0 <WORKER0_IP>:<NCOPRIMARY_NODEPORT> check
          server worker1 <WORKER1_IP>:<NCOPRIMARY_NODEPORT> check
          server worker2 <WORKER2_IP>:<NCOPRIMARY_NODEPORT> check
      
      frontend ingress-ncobackup
          bind *:<NCOBACKUP_NODEPORT>
          default_backend ingress-ncobackup
          mode tcp
          option tcplog
      
      backend ingress-ncobackup
          balance source
          mode tcp
          server master0 <MASTER0_IP>:<NCOBACKUP_NODEPORT> check
          server master1 <MASTER1_IP>:<NCOBACKUP_NODEPORT> check
          server master2 <MASTER2_IP>:<NCOBACKUP_NODEPORT> check
          server worker0 <WORKER0_IP>:<NCOBACKUP_NODEPORT> check
          server worker1 <WORKER1_IP>:<NCOBACKUP_NODEPORT> check
          server worker2 <WORKER2_IP>:<NCOBACKUP_NODEPORT> check
      

      Where

      • <MASTER0_IP>, <MASTER1_IP>, and <MASTER2_IP> are the master node IP addresses from the node internal IP addresses command in step 4.
      • <WORKER0_IP>, <WORKER1_IP>, and <WORKER2_IP> are the worker node IP addresses from the node internal IP addresses command in step 4.
      • <NCOPRIMARY_NODEPORT> is the service node port for the primary ObjectServer.
      • <NCOBACKUP_NODEPORT> is the service node port for the backup ObjectServer.
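
      After saving the file, you can validate the configuration before restarting; haproxy exits with a non-zero status and reports the offending line if the file contains syntax errors:

```shell
# Validate the edited configuration without starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg
```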
    3. Run the command:

      systemctl start haproxy.service
      
  8. Run the following command to retrieve the challenge credentials for the probe:

    oc get secret aiops-ir-core-omni-secret -o=jsonpath='{.data.OMNIBUS_PROBE_PASSWORD}' | base64 -d