Configuring the Cloud system
1. Run the following commands to expose the ObjectServer services as node ports.

   Important: Replace <name> with the name of the IBM Cloud Pak® for Watson AIOps installation.

cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: ncoprimary
    app.kubernetes.io/instance: <name>
    app.kubernetes.io/managed-by: ir-core-operator
    app.kubernetes.io/name: IssueResolutionCore
  name: aiops-ir-core-ncoprimary-np
spec:
  ports:
  - name: tds
    port: 4100
    protocol: TCP
    targetPort: 4100
  selector:
    app.kubernetes.io/component: ncoprimary
    app.kubernetes.io/instance: <name>
    app.kubernetes.io/managed-by: ir-core-operator
    app.kubernetes.io/name: IssueResolutionCore
  sessionAffinity: None
  type: NodePort
EOF

cat << EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: ncobackup
    app.kubernetes.io/instance: <name>
    app.kubernetes.io/managed-by: ir-core-operator
    app.kubernetes.io/name: IssueResolutionCore
  name: aiops-ir-core-ncobackup-np
spec:
  ports:
  - name: tds
    port: 4100
    protocol: TCP
    targetPort: 4100
  selector:
    app.kubernetes.io/component: ncobackup
    app.kubernetes.io/instance: <name>
    app.kubernetes.io/managed-by: ir-core-operator
    app.kubernetes.io/name: IssueResolutionCore
  sessionAffinity: None
  type: NodePort
EOF
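   Optionally, you can confirm that both services were created; the names below match the metadata.name values in the manifests above. The TYPE column should show NodePort, and the PORT(S) column shows the 4100:<nodePort>/TCP mapping.

oc get service aiops-ir-core-ncoprimary-np aiops-ir-core-ncobackup-np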
2. Get the service node port for the primary:

NCOPRIMARY_NODEPORT=$(oc get svc <name>-ir-core-ncoprimary-np -o jsonpath='{.spec.ports[?(@.port==4100)].nodePort}')
3. Get the service node port for the backup:

NCOBACKUP_NODEPORT=$(oc get svc <name>-ir-core-ncobackup-np -o jsonpath='{.spec.ports[?(@.port==4100)].nodePort}')
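   As an optional sanity check, you can print both variables; each should contain a port from the cluster's NodePort range, which is 30000-32767 by default.

echo "ncoprimary node port: ${NCOPRIMARY_NODEPORT}"
echo "ncobackup node port:  ${NCOBACKUP_NODEPORT}"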
4. Get the node internal IP addresses:

oc get node -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
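   The later steps refer to these addresses through shell variables such as ${MASTER0_IP} and ${WORKER0_IP}. One possible way to populate them, assuming a cluster with three control plane nodes and three worker nodes that carry the standard node-role labels, is sketched below; adjust it to match your own topology.

# Collect InternalIP addresses by node role (assumes the standard OpenShift node-role labels)
MASTER_IPS=($(oc get nodes -l node-role.kubernetes.io/master -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'))
WORKER_IPS=($(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'))

MASTER0_IP=${MASTER_IPS[0]}; MASTER1_IP=${MASTER_IPS[1]}; MASTER2_IP=${MASTER_IPS[2]}
WORKER0_IP=${WORKER_IPS[0]}; WORKER1_IP=${WORKER_IPS[1]}; WORKER2_IP=${WORKER_IPS[2]}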
5. Depending on the configuration of your cluster, you might need to add network policies to allow network traffic from outside the cluster. To test this, use the following commands. If you are using a proxy such as HAProxy (see step 7), run the test on the host that runs the proxy. Otherwise, run the test on the host that runs your on-premises system. Each of the following commands should output CONNECTED, followed by a Certificate chain whose first subject contains ncoprimary or ncobackup.

openssl s_client -connect ${WORKER0_IP}:${NCOPRIMARY_NODEPORT} < /dev/null
openssl s_client -connect ${WORKER0_IP}:${NCOBACKUP_NODEPORT} < /dev/null
6. If the test in step 5 fails, you might need to add network policies. The ingress "from" rules are specific to your cluster and the clients or proxies that you want to connect to the ObjectServers. The rest of the policies should look as follows:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ncoprimary-external-ingress
spec:
  ingress:
  - from:
    # Add ingress rules required for your cluster and on-premises system here
    ports:
    - port: 4100
      protocol: TCP
  podSelector:
    matchLabels:
      app.kubernetes.io/component: ncoprimary
      app.kubernetes.io/instance: aiops
  policyTypes:
  - Ingress
...
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ncobackup-external-ingress
spec:
  ingress:
  - from:
    # Add ingress rules required for your cluster and on-premises system here
    ports:
    - port: 4100
      protocol: TCP
  podSelector:
    matchLabels:
      app.kubernetes.io/component: ncobackup
      app.kubernetes.io/instance: aiops
  policyTypes:
  - Ingress
...
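   The exact "from" rules depend on where your probe, gateway, or proxy connects from. As an illustration only, an ingress rule that admits a single subnet might use an ipBlock; the 192.0.2.0/24 range below is a documentation placeholder, not a value from your environment.

  ingress:
  - from:
    - ipBlock:
        cidr: 192.0.2.0/24   # placeholder: subnet of the on-premises system or proxy
    ports:
    - port: 4100
      protocol: TCP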
7. Next, you must update any proxies and firewalls to ensure that the exposed node ports are reachable. For example, if you have an HAProxy load balancer in front of your Red Hat OpenShift Container Platform cluster, you need to update only the HAProxy configuration that runs on the main control node; that configuration routes the ports to the master and worker nodes in the cluster. When you are logged on to that host, complete the following steps:
   a. Run the following command:

systemctl stop haproxy.service
   b. Add the following entries to /etc/haproxy/haproxy.cfg. Replace the ${...} placeholders with the node port values from steps 2 and 3 and the node IP addresses from step 4.

frontend ingress-ncoprimary
   bind *:${NCOPRIMARY_NODEPORT}
   default_backend ingress-ncoprimary
   mode tcp
   option tcplog

backend ingress-ncoprimary
   balance source
   mode tcp
   server master0 ${MASTER0_IP}:${NCOPRIMARY_NODEPORT} check
   server master1 ${MASTER1_IP}:${NCOPRIMARY_NODEPORT} check
   server master2 ${MASTER2_IP}:${NCOPRIMARY_NODEPORT} check
   server worker0 ${WORKER0_IP}:${NCOPRIMARY_NODEPORT} check
   server worker1 ${WORKER1_IP}:${NCOPRIMARY_NODEPORT} check
   server worker2 ${WORKER2_IP}:${NCOPRIMARY_NODEPORT} check

frontend ingress-ncobackup
   bind *:${NCOBACKUP_NODEPORT}
   default_backend ingress-ncobackup
   mode tcp
   option tcplog

backend ingress-ncobackup
   balance source
   mode tcp
   server master0 ${MASTER0_IP}:${NCOBACKUP_NODEPORT} check
   server master1 ${MASTER1_IP}:${NCOBACKUP_NODEPORT} check
   server master2 ${MASTER2_IP}:${NCOBACKUP_NODEPORT} check
   server worker0 ${WORKER0_IP}:${NCOBACKUP_NODEPORT} check
   server worker1 ${WORKER1_IP}:${NCOBACKUP_NODEPORT} check
   server worker2 ${WORKER2_IP}:${NCOBACKUP_NODEPORT} check
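      Before restarting the service in the next substep, you can optionally check the edited file for syntax errors; haproxy exits with a non-zero status if the configuration is invalid.

haproxy -c -f /etc/haproxy/haproxy.cfg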
   c. Run the following command:

systemctl start haproxy.service
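   After HAProxy is running again, you can repeat the TLS check from step 5 against the proxy itself to confirm that the new frontends forward traffic. ${PROXY_IP} below is a placeholder for the address of the HAProxy host and is not defined in the earlier steps.

openssl s_client -connect ${PROXY_IP}:${NCOPRIMARY_NODEPORT} < /dev/null
openssl s_client -connect ${PROXY_IP}:${NCOBACKUP_NODEPORT} < /dev/null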
8. Run the following command to retrieve the challenge credentials for the probe/gateway:

oc get secret <name>-ir-core-omni-secret -o=jsonpath='{.data.OMNIBUS_PROBE_PASSWORD}' | base64 -d
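   If you prefer to capture the value for use in later probe or gateway configuration rather than printing it to the terminal, you can assign it to a shell variable; the variable name below is only a local convention, not something the product requires.

OMNIBUS_PROBE_PASSWORD=$(oc get secret <name>-ir-core-omni-secret -o=jsonpath='{.data.OMNIBUS_PROBE_PASSWORD}' | base64 -d)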