In a Netcool® Operations Insight® 1.6.3.3 environment, CouchDB is not exposed outside the cluster by default. To replicate between CouchDB installations on different clusters, a new Kubernetes Service object that allocates a ClusterIP must be created, and an OpenShift Route must expose that service outside the cluster.
About this task
During an installation of Netcool Operations Insight, you can define the namespace for the installation. The following procedure requires this namespace, which is referred to as $namespace in this procedure. You can retrieve a list of all available namespaces (projects) by using the oc projects command.
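For example, a minimal sketch of selecting the namespace on the command line. The namespace name noi is a placeholder; substitute the namespace of your installation:
oc projects
# Placeholder: store the installation namespace in a shell variable for use in later commands
namespace=noi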
Procedure
- Use the following content to create two files that are named couchdb-service.yaml and couchdb-route.yaml on a workstation or management server. Adjust the $namespace variable to match the installation location, as shown in the sketch after the file contents.
Note: The white space (indentation) in the files must be exactly as shown to produce valid YAML documents.
- couchdb-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: couchdb-georedundancy
  namespace: $namespace
spec:
  selector:
    app.kubernetes.io/name: couchdb
    origin: helm-cem
  ports:
  - name: couchdb
    port: 5984
- couchdb-route.yaml:
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: couchdb-georedundancy
  namespace: $namespace
spec:
  path: /
  to:
    kind: Service
    name: couchdb-georedundancy
    weight: 100
  port:
    targetPort: couchdb
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
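A minimal sketch of substituting the $namespace placeholder in both files, assuming the namespace shell variable set earlier (adjust to your environment before applying the files):
sed -i "s/\$namespace/${namespace}/" couchdb-service.yaml couchdb-route.yaml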
- To create the service and route, log in to site 'A' with the oc command line and run the
following commands:
oc create -f couchdb-service.yaml
oc create -f couchdb-route.yaml
- Verify the success of the previous commands by displaying the newly created route:
oc get routes couchdb-georedundancy
Output example:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
couchdb-georedundancy $hostname / couchdb-georedundancy couchdb edge/None None
The $hostname value depends on the individual setup. The value of that variable is used later.
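A minimal sketch of capturing the route host in a shell variable for later use; the variable names hostname and namespace are placeholders from the earlier sketches:
hostname=$(oc get route couchdb-georedundancy -o jsonpath='{.spec.host}' -n ${namespace})
echo "https://${hostname}/"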
- Repeat the service and route creation steps for site 'B'.
- Change the default credentials. Update the default password and re-create the release_name-couchdb-secret secret, where release_name is the name that you will use for your Netcool Operations Insight deployment in name (OLM UI install), or in metadata.name in noi.ibm.com_noihybrids_cr.yaml (CLI install). For more information about the release_name-couchdb-secret secret, see Changing passwords and re-creating secrets. When you rotate the CouchDB password, the CouchDB replication must be re-created. Run the couchdbReplication.sh script to re-create the CouchDB replication. For more information about the script, see Configuring replication. Secret re-creation example:
oc create secret generic primary-couchdb-secret \
--from-literal=username=couchdbuser \
--from-literal=secret=couchdb \
--from-literal=password=couchdbpassword \
--namespace primary
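Note that oc create fails if the secret already exists, so delete the old secret before re-creating it. A minimal sketch, assuming the secret name and namespace from the example above:
# Remove the existing secret so that it can be re-created with the new password
oc delete secret primary-couchdb-secret --namespace primary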
- Restart the following CouchDB pods (a restart sketch follows this list):
- release_name-couchdb-0
- release_name-ibm-cem-brokers-suffix
- release_name-ibm-cem-cem-users-suffix
- release_name-ibm-cem-cemmigrate-job-suffix
- release_name-ibm-cem-channelservices-suffix
- release_name-ibm-cem-eventpreprocessor-suffix
- release_name-ibm-cem-incidentprocessor-suffix
- release_name-ibm-cem-integration-controller-suffix
- release_name-ibm-cem-normalizer-suffix
- release_name-ibm-cem-notificationprocessor-suffix
- release_name-ibm-cem-rba-as-suffix
- release_name-ibm-cem-rba-rbs-suffix
- release_name-ibm-cem-scheduling-ui-suffix
- release_name-ibm-hdm-analytics-dev-collater-aggregationservice-suffix
- release_name-ibm-hdm-analytics-dev-normalizer-aggregationservice-suffix
- release_name-ibm-hdm-analytics-dev-trainer-suffix
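The pods can be restarted by deleting them so that their controllers re-create them. A minimal sketch, assuming a release name of evtmanager and the namespace variable from earlier; both are placeholders:
# Delete the CouchDB pod; its StatefulSet re-creates it
oc delete pod evtmanager-couchdb-0 -n ${namespace}
# Delete each of the other pods in the list in the same way; their Deployments re-create them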
- Use the following commands to display the CouchDB username and password.
- Fetch the CouchDB username:
SECRET_NAME=$(oc get secret | grep couchdb-secret | awk '{ print $1 }'); oc get secret ${SECRET_NAME} -o json | grep \"username\" | cut -d : -f2 | cut -d '"' -f2 | base64 -d
- Fetch the CouchDB password:
SECRET_NAME=$(oc get secret | grep couchdb-secret | awk '{ print $1 }'); oc get secret ${SECRET_NAME} -o json | grep \"password\" | cut -d : -f2 | cut -d '"' -f2 | base64 -d
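Alternatively, a sketch that reads the same values with jsonpath instead of grep and cut, assuming the SECRET_NAME variable set as above:
oc get secret ${SECRET_NAME} -o jsonpath='{.data.username}' | base64 -d
oc get secret ${SECRET_NAME} -o jsonpath='{.data.password}' | base64 -d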
Results
On completion of these steps, the CouchDB installations on both clusters should be reachable from the workstation or management server so that the replications can be set up. Both clusters must also be able to reach each other through these URLs.
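A minimal reachability check from the workstation, assuming the route host and the credentials retrieved in the previous steps (all values are placeholders):
curl -k -u couchdbuser:couchdbpassword https://${hostname}/
# A successful call returns the CouchDB welcome document, for example {"couchdb":"Welcome",...}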