Exposing Apache CouchDB

In a Netcool® Operations Insight® environment, CouchDB is not exposed outside the cluster by default. To replicate between CouchDB installations on different clusters, you must create a new Kubernetes Service object that allocates a ClusterIP, and a Route that exposes it outside the cluster.

About this task

During an installation of Netcool Operations Insight, you can define the namespace for the installation. The following procedure requires this namespace, which is referred to as $namespace. You can retrieve a list of all available namespaces (projects) by running the oc projects command.


  1. Use the following content to create two files that are named couchdb-service.yaml and couchdb-route.yaml on a workstation or management server. Adjust the $namespace variable to match the installation location.
    Note: Indentation and white space in the files must be correct to produce valid YAML documents.
    1. couchdb-service.yaml:
      apiVersion: v1
      kind: Service
      metadata:
        name: couchdb-georedundancy
        namespace: $namespace
      spec:
        selector:
          app.kubernetes.io/name: couchdb
          origin: helm-cem
        ports:
          - name: couchdb
            port: 5984
    2. couchdb-route.yaml:
      kind: Route
      apiVersion: route.openshift.io/v1
      metadata:
        name: couchdb-georedundancy
        namespace: $namespace
      spec:
        path: /
        to:
          kind: Service
          name: couchdb-georedundancy
          weight: 100
        port:
          targetPort: couchdb
        tls:
          termination: edge
          insecureEdgeTerminationPolicy: None
        wildcardPolicy: None
  2. To create the service and route, log in to site 'A' with the oc command line and run the following commands:
    oc create -f couchdb-service.yaml
    oc create -f couchdb-route.yaml
  3. Verify the success of the previous commands by displaying the newly created route:
    oc get routes couchdb-georedundancy
    Output example:
    NAME                    HOST/PORT   PATH   SERVICES                PORT      TERMINATION   WILDCARD
    couchdb-georedundancy   $hostname   /      couchdb-georedundancy   couchdb   edge/None     None
    The value of $hostname depends on your individual setup. Note it, because it is used later.
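The host name can also be read directly from the route. A minimal sketch, assuming oc is logged in to the cluster and the helper function name is illustrative:

```shell
# Hypothetical helper: read the host name that OpenShift assigned to the
# couchdb-georedundancy route. Argument: the installation namespace.
get_couchdb_route_host() {
  # .spec.host holds the externally reachable host name of the route
  oc get route couchdb-georedundancy -n "$1" -o jsonpath='{.spec.host}'
}
```

For example, HOSTNAME=$(get_couchdb_route_host $namespace) stores the value for later use when the replication is set up.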
  4. Repeat the service and route creation steps for site 'B'.
  5. Change the default credentials: update the default password and re-create the release_name-couchdb-secret secret, where release_name is the name that you will use for your Netcool Operations Insight deployment (name in an OLM UI install, or metadata.name in noi.ibm.com_noihybrids_cr.yaml in a CLI install). For more information about the release_name-couchdb-secret secret, see Changing passwords and re-creating secrets. When you rotate the CouchDB password, the CouchDB replication must be re-created; run the couchdbReplication.sh script to re-create it. For more information about the script, see Replication configuration.
    Secret recreation example:
    oc create secret generic primary-couchdb-secret \
          --from-literal=username=couchdbuser \
          --from-literal=secret=couchdb \
          --from-literal=password=couchdbpassword \
          --namespace primary
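Because oc create fails if the secret already exists, re-creation can be sketched as a delete followed by a create. The function name and credential values below are illustrative, not the actual values for your deployment:

```shell
# Hypothetical sketch: re-create the CouchDB secret with a new password.
# Arguments: namespace, secret name, user name, new password.
rotate_couchdb_secret() {
  ns="$1"; secret="$2"; user="$3"; pass="$4"
  # Remove the old secret first; --ignore-not-found makes this idempotent
  oc delete secret "$secret" --namespace "$ns" --ignore-not-found
  oc create secret generic "$secret" \
    --from-literal=username="$user" \
    --from-literal=secret=couchdb \
    --from-literal=password="$pass" \
    --namespace "$ns"
}
```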
  6. Restart the following CouchDB pods:
    • release_name-couchdb-0
    • release_name-ibm-cem-brokers-suffix
    • release_name-ibm-cem-cem-users-suffix
    • release_name-ibm-cem-cemmigrate-job-suffix
    • release_name-ibm-cem-channelservices-suffix
    • release_name-ibm-cem-eventpreprocessor-suffix
    • release_name-ibm-cem-incidentprocessor-suffix
    • release_name-ibm-cem-integration-controller-suffix
    • release_name-ibm-cem-normalizer-suffix
    • release_name-ibm-cem-notificationprocessor-suffix
    • release_name-ibm-cem-rba-as-suffix
    • release_name-ibm-cem-rba-rbs-suffix
    • release_name-ibm-cem-scheduling-ui-suffix
    • release_name-ibm-hdm-analytics-dev-collater-aggregationservice-suffix
    • release_name-ibm-hdm-analytics-dev-normalizer-aggregationservice-suffix
    • release_name-ibm-hdm-analytics-dev-trainer-suffix
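Pods that belong to a deployment are restarted by deleting them, so that their controllers re-create them. A minimal sketch, assuming the pods listed above all share the release_name prefix; the function name and grep pattern are illustrative:

```shell
# Hypothetical sketch: restart the CouchDB pod and the dependent
# ibm-cem / ibm-hdm-analytics-dev pods by deleting them.
# Arguments: namespace, release name.
restart_couchdb_dependents() {
  ns="$1"; release="$2"
  oc delete pod "${release}-couchdb-0" -n "$ns"
  # Match the remaining pods by name prefix and delete them one by one
  for p in $(oc get pods -n "$ns" -o name \
      | grep -E "${release}-(ibm-cem|ibm-hdm-analytics-dev)-"); do
    oc delete "$p" -n "$ns"
  done
}
```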
  7. Use the following commands to display the CouchDB user and password.
    1. Fetch the CouchDB user:
      SECRET_NAME=$(oc get secret | grep couchdb-secret | awk '{ print $1 }'); oc get secret ${SECRET_NAME} -o json | grep \"username\" | cut -d : -f2 | cut -d '"' -f2 | base64 -d
    2. Fetch the CouchDB password:
      SECRET_NAME=$(oc get secret | grep couchdb-secret | awk '{ print $1 }'); oc get secret ${SECRET_NAME} -o json | grep \"password\" | cut -d : -f2 | cut -d '"' -f2 | base64 -d


On completion of these steps, the CouchDB installations on both clusters should be reachable from the workstation or management server so that the replications can be set up. Each cluster must also be able to reach the other through the route URLs.
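Reachability can be verified with a plain HTTP request against each route, because CouchDB answers GET / with a JSON welcome document. A minimal sketch; the host name, user, and password are the values collected in the steps above:

```shell
# Hypothetical check: confirm that a CouchDB route answers with its
# welcome document. Arguments: route host name, user, password.
check_couchdb_route() {
  # -k tolerates the cluster's self-signed edge certificate; -s is silent
  curl -ks -u "$2:$3" "https://$1/" | grep -q '"couchdb"'
}
```

Running the check against the $hostname of both clusters before configuring replication confirms that the routes work end to end.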