Preloading Mongo data

Before you begin

If this procedure failed previously, clean up any resources that were created from this procedure (for example, jobs and statefulsets).

This procedure copies foundational services MongoDB data from one namespace to a new namespace that does not yet have foundational services installed. The intent is to preload the data into the new namespace so that a subsequent foundational services installation there starts with the same data.

  1. Ensure that the foundational services mongodb-operator exists in the source namespace; otherwise, there is no data to back up and preload.

  2. Create the following ConfigMap in the target namespace in preparation to initialize the temporary MongoDB.

   <details><summary>icp-mongodb-init.yaml</summary>

    ```yaml
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: icp-mongodb-init
      labels:
        app.kubernetes.io/component: database
        app.kubernetes.io/instance: icp-mongodb
        app.kubernetes.io/managed-by: operator
        app.kubernetes.io/name: icp-mongodb
        app.kubernetes.io/part-of: common-services-cloud-pak
        app.kubernetes.io/version: 4.0.12-build.3
        release: mongodb
    data:
      on-start.sh: >-

    #!/bin/bash
    
    ## workaround
    https://serverfault.com/questions/713325/openshift-unable-to-write-random-state
    
    export RANDFILE=/tmp/.rnd
    
    port=27017
    
    replica_set=\$REPLICA_SET
    
    script_name=\${0##*/}
    
    credentials_file=/work-dir/credentials.txt
    
    config_dir=/data/configdb
    
   function log() {
       local msg="\$1"
       local timestamp=\$(date --iso-8601=ns)
       1>&2 echo "[\$timestamp] [\$script_name] \$msg"
       echo "[\$timestamp] [\$script_name] \$msg" >> /work-dir/log.txt
   }


   if [[ "\$AUTH" == "true" ]]; then

       if [ !  -f "\$credentials_file" ]; then
           log "Creds File Not found!"
           log "Original User: \$ADMIN_USER"
           echo \$ADMIN_USER > \$credentials_file
           echo \$ADMIN_PASSWORD >> \$credentials_file
       fi
       admin_user=\$(head -n 1 \$credentials_file)
       admin_password=\$(tail -n 1 \$credentials_file)
       admin_auth=(-u "\$admin_user" -p "\$admin_password")
       log "Original User: \$admin_user"
       if [[ "\$METRICS" == "true" ]]; then
           metrics_user="\$METRICS_USER"
           metrics_password="\$METRICS_PASSWORD"
       fi
   fi


   function shutdown_mongo() {

       log "Running fsync..."
       mongo admin "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.adminCommand( { fsync: 1, lock: true } )"

       log "Running fsync unlock..."
       mongo admin "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.adminCommand( { fsyncUnlock: 1 } )"

       log "Shutting down MongoDB..."
       mongo admin "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.adminCommand({ shutdown: 1, force: true, timeoutSecs: 60 })"
   }


   # Check if the password has changed and been updated in mongo; if so, update creds

   function update_creds_if_changed() {
     if [ "\$admin_password" != "\$ADMIN_PASSWORD" ]; then
         passwd_changed=true
         log "password has changed = \$passwd_changed"
         log "checking if passwd  updated in mongo"
         mongo admin  "\${ssl_args[@]}" --eval "db.auth({user: '\$admin_user', pwd: '\$ADMIN_PASSWORD'})" | grep "Authentication failed"
         if [[ \$? -eq 1 ]]; then
           log "New Password worked, update creds"
           echo \$ADMIN_USER > \$credentials_file
           echo \$ADMIN_PASSWORD >> \$credentials_file
           admin_password=\$ADMIN_PASSWORD
           admin_auth=(-u "\$admin_user" -p "\$admin_password")
           passwd_updated=true
         fi
     fi
   }


   function update_mongo_password_if_changed() {
     log "checking if mongo passwd needs to be  updated"
     if [[ "\$passwd_changed" == "true" ]] && [[ "\$passwd_updated" != "true" ]]; then
       log "Updating to new password "
       if [[ \$# -eq 1 ]]; then
        mhost="--host \$1"
       else
           mhost=""
       fi

       log "host for password upd (\$mhost)"
       mongo admin \$mhost "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.changeUserPassword('\$admin_user', '\$ADMIN_PASSWORD')" >> /work-dir/log.txt 2>&1
       sleep 10
      log "mongo passwd change attempted; check and update creds file if successful"
       update_creds_if_changed
     fi
   }




   my_hostname=\$(hostname)

   log "Bootstrapping MongoDB replica set member: \$my_hostname"


   log "Reading standard input..."

   while read -ra line; do
       log "line is  \${line}"
       if [[ "\${line}" == *"\${my_hostname}"* ]]; then
           service_name="\$line"
       fi
       peers=("\${peers[@]}" "\$line")
   done


   # Move into /work-dir

   pushd /work-dir

   pwd >> /work-dir/log.txt

   ls -l  >> /work-dir/log.txt


   # Generate the ca cert

   ca_crt=\$config_dir/tls.crt

   if [ -f \$ca_crt  ]; then
       log "Generating certificate"
       ca_key=\$config_dir/tls.key
       pem=/work-dir/mongo.pem
       ssl_args=(--ssl --sslCAFile \$ca_crt --sslPEMKeyFile \$pem)

       echo "ca stuff created" >> /work-dir/log.txt

   cat >openssl.cnf <<DUMMYEOL

   [req]

   req_extensions = v3_req

   distinguished_name = req_distinguished_name

   [req_distinguished_name]

   [ v3_req ]

   basicConstraints = CA:FALSE

   keyUsage = nonRepudiation, digitalSignature, keyEncipherment

   subjectAltName = @alt_names

   [alt_names]

   DNS.1 = \$(echo -n "\$my_hostname" | sed s/-[0-9]*\$//)

   DNS.2 = \$my_hostname

   DNS.3 = \$service_name

   DNS.4 = localhost

   DNS.5 = 127.0.0.1

   DNS.6 = mongodb

   DUMMYEOL

       # Generate the certs
       echo "cnf stuff" >> /work-dir/log.txt
       echo "genrsa " >> /work-dir/log.txt
       openssl genrsa -out mongo.key 2048 >> /work-dir/log.txt 2>&1

       echo "req " >> /work-dir/log.txt
       openssl req -new -key mongo.key -out mongo.csr -subj "/CN=\$my_hostname" -config openssl.cnf >> /work-dir/log.txt 2>&1

      echo "x509 " >> /work-dir/log.txt
       openssl x509 -req -in mongo.csr \
           -CA \$ca_crt -CAkey \$ca_key -CAcreateserial \
           -out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf >> /work-dir/log.txt 2>&1

       echo "mongo stuff" >> /work-dir/log.txt

       rm mongo.csr

       cat mongo.crt mongo.key > \$pem
       rm mongo.key mongo.crt
   fi



   log "Peers: \${peers[@]}"


   log "Starting a MongoDB instance..."

   mongod --config \$config_dir/mongod.conf >> /work-dir/log.txt 2>&1 &

   pid=\$!

   trap shutdown_mongo EXIT



   log "Waiting for MongoDB to be ready..."

   until [[ \$(mongo "\${ssl_args[@]}" --quiet --eval "db.adminCommand('ping').ok") == "1" ]]; do
       log "Retrying..."
       sleep 2
   done


   log "Initialized."


   if [[ "\$AUTH" == "true" ]]; then
       update_creds_if_changed
   fi


   iter_counter=0

   while [  \$iter_counter -lt 5 ]; do
     log "primary check, iter_counter is \$iter_counter"
     # try to find a master and add yourself to its replica set.
     for peer in "\${peers[@]}"; do
         log "Checking if \${peer} is primary"
         mongo admin --host "\${peer}" --ipv6 "\${admin_auth[@]}" "\${ssl_args[@]}" --quiet --eval "rs.status()"  >> log.txt

         # Check rs.status() first since it could be in primary catch up mode which db.isMaster() doesn't show
         if [[ \$(mongo admin --host "\${peer}" --ipv6 "\${admin_auth[@]}" "\${ssl_args[@]}" --quiet --eval "rs.status().myState") == "1" ]]; then
             log "Found master \${peer}, wait while its in primary catch up mode "
             until [[ \$(mongo admin --host "\${peer}" --ipv6 "\${admin_auth[@]}" "\${ssl_args[@]}" --quiet --eval "db.isMaster().ismaster") == "true" ]]; do
                 sleep 1
             done
             primary="\${peer}"
             log "Found primary: \${primary}"
             break
         fi
     done

     if [[ -z "\${primary}" ]]  && [[ \${#peers[@]} -gt 1 ]] && (mongo "\${ssl_args[@]}" --eval "rs.status()" | grep "no replset config has been received"); then
       log "waiting before creating a new replicaset, to avoid conflicts with other replicas"
       sleep 30
     else
       break
     fi

     let iter_counter=iter_counter+1
   done



   if [[ "\${primary}" = "\${service_name}" ]]; then
       log "This replica is already PRIMARY"

   elif [[ -n "\${primary}" ]]; then

       if [[ \$(mongo admin --host "\${primary}" --ipv6 "\${admin_auth[@]}" "\${ssl_args[@]}" --quiet --eval "rs.conf().members.findIndex(m => m.host == '\${service_name}:\${port}')") == "-1" ]]; then
         log "Adding myself (\${service_name}) to replica set..."
         if (mongo admin --host "\${primary}" --ipv6 "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "rs.add('\${service_name}')" | grep 'Quorum check failed'); then
             log 'Quorum check failed, unable to join replicaset. Exiting.'
             exit 1
         fi
       fi
       log "Done,  Added myself to replica set."

       sleep 3
       log 'Waiting for replica to reach SECONDARY state...'
       until printf '.'  && [[ \$(mongo admin "\${admin_auth[@]}" "\${ssl_args[@]}" --quiet --eval "rs.status().myState") == '2' ]]; do
          sleep 1
       done
       log '✓ Replica reached SECONDARY state.'

   elif (mongo "\${ssl_args[@]}" --eval "rs.status()" | grep "no replset config has been received"); then

       log "Initiating a new replica set with myself (\$service_name)..."

       mongo "\${ssl_args[@]}" --eval "rs.initiate({'_id': '\$replica_set', 'members': [{'_id': 0, 'host': '\$service_name'}]})"
       mongo "\${ssl_args[@]}" --eval "rs.status()"

       sleep 3

       log 'Waiting for replica to reach PRIMARY state...'

       log ' Waiting for rs.status state to become 1'
       until printf '.'  && [[ \$(mongo "\${ssl_args[@]}" --quiet --eval "rs.status().myState") == '1' ]]; do
           sleep 1
       done

       log ' Waiting for master to complete primary catchup mode'
       until [[ \$(mongo  "\${ssl_args[@]}" --quiet --eval "db.isMaster().ismaster") == "true" ]]; do
           sleep 1
       done

       primary="\${service_name}"
       log '✓ Replica reached PRIMARY state.'


       if [[ "\$AUTH" == "true" ]]; then
           # sleep a little while just to be sure the initiation of the replica set has fully
           # finished and we can create the user
           sleep 3

           log "Creating admin user..."
           mongo admin "\${ssl_args[@]}" --eval "db.createUser({user: '\$admin_user', pwd: '\$admin_password', roles: [{role: 'root', db: 'admin'}]})"
       fi

       log "Done initiating replicaset."

   fi


   log "Primary: \${primary}"


   if [[  -n "\${primary}"   && "\$AUTH" == "true" ]]; then
       # you r master and passwd has changed.. then update passwd
       update_mongo_password_if_changed \$primary

       if [[ "\$METRICS" == "true" ]]; then
           log "Checking if metrics user is already created ..."
           metric_user_count=\$(mongo admin --host "\${primary}" "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.system.users.find({user: '\${metrics_user}'}).count()" --quiet)
           log "User count is \${metric_user_count} "
           if [[ "\${metric_user_count}" == "0" ]]; then
               log "Creating clusterMonitor user... user - \${metrics_user}  "
               mongo admin --host "\${primary}" "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.createUser({user: '\${metrics_user}', pwd: '\${metrics_password}', roles: [{role: 'clusterMonitor', db: 'admin'}, {role: 'read', db: 'local'}]})"
               log "User creation return code is \$? "
               metric_user_count=\$(mongo admin --host "\${primary}" "\${admin_auth[@]}" "\${ssl_args[@]}" --eval "db.system.users.find({user: '\${metrics_user}'}).count()" --quiet)
               log "User count now is \${metric_user_count} "
           fi
       fi
   fi


   log "MongoDB bootstrap complete"

  exit 0
    ```

   </details>

3. Create the following ConfigMap in the target namespace in preparation to install the temporary MongoDB.

   <details><summary>icp-mongodb-install.yaml</summary>

   ```yaml
   kind: ConfigMap
   apiVersion: v1
   metadata:
     name: icp-mongodb-install
     labels:
       app.kubernetes.io/component: database
       app.kubernetes.io/instance: icp-mongodb
       app.kubernetes.io/managed-by: operator
       app.kubernetes.io/name: icp-mongodb
       app.kubernetes.io/part-of: common-services-cloud-pak
       app.kubernetes.io/version: 4.0.12-build.3
       release: mongodb
   data:
     install.sh: >-
       #!/bin/bash


       # Copyright 2016 The Kubernetes Authors. All rights reserved.

       #

       # Licensed under the Apache License, Version 2.0 (the "License");

       # you may not use this file except in compliance with the License.

       # You may obtain a copy of the License at

       #

        #     http://www.apache.org/licenses/LICENSE-2.0

       #

       # Unless required by applicable law or agreed to in writing, software

       # distributed under the License is distributed on an "AS IS" BASIS,

       # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

       # See the License for the specific language governing permissions and

       # limitations under the License.


       # This volume is assumed to exist and is shared with the peer-finder

       # init container. It contains on-start/change configuration scripts.

       WORKDIR_VOLUME="/work-dir"

       CONFIGDIR_VOLUME="/data/configdb"


       for i in "\$@"

       do

       case \$i in
           -c=*|--config-dir=*)
           CONFIGDIR_VOLUME="\${i#*=}"
           shift
           ;;
           -w=*|--work-dir=*)
           WORKDIR_VOLUME="\${i#*=}"
           shift
           ;;
           *)
           # unknown option
           ;;
       esac

       done


       echo installing config scripts into "\${WORKDIR_VOLUME}"

       mkdir -p "\${WORKDIR_VOLUME}"

       cp /peer-finder "\${WORKDIR_VOLUME}"/

       echo "I am running as " \$(whoami)


       cp /configdb-readonly/mongod.conf "\${CONFIGDIR_VOLUME}"/mongod.conf

       cp /keydir-readonly/key.txt "\${CONFIGDIR_VOLUME}"/

       cp /ca-readonly/tls.key "\${CONFIGDIR_VOLUME}"/tls.key

       cp /ca-readonly/tls.crt "\${CONFIGDIR_VOLUME}"/tls.crt


       chmod 600 "\${CONFIGDIR_VOLUME}"/key.txt

       # chown -R 999:999 /work-dir

       # chown -R 999:999 /data


        # Root file system is readonly but still need write and execute access to tmp

        # chmod -R 777 /tmp
    ```

   </details>

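
The `--config-dir`/`--work-dir` parsing in install.sh relies on the shell parameter expansion `${i#*=}`, which strips the shortest prefix matching `*=` (the option name plus the equals sign) and leaves the value. A standalone illustration (the option strings here are examples):

```shell
# ${i#*=} strips everything through the first "=", leaving the value.
i="--work-dir=/work-dir"
echo "${i#*=}"   # prints: /work-dir

i="-c=/data/configdb"
echo "${i#*=}"   # prints: /data/configdb
```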
  1. Create a cert-manager Issuer resource in the target namespace.

    mongo-issuer-issuer.yaml

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: god-issuer
      labels:
        app.kubernetes.io/instance: mongodbs.operator.ibm.com
        app.kubernetes.io/managed-by: mongodbs.operator.ibm.com
        app.kubernetes.io/name: mongodbs.operator.ibm.com
    spec:
      selfSigned: {}
    

  1. Create the ibm-cpp-config configmap in the target namespace.

    ibm-cpp-config.yaml

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ibm-cpp-config
    data:
      storageclass.default: rook-ceph-block
      storageclass.list: 'rook-ceph-block,rook-cephfs'
    

  1. Export the mongo admin user and password from the source namespace:

    export pass=$(oc get secret icp-mongodb-admin -n <source namespace> -o=jsonpath='{.data.password}')
    export user=$(oc get secret icp-mongodb-admin -n <source namespace> -o=jsonpath='{.data.user}')
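
The jsonpath output is the value exactly as stored in the secret, that is, base64-encoded, so it can be reused verbatim in the data section of the secret created in the next step. To sanity-check an exported value, decode it (the encoded string below is an example, not a real cluster credential):

```shell
# base64 -d reverses the encoding used in Kubernetes secret data fields.
# "cGFzc3dvcmQ=" is an example value; substitute "$pass" from above.
echo 'cGFzc3dvcmQ=' | base64 -d; echo   # prints: password
```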
    
  2. Create the icp-mongodb-admin secret in the target namespace.

    icp-mongodb-admin.yaml

    kind: Secret
    apiVersion: v1
    metadata:
      name: icp-mongodb-admin
      labels:
        app: icp-mongodb
    data:
      password: $pass
      user: $user
    type: Opaque
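
Note that `oc apply` does not expand shell variables inside a manifest file, so the `$pass` and `$user` placeholders above must be substituted before the file is applied. A minimal sketch with sed (`resolve_admin_secret` is a made-up helper name; base64 values contain only `A-Za-z0-9+/=`, so they are safe in a sed replacement):

```shell
# Substitute the exported $pass and $user values into the manifest.
# Usage: resolve_admin_secret icp-mongodb-admin.yaml > resolved.yaml
resolve_admin_secret() {
  sed -e "s|\$pass|$pass|" -e "s|\$user|$user|" "$1"
}
```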
    

  1. Create the icp-mongodb-client-cert certificate in the target namespace.

    icp-mongodb-client-cert.yaml

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: icp-mongodb-client-cert
    spec:
      commonName: mongodb-service
      dnsNames:
        - mongodb
      duration: 17520h
      isCA: false
      issuerRef:
        kind: Issuer
        name: mongodb-root-ca-issuer
      secretName: icp-mongodb-client-cert
    

  1. Create the icp-mongodb configmap to configure the mongo deployment in the target namespace.

    icp-mongodb-cm.yaml

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: icp-mongodb
      labels:
        app.kubernetes.io/component: database
        app.kubernetes.io/instance: icp-mongodb
        app.kubernetes.io/managed-by: operator
        app.kubernetes.io/name: icp-mongodb
        app.kubernetes.io/part-of: common-services-cloud-pak
        app.kubernetes.io/version: 4.0.12-build.3
        release: mongodb
    data:
      mongod.conf: |-
        storage:
          dbPath: /data/db
          wiredTiger:
            engineConfig:
              cacheSizeGB: 0.26
        net:
          bindIpAll: true
          port: 27017
          ssl:
            mode: preferSSL
            CAFile: /data/configdb/tls.crt
            PEMKeyFile: /work-dir/mongo.pem
        replication:
          replSetName: rs0
        # Uncomment for TLS support or keyfile access control without TLS
        security:
          authorization: enabled
          keyFile: /data/configdb/key.txt
    

  1. Create the keyfile secret for mongo in the target namespace.

    icp-mongodb-keyfile-secret.yaml

    kind: Secret
    apiVersion: v1
    metadata:
     name: icp-mongodb-keyfile
     labels:
       app.kubernetes.io/component: database
       app.kubernetes.io/instance: icp-mongodb
       app.kubernetes.io/managed-by: operator
       app.kubernetes.io/name: icp-mongodb
       release: mongodb
    data:
     key.txt: aWNwdGVzdA==
    type: Opaque
    


  1. Create the metrics secret in the target namespace.

    icp-mongodb-metrics-secret.yaml

    kind: Secret
    apiVersion: v1
    metadata:
     name: icp-mongodb-metrics
     labels:
       app.kubernetes.io/component: database
       app.kubernetes.io/instance: icp-mongodb
       app.kubernetes.io/managed-by: operator
       app.kubernetes.io/name: icp-mongodb
       release: mongodb
    data:
     password: aWNwbWV0cmljcw==
     user: bWV0cmljcw==
    type: Opaque
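
The data values in the keyfile and metrics secrets are base64-encoded defaults; decoding them shows exactly what the temporary instance will use:

```shell
# Decode the default values shipped in the secrets above.
echo 'aWNwdGVzdA==' | base64 -d; echo       # keyfile key.txt  -> icptest
echo 'bWV0cmljcw==' | base64 -d; echo       # metrics user     -> metrics
echo 'aWNwbWV0cmljcw==' | base64 -d; echo   # metrics password -> icpmetrics
```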
    


  1. Create the RBAC for mongo in the target namespace.

    mongo-rbac.yaml

    kind: ServiceAccount
    apiVersion: v1
    metadata:
     name: ibm-mongodb-operand
     labels:
       app.kubernetes.io/instance: mongodbs.operator.ibm.com
       app.kubernetes.io/managed-by: mongodbs.operator.ibm.com
       app.kubernetes.io/name: mongodbs.operator.ibm.com
    secrets:
     - name: ibm-mongodb-operand-dockercfg-x7n5t
    imagePullSecrets:
     - name: ibm-mongodb-operand-dockercfg-x7n5t
    


  1. Create the mongo services in the target namespace.

    mongo-service.yaml

    kind: Service
    apiVersion: v1
    metadata:
     name: mongodb
     labels:
       app.kubernetes.io/component: database
       app.kubernetes.io/instance: icp-mongodb
       app.kubernetes.io/managed-by: operator
       app.kubernetes.io/name: icp-mongodb
       app.kubernetes.io/part-of: common-services-cloud-pak
       app.kubernetes.io/version: 4.0.12-build.3
       release: mongodb
    spec:
     ipFamilies:
       - IPv4
     ports:
       - protocol: TCP
         port: 27017
         targetPort: 27017
     internalTrafficPolicy: Cluster
     type: ClusterIP
     ipFamilyPolicy: SingleStack
     sessionAffinity: None
     selector:
       app: icp-mongodb
       release: mongodb
     status:
     loadBalancer: {}
    


     mongo-service2.yaml

     kind: Service
     apiVersion: v1
     metadata:
      name: icp-mongodb
      labels:
        app.kubernetes.io/component: database
        app.kubernetes.io/instance: icp-mongodb
        app.kubernetes.io/managed-by: operator
        app.kubernetes.io/name: icp-mongodb
        app.kubernetes.io/part-of: common-services-cloud-pak
        app.kubernetes.io/version: 4.0.12-build.3
        release: mongodb
     spec:
      clusterIP: None
      publishNotReadyAddresses: true
      ipFamilies:
        - IPv4
      ports:
        - name: peer
          protocol: TCP
          port: 27017
          targetPort: 27017
      internalTrafficPolicy: Cluster
      clusterIPs:
        - None
      type: ClusterIP
      ipFamilyPolicy: SingleStack
      sessionAffinity: None
      selector:
        app: icp-mongodb
        release: mongodb

  1. Wait for the mongodb-backup job to complete.

    oc get pods -n <source namespace> -w | grep mongodb-backup
    
  2. If using a Power (ppc64le) or Z (s390x) cluster, scale the mongo operator back up in the source namespace.

    oc scale deploy -n <source namespace> ibm-mongodb-operator --replicas=1
    
  3. Get the volume name used in the backup.

    oc get pvc cs-mongodump -n <source namespace> -o=jsonpath='{.spec.volumeName}'
    
  4. Patch the volume to prepare it for transfer to the target namespace. The second and third commands are alternative ways of clearing the claim reference; the JSON-patch form fails if claimRef was already removed by the merge patch.

    oc patch pv <volume name> -p '{"spec": { "persistentVolumeReclaimPolicy" : "Retain" }}'
    oc patch pv <volume name> --type=merge -p '{"spec": {"claimRef":null}}'
    oc patch pv <volume name> --type json -p '[{ "op": "remove", "path": "/spec/claimRef" }]'
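
These patches can be wrapped in a small helper when more than one volume needs releasing. A sketch (`release_pv` is a made-up name; it applies the Retain patch and the merge-style claimRef clear shown above):

```shell
# Mark the PV as Retain (data survives PVC deletion), then clear its
# claimRef so it can bind to a new claim in the target namespace.
release_pv() {
  local pv="$1"
  oc patch pv "$pv" -p '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
  oc patch pv "$pv" --type=merge -p '{"spec": {"claimRef": null}}'
}
```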
    
  5. Delete the cs-mongodump PVC in the source namespace.

    oc delete pvc cs-mongodump -n <source namespace> --ignore-not-found --timeout=10s
    

     If the command times out, patch the PVC to remove its finalizers.

     oc patch pvc cs-mongodump -n <source namespace> --type="json" -p '[{"op": "remove", "path":"/metadata/finalizers"}]'
  1. If not running on a ROKS cluster, get the storage class name from the source namespace mongo PVC again.

    oc get pvc mongodbdir-icp-mongodb-0 -n <source namespace> -o=jsonpath='{.spec.storageClassName}'
    
  2. Create the cs-mongodump PVC in the target namespace. Edit the namespace, storageClassName, and volumeName.

    cs-mongodump-pvc.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: cs-mongodump
     namespace: <target namespace>
    spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 20Gi
     storageClassName: <storage class from step 28>
     volumeMode: Filesystem
     volumeName: <volume name from step 25>
    


  1. Wait for the cs-mongodump PVC to bind in the target namespace.

    oc get pvc cs-mongodump -n <target namespace> --no-headers -w | awk '{print $2}'
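
The `awk '{print $2}'` in the watch command selects the second whitespace-separated column of each `oc get pvc` output row, which is the PVC STATUS; the wait is over once it reports `Bound`. An offline illustration with a sample row (the values are examples):

```shell
# The second field of an `oc get pvc` row is the STATUS column.
sample='cs-mongodump   Bound   pvc-0c1e   20Gi   RWO   rook-ceph-block   1m'
echo "$sample" | awk '{print $2}'   # prints: Bound
```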
    
  2. If using an amd64 cluster, apply the mongodb-restore job to the target namespace. Replace $ibm_mongodb_image with the value from step 17.

    mongodb-restore-job-amd64.yaml

    apiVersion: batch/v1
    kind: Job
    metadata:
     name: mongodb-restore
    spec:
     parallelism: 1
     completions: 1
     backoffLimit: 20
     template:
       spec:
         containers:
         - name: icp-mongodb-restore
           image: $ibm_mongodb_image
           command: ["bash", "-c", "cat /cred/mongo-certs/tls.crt /cred/mongo-certs/tls.key > /work-dir/mongo.pem; cat /cred/cluster-ca/tls.crt /cred/cluster-ca/tls.key > /work-dir/ca.pem; mongorestore --host rs0/icp-mongodb:27017 --username \$ADMIN_USER --password \$ADMIN_PASSWORD --authenticationDatabase admin --ssl --sslCAFile /work-dir/ca.pem --sslPEMKeyFile /work-dir/mongo.pem /dump/dump"]
           resources:
             limits:
               cpu: 500m
               memory: 500Mi
             requests:
               cpu: 100m
               memory: 128Mi
           volumeMounts:
           - mountPath: "/dump"
             name: mongodump
           - mountPath: "/work-dir"
             name: tmp-mongodb
           - mountPath: "/cred/mongo-certs"
             name: icp-mongodb-client-cert
           - mountPath: "/cred/cluster-ca"
             name: cluster-ca-cert
           env:
             - name: ADMIN_USER
               valueFrom:
                 secretKeyRef:
                   name: icp-mongodb-admin
                   key: user
             - name: ADMIN_PASSWORD
               valueFrom:
                 secretKeyRef:
                   name: icp-mongodb-admin
                   key: password
         volumes:
         - name: mongodump
           persistentVolumeClaim:
             claimName: cs-mongodump
         - name: tmp-mongodb
           emptyDir: {}
         - name: icp-mongodb-client-cert
           secret:
             secretName: icp-mongodb-client-cert
         - name: cluster-ca-cert
           secret:
             secretName: mongodb-root-ca-cert
         restartPolicy: Never
    


  1. If using a Power (ppc64le) or Z (s390x) cluster, apply this mongodb-restore job to the target namespace. Replace $ibm_mongodb_image with the value from step 17.

    mongodb-restore-job-z.yaml

    apiVersion: batch/v1
    kind: Job
    metadata:
     name: mongodb-restore
    spec:
     parallelism: 1
     completions: 1
     backoffLimit: 20
     template:
       spec:
         containers:
         - name: icp-mongodb-restore
           image: $ibm_mongodb_image
           command: ["bash", "-c", "cat /cred/mongo-certs/tls.crt /cred/mongo-certs/tls.key > /work-dir/mongo.pem; cat /cred/cluster-ca/tls.crt /cred/cluster-ca/tls.key > /work-dir/ca.pem; mongorestore --host rs0/icp-mongodb:27017 --username \$ADMIN_USER --password \$ADMIN_PASSWORD --authenticationDatabase admin /dump/dump"]
           resources:
             limits:
               cpu: 500m
               memory: 500Mi
             requests:
               cpu: 100m
               memory: 128Mi
           volumeMounts:
           - mountPath: "/dump"
             name: mongodump
           - mountPath: "/work-dir"
             name: tmp-mongodb
           - mountPath: "/cred/mongo-certs"
             name: icp-mongodb-client-cert
           - mountPath: "/cred/cluster-ca"
             name: cluster-ca-cert
           env:
             - name: ADMIN_USER
               valueFrom:
                 secretKeyRef:
                   name: icp-mongodb-admin
                   key: user
             - name: ADMIN_PASSWORD
               valueFrom:
                 secretKeyRef:
                   name: icp-mongodb-admin
                   key: password
         volumes:
         - name: mongodump
           persistentVolumeClaim:
             claimName: cs-mongodump
         - name: tmp-mongodb
           emptyDir: {}
         - name: icp-mongodb-client-cert
           secret:
             secretName: icp-mongodb-client-cert
         - name: cluster-ca-cert
           secret:
             secretName: mongodb-root-ca-cert
         restartPolicy: Never
    


  1. Wait for the mongodb-restore job to complete.

    oc get pods -n <target namespace> -w | grep mongodb-restore
    
  2. Clean up mongo in the target namespace.

    oc delete statefulset icp-mongodb --ignore-not-found -n <target namespace>
    oc delete service icp-mongodb --ignore-not-found -n <target namespace>
    oc delete issuer god-issuer --ignore-not-found -n <target namespace>
    oc delete cm ibm-cpp-config --ignore-not-found -n <target namespace>
    oc delete certificate icp-mongodb-client-cert --ignore-not-found -n <target namespace>
    oc delete cm icp-mongodb --ignore-not-found -n <target namespace>
    oc delete cm icp-mongodb-init --ignore-not-found -n <target namespace>
    oc delete cm icp-mongodb-install --ignore-not-found -n <target namespace>
    oc delete secret icp-mongodb-keyfile --ignore-not-found -n <target namespace>
    oc delete secret icp-mongodb-metrics --ignore-not-found -n <target namespace>
    oc delete sa ibm-mongodb-operand --ignore-not-found -n <target namespace>
    oc delete service mongodb --ignore-not-found -n <target namespace>
    oc delete certificate mongodb-root-ca-cert --ignore-not-found -n <target namespace>
    oc delete issuer mongodb-root-ca-issuer --ignore-not-found -n <target namespace>
    oc delete cm namespace-scope --ignore-not-found -n <target namespace>
    
  3. Delete the mongodump PVC and PV.

    oc patch pv <volume name> -p '{"spec": { "persistentVolumeReclaimPolicy" : "Delete" }}'
    oc delete pvc cs-mongodump -n <target namespace> --ignore-not-found --timeout=10s
    

    If deleting the PVC times out, remove the finalizer.

    oc patch pvc cs-mongodump -n <target namespace> --type="json" -p '[{"op": "remove", "path":"/metadata/finalizers"}]'
    

    Wait for the PVC to be deleted then delete the volume.

    oc delete pv <volume name> --ignore-not-found --timeout=10s
    

    If deleting the PV times out, remove the finalizer.

    oc patch pv <volume name> --type="json" -p '[{"op": "remove", "path":"/metadata/finalizers"}]'
    
  4. Copy the secret platform-auth-idp-credentials from the source namespace to the target namespace.

    oc get secret platform-auth-idp-credentials -n <source namespace> -o yaml > auth-idp-secret.yaml
    

    Change the namespace in the secret file to the target namespace and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

    Apply the edited secret file to the target namespace.

    oc apply -f auth-idp-secret.yaml -n <target namespace>
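
The manual edits in this and the following copy steps (remove creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels; change the namespace) can be scripted. A hedged sketch (`clean_manifest` is a made-up helper; it assumes the two-space indentation that `oc get -o yaml` produces, so review its output before applying):

```shell
# Drop cluster-specific metadata fields and rewrite metadata.namespace.
# Usage: clean_manifest <target-namespace> < exported.yaml > cleaned.yaml
clean_manifest() {
  awk -v ns="$1" '
    /^  (creationTimestamp|resourceVersion|uid):/ { next }
    /^  (ownerReferences|managedFields|labels):/  { skip = 1; next }
    skip {
      # drop deeper-indented lines and list items inside a skipped block
      if ($0 ~ /^(   |  -)/) next
      skip = 0
    }
    /^  namespace:/ { print "  namespace: " ns; next }
    { print }
  '
}
```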
    
  5. Copy the secret platform-auth-ldaps-ca-cert from the source namespace to the target namespace.

    oc get secret platform-auth-ldaps-ca-cert -n <source namespace> -o yaml > auth-ldaps-secret.yaml
    

    Change the namespace in the secret file to the target namespace and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

    Apply the edited secret file to the target namespace.

    oc apply -f auth-ldaps-secret.yaml -n <target namespace>
    
  6. Copy the configmap ibm-cpp-config from the source namespace to the target namespace.

    oc get configmap ibm-cpp-config -n <source namespace> -o yaml > ibm-cpp-config-cm.yaml
    

     Change the namespace in the configmap file to the target namespace and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

     Apply the edited configmap file to the target namespace.

    oc apply -f ibm-cpp-config-cm.yaml -n <target namespace>
    
  7. Copy the configmap common-web-ui-config from the source namespace to the target namespace.

    oc get configmap common-web-ui-config -n <source namespace> -o yaml > common-web-ui-config-cm.yaml
    

     Change the namespace in the configmap file to the target namespace and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

     Apply the edited configmap file to the target namespace.

    oc apply -f common-web-ui-config-cm.yaml -n <target namespace>
    
  8. Copy the configmap platform-auth-idp from the source namespace to the target namespace.

    oc get configmap platform-auth-idp -n <source namespace> -o yaml > platform-auth-idp-cm.yaml
    

     Change the namespace in the configmap file to the target namespace and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

     Apply the edited configmap file to the target namespace.

    oc apply -f platform-auth-idp-cm.yaml -n <target namespace>
    
  9. Copy the commonservice CR common-service from the source namespace to the target namespace.

    oc get commonservice common-service -n <source namespace> -o yaml > commonservice-cr.yaml
    

     Change the namespace in the CR file to the target namespace, update the resourceName to preload-common-service-from-<target namespace>, and edit to remove the creationTimestamp, resourceVersion, uid, ownerReferences, managedFields, and labels fields.

     Apply the edited CR file to the target namespace.

    oc apply -f commonservice-cr.yaml -n <target namespace>
    
  10. Preload is complete. The target namespace is ready for a new install of foundational services using the mongo data from the source namespace.