Migrating IBM Cloud Private Managed services to Infrastructure Automation

This topic describes the process of migrating Managed services from IBM Cloud Private to Infrastructure Automation.

Most of the Managed services pods are stateless, except the MongoDB pod, which stores its state in external persistent volumes. You must migrate the following resources:

  • Logs

    In IBM Cloud Private, the logs from the pods are stored in persistent volumes. In Infrastructure Automation, the pods do not use persistent volumes to store logs; instead, Infrastructure Automation uses system.out as the default logging mechanism. IBM Cloud Private customers are therefore recommended to use an external log aggregator, such as Kibana.
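
    If you want to keep the historical logs before you migrate, you can capture them from the running pods first. The following is a minimal sketch that assumes the Managed services pods carry the release=cam label in the services namespace; adjust the namespace and label selector to your environment:

    # Save the current logs of each Managed services pod to a local file
    mkdir -p /tmp/icp-logs
    for pod in $(kubectl -n services get pods -l release=cam -o jsonpath='{.items[*].metadata.name}'); do
      kubectl -n services logs "$pod" > "/tmp/icp-logs/$pod.log"
    done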

  • Terraform binaries

    In IBM Cloud Private, the custom terraform binaries are stored in persistent volumes, whereas in Infrastructure Automation they are stored in MongoDB. If you have custom terraform binaries, or want to use a binary version that is no longer shipped as part of the Infrastructure Automation release, you can upload it by using the Terraform file APIs.
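
    For example, an upload with the Terraform file APIs might look like the following sketch. The host, endpoint path, and token variable are placeholders, not confirmed API details; check the Managed services API documentation for the exact request format:

    # Hypothetical example: upload a custom terraform binary through the Terraform file APIs.
    # CAM_HOST, ACCESS_TOKEN, and the endpoint path are placeholders; replace them with the
    # values and path that are documented for your release.
    curl -k -X POST \
      -H "Authorization: Bearer $ACCESS_TOKEN" \
      -F "file=@/tmp/terraform_0.12.31_linux_amd64.zip" \
      "https://$CAM_HOST/cam/api/v1/terraformfiles"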

  • MongoDB

    The MongoDB pod contains the IBM Cloud Private state, which needs to be backed up and restored in Infrastructure Automation. For more information, see Backing up MongoDB in IBM Cloud Private and Restoring MongoDB in Infrastructure Automation.

Backing up MongoDB in IBM Cloud Private

  1. Stop the running pods by setting their replica count to 0 and ensure that nothing is written to the MongoDB pod, or scale down the MongoDB pod to 0.
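
    For example, assuming that the Managed services deployments carry the release=cam label in the services namespace, you can list them and scale them down as follows; adjust the selector to your installation:

    # List the Managed services deployments
    kubectl -n services get deployments -l release=cam

    # Scale each deployment, except cam-mongo, down to 0
    kubectl -n services scale deployment <deployment-name> --replicas=0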

  2. Start the MongoDB pod by setting its replica count to 1.
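
    For example, assuming that the MongoDB deployment is named cam-mongo:

    kubectl -n services scale deployment cam-mongo --replicas=1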

  3. Switch to the service namespace.

    kubectl config set-context $(kubectl config current-context) --namespace=services
    
  4. Create a new MongoDB pod by using the mongopod.yaml file, and ensure that you use the same labels as the other pods.

    kind: Pod
    apiVersion: v1
    metadata:
      name: cam-mongo-backup
      labels:
        app.kubernetes.io/instance: cam
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: cam-manageservice
        certmanager.k8s.io/time-restarted: 2022-2-23.1227
        helm.sh/chart: manageservice-4.2.0
        name: cam-mongo-backup
        release: cam
      annotations:
        openshift.io/scc: anyuid
    spec:
      containers:
        - name: backup-mongo
          image: mongo:4.0.16
    

    Note: Get the mongo version of the targeted cluster by executing a shell inside the mongo pod and running mongo --version. Use the same mongo image tag when you create the backup mongo pod.
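
    For example, assuming that the MongoDB pod carries the name=cam-mongo label:

    # Print the mongo version inside the running MongoDB pod
    kubectl exec -it $(kubectl get pods -l name=cam-mongo -o jsonpath='{.items[0].metadata.name}') -- mongo --version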

    Run kubectl apply -f mongopod.yaml to create the cam-mongo-backup pod and wait for the pod to start.

    If you encounter an error message like no matching repositories in ClusterImagePolicy when you run kubectl apply, edit the cluster image policy and add an image policy for the MongoDB image as follows:

    kubectl edit ClusterImagePolicy image-policy

    - name: docker.io/mongo:*
    
  5. Get the IP address of the MongoDB pod.

    MONGO_IP=$(kubectl get -n services svc/cam-mongo --no-headers | awk '{print $3}')
    
  6. Get the password to connect to the MongoDB pod.

    MONGO_PASSWORD=$(kubectl get -n services secret cam-secure-values-secret -o yaml | grep mongoDbPassword: | awk '{print $2}' | base64 --decode)
    
  7. Get the username and port by describing the cam-mongo pod.

    kubectl describe pod cam-mongo
    
  8. Execute mongodump from the cam-mongo-backup pod to create a gzipped archive. Replace MONGODB_USERNAME with the username that you obtained in the previous step.

    kubectl exec -it cam-mongo-backup -- bash -c "mongodump --ssl --sslAllowInvalidCertificates --uri mongodb://MONGODB_USERNAME:$MONGO_PASSWORD@$MONGO_IP:27017/cam?ssl=true --archive=/tmp/mongo_backup.gz --gzip"
    

    Note: Verify that the backup archive is present at the /tmp location in the pod.
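
    For example:

    # Confirm that the backup archive exists and has a nonzero size
    kubectl exec cam-mongo-backup -- ls -lh /tmp/mongo_backup.gz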

  9. Run the following command to copy the backup archive to your local setup, for example, to the /tmp/dbbackup directory.

    kubectl cp cam-mongo-backup:/tmp/mongo_backup.gz /tmp/dbbackup/mongo_backup.gz
    
  10. Delete the cam-mongo-backup pod.

    kubectl delete pod cam-mongo-backup
    
  11. Restore the replica count for the pods that you scaled down in a previous step.
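
    For example, for each deployment that you scaled down in step 1:

    kubectl -n services scale deployment <deployment-name> --replicas=1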

Restoring MongoDB in Infrastructure Automation

The MongoDB pod contains the Managed services state. To restore this state, complete the following steps.

  1. Stop the running pods by setting their replica count to 0 and ensure that nothing is written to the MongoDB pod, or scale down the MongoDB pod to 0.

  2. Start the MongoDB pod by setting its replica count to 1.

  3. Switch to the cp4aiops namespace.

    oc project cp4aiops
    
  4. Create a new MongoDB pod by using the mongopod.yaml file, and ensure that you use the same labels as the other pods:

    kind: Pod
    apiVersion: v1
    metadata:
      name: cam-mongo-backup
      labels:
        app.kubernetes.io/instance: cam
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: cam-manageservice
        certmanager.k8s.io/time-restarted: 2022-2-23.1227
        helm.sh/chart: manageservice-4.2.0
        name: cam-mongo-backup
        release: cam
      annotations:
        openshift.io/scc: anyuid
    spec:
      containers:
        - name: backup-mongo
          image: mongo:4.0.16
    

    Note: Get the mongo version of the targeted cluster by executing a shell inside the mongo pod and running mongo --version. Use the same mongo image tag when you create the backup mongo pod.

    Run oc apply -f mongopod.yaml to create the cam-mongo-backup pod and wait for the pod to start.

  5. Copy the backup archive that you want to restore into the cam-mongo-backup container.

    oc rsync /tmp/dbbackup/ cam-mongo-backup:/tmp/
    
  6. Get the IP address of the MongoDB pod.

    MONGO_IP=$(kubectl get -n cp4aiops svc/cam-mongo --no-headers | awk '{print $3}')
    
  7. Get the password to connect to the MongoDB pod.

    MONGO_PASSWORD=$(kubectl get -n cp4aiops secret cam-secure-values-secret -o yaml | grep mongoDbPassword: | awk '{print $2}' | base64 --decode)
    
  8. Get the username and port by describing the cam-mongo pod.

    oc describe pod cam-mongo
    
  9. Restore the data by running the following command:

    oc exec -it cam-mongo-backup -- bash -c "mongorestore --ssl --sslAllowInvalidCertificates --uri mongodb://camuser:$MONGO_PASSWORD@$MONGO_IP:27017/cam?ssl=true --archive=/tmp/mongo_backup.gz --gzip --drop"
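
    Optionally, verify the restore by listing the collections in the cam database. The following sketch reuses the variables from the earlier steps and assumes the camuser username:

    oc exec cam-mongo-backup -- bash -c "mongo 'mongodb://camuser:$MONGO_PASSWORD@$MONGO_IP:27017/cam?ssl=true' --ssl --sslAllowInvalidCertificates --eval 'db.getCollectionNames()'"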
    
  10. Delete the cam-mongo-backup pod.

    oc delete pod cam-mongo-backup
    
  11. Restore the replica count for the pods that you scaled down in a previous step.

  12. For the services that are not globally accessible, create the namespaces if they do not already exist on the cluster where you are restoring data, so that the services can be accessed.
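
    For example:

    # Create each missing namespace on the cluster where you restored the data
    oc create namespace <namespace-name>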