Migrating from an Infrastructure Management appliance to a Kubernetes (podified) installation

Follow these steps to migrate from an appliance installation to a Kubernetes (podified) installation of Infrastructure Management.

Before you begin

You must have an installed Infrastructure Management appliance and a containerized (podified) installation of Infrastructure Management.

Collect data from the Infrastructure Management appliance

  1. Take a backup of the database from the appliance.

    pg_dump -Fc -d <database_name> > /root/pg_dump
    

    Where <database_name> is vmdb_production by default. Find your database name by running appliance_console on your appliance.

  2. Export the encryption key and Base64 encode it for the Kubernetes Secret.

    vmdb && rails r "puts Base64.encode64(ManageIQ::Password.key.to_s)"
    
  3. Get the region number.

    vmdb && rails r "puts MiqRegion.my_region.region"
    
  4. Get the GUID of the server that you want to run as.

    vmdb && cat GUID
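
    If you find it convenient, you can collect all four values in one pass so that they are easy to copy off the appliance later. The following is a minimal sketch that uses the same commands as the preceding steps and an assumed /root/migration working directory:

    # Run on the appliance; /root/migration is an assumed working directory.
    mkdir -p /root/migration
    pg_dump -Fc -d vmdb_production > /root/migration/pg_dump
    cd /var/www/miq/vmdb
    rails r "puts Base64.encode64(ManageIQ::Password.key.to_s)" > /root/migration/encryption_key
    rails r "puts MiqRegion.my_region.region" > /root/migration/region
    cat GUID > /root/migration/guid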
    

Restore the backup into the Kubernetes environment

Important:

Two options are available. You can restore the backup of the appliance into a new installation, or you can restore it into an existing installation.

Restore the backup into a new Kubernetes based installation

For a new installation, you must configure OIDC for integration with IBM Cloud Paks® before you create the YAML file and continue with this procedure. Follow the steps in OIDC Configuration Steps for Infrastructure Management.

  1. Create a YAML file to define the Custom Resource (CR). At a minimum, you need the following:

    apiVersion: infra.management.ibm.com/v1alpha1
    kind: IMInstall
    metadata:
      name: im-iminstall
    spec:
      applicationDomain: inframgmtinstall.apps.mycluster.com
      databaseRegion: <region number from the appliance>
      serverGuid: <GUID value from appliance>
    

    Example CR

    apiVersion: infra.management.ibm.com/v1alpha1
    kind: IMInstall
    metadata:
      labels:
        app.kubernetes.io/instance: ibm-infra-management-install-operator
        app.kubernetes.io/managed-by: ibm-infra-management-install-operator
        app.kubernetes.io/name: ibm-infra-management-install-operator
      name: im-iminstall
      namespace: cp4aiops
    spec:
      applicationDomain: inframgmtinstall.apps.mycluster.com
      databaseRegion: '99'
      serverGuid: <GUID value from appliance above>
      imagePullSecret: my-pull-secret
      enableSSO: true
      initialAdminGroupName: inframgmtusers
      license:
        accept: true
    
  2. Create the CR in your namespace. The operator creates several more resources and starts deploying the app.

    oc create -f <CR file name>
    
  3. Edit the secret and insert the encryption key from the appliance. Replace the "encryption-key" value with the Base64-encoded value that you exported from the appliance.

    oc edit secret app-secrets -n cp4aiops
    
  4. Temporarily hijack the orchestrator by patching the deployment to override its command:

    oc patch deployment orchestrator -p '{"spec":{"template":{"spec":{"containers":[{"name":"orchestrator","command":["sleep", "1d"]}]}}}}' -n cp4aiops
    

    Note:

    The default timeout for oc debug is 1 minute of inactivity. Increase the idle timeout values for the client and server; otherwise, you might have difficulty completing the remaining steps without the session timing out and forcing a restart of the entire process. For more information, see How to change idle-timeout for oc commands like oc rsh, oc logs, and oc port-forward in OCP4.

  5. Shell into the orchestrator container:

    oc rsh deploy/orchestrator
    

    Clear the PostgreSQL database:

    cd /var/www/miq/vmdb
    source ./container_env
    DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
    
  6. Copy your appliance database backup file into the PostgreSQL pod. You must run the command from a machine with access to the cluster (where you run oc), not from within the pod.

    oc cp <your_backup_file> <postgresql pod>:/var/lib/pgsql
    
  7. Use oc rsh to open a shell in the database pod, and restore the database backup.

    cd /var/lib/pgsql
    # --- you copied the backup here in step 6. ---
    pg_restore -v -d vmdb_production <your_backup_file>
    rm -f <your_backup_file>
    exit
    
  8. Back in the orchestrator shell from step 5, migrate the database:

    rake db:migrate
    exit
    
  9. Delete the orchestrator deployment to remove the command override that was previously added:

    oc delete deployment orchestrator -n cp4aiops
    

The orchestrator starts deploying the rest of the pods that are required to run Infrastructure Management.
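
You can watch the remaining pods come up with a command such as the following; this assumes the cp4aiops namespace used in this procedure.

    # Watch pod status in the Infrastructure Management namespace; press Ctrl+C to stop.
    oc get pods -n cp4aiops -w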

Restore the backup into an existing Kubernetes based installation

  1. For an existing installation, find the existing CR by running the following command:

    oc get IMInstall -n cp4aiops
    
  2. Edit the IMInstall resource from the CLI by using the name found in the previous step, for example, oc edit IMInstall im-iminstall -n cp4aiops, and add the following lines under spec:

      databaseRegion: <region number from the appliance>
      serverGuid: <GUID value from appliance>
    

    OR

    Log in to Red Hat OpenShift Container Platform for your cluster. Click Home > Search. In the Resources list menu, find IMInstall. Click the IMInstall resource; then, click the YAML tab. Add the following lines under spec:

      databaseRegion: <region number from the appliance>
      serverGuid: <GUID value from appliance>
    

    Click Save.

  3. Edit the secret and insert the encryption key from the appliance. Replace the "encryption-key" value with the Base64-encoded value that you exported from the appliance.

    oc edit secret app-secrets -n cp4aiops
    
  4. Temporarily hijack the orchestrator by patching the deployment to override its command:

    oc patch deployment orchestrator -p '{"spec":{"template":{"spec":{"containers":[{"name":"orchestrator","command":["sleep", "1d"]}]}}}}' -n cp4aiops
    

    Note: The default timeout for oc debug is 1 minute of inactivity. Increase the idle timeout values for the client and server; otherwise, you might have difficulty completing the remaining steps without the session timing out and forcing a restart of the entire process. For more information, see How to change idle-timeout for oc commands like oc rsh, oc logs, and oc port-forward in OCP4.

  5. Shell into the orchestrator container:

    oc rsh deploy/orchestrator
    

    Clear the PostgreSQL database:

    cd /var/www/miq/vmdb
    source ./container_env
    DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
    
  6. Copy your appliance database backup file into the PostgreSQL pod. You must run the command from a machine with access to the cluster (where you run oc), not from within the pod.

    oc cp <your_backup_file> <postgresql pod>:/var/lib/pgsql
    
  7. Use oc rsh to open a shell in the database pod, and restore the database backup.

    cd /var/lib/pgsql
    # --- you copied the backup here in step 6. ---
    pg_restore -v -d vmdb_production <your_backup_file>
    rm -f <your_backup_file>
    exit
    
  8. Back in the orchestrator shell from step 5, migrate the database:

    rake db:migrate
    exit
    
  9. Delete the orchestrator deployment to remove the command override that was previously added:

    oc delete deployment orchestrator -n cp4aiops
    

Important:

If your Infrastructure Management instance on your cluster was previously configured for OIDC/SSO, you need to complete these additional steps.

  1. Log in to IBM Cloud Paks®, and from the menu, click Automate infrastructure > Infrastructure Management. SSO to Infrastructure Management is expected to fail.

  2. On the Infrastructure Management login screen, log in with the default credentials: user name admin and password smartvm.

  3. In Infrastructure Management, click Settings > Application Settings. Expand Access Control and click Groups.

  4. Click Configuration > Add a new Group, and create a group with the following fields:

    • Description: <LDAP_group_name>
    • Role: EvmRole-super-administrator
    • Project/Tenant: <Your_tenant>

    Where <LDAP_group_name> is the value for initialAdminGroupName: found in the installation CR.

  5. When finished, click Add.

  6. Expand Settings and click Zones > Server: EVM. Click the Authentication tab, and ensure that the following settings are displayed:

    • Mode: External (httpd)
    • Enable Single Sign-On: checked
    • Provider Type: Enable OpenID-Connect
    • Get User Groups from External Authentication (httpd): checked

  7. Log out of Infrastructure Management and IBM Cloud Paks®. Log in to IBM Cloud Paks® by using Enterprise LDAP authentication. From the menu, click Automate infrastructure > Infrastructure Management. SSO to Infrastructure Management should now succeed.

Customizing the installation

Configuring the application domain name

You must set the applicationDomain in the CR if you are running in a production cluster. For a Red Hat CodeReady Containers cluster, it might be something like miqproject.apps-crc.testing.
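
If you are not sure of the default wildcard apps domain for your cluster, the following is one way to look it up on OpenShift 4; the host name that you choose for applicationDomain (for example, inframgmtinstall.<apps domain>) typically falls under this domain when the route is exposed through the default ingress controller.

# Print the default wildcard apps domain for the cluster.
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'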

External PostgreSQL

Running with an external PostgreSQL server is an option; if you want the default internal PostgreSQL, you can skip this step. If you want to secure the connection, include the optional parameters sslmode=verify-full and rootcertificate when you create the secret (the last two lines in the following example). To complete this process, manually create the secret, substituting your values, before you create the CR.

oc create secret generic postgresql-secrets \
  --from-literal=dbname=vmdb_production \
  --from-literal=hostname=<your hostname> \
  --from-literal=port=5432 \
  --from-literal=password=<your password> \
  --from-literal=username=<your username> \
  --from-literal=sslmode=verify-full \
  --from-file=rootcertificate=path/to/optional/certificate.pem

Note: If you want, you can customize the secret name by setting databaseSecret in the CR.
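
For example, if you created the secret with a different name, reference it in the CR in the same way as the other secret name fields shown in the following sections:

databaseSecret: <name of your secret>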

Using TLS to encrypt connections between pods inside the cluster

Create a secret that contains all of the certificates

Certificates must be signed by a CA, and that CA certificate must be uploaded as root_crt so that it can be used to verify connection validity. If the secret is named internal-certificates-secret, no changes are needed in the CR; if you choose a different name, set that name in the internalCertificatesSecret field of the CR.

    oc create secret generic internal-certificates-secret \
      --from-file=root_crt=./certs/root.crt \
      --from-file=httpd_crt=./certs/httpd.crt \
      --from-file=httpd_key=./certs/httpd.key \
      --from-file=kafka_crt=./certs/kafka.crt \
      --from-file=kafka_key=./certs/kafka.key \
      --from-file=memcached_crt=./certs/memcached.crt \
      --from-file=memcached_key=./certs/memcached.key \
      --from-file=postgresql_crt=./certs/postgresql.crt \
      --from-file=postgresql_key=./certs/postgresql.key \
      --from-file=api_crt=./certs/api.crt \
      --from-file=api_key=./certs/api.key \
      --from-file=remote-console_crt=./certs/remote-console.crt \
      --from-file=remote-console_key=./certs/remote-console.key \
      --from-file=ui_crt=./certs/ui.crt \
      --from-file=ui_key=./certs/ui.key
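
If you chose a secret name other than internal-certificates-secret, reference it in the CR:

internalCertificatesSecret: <name of your secret>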

Generating certificates

A certificate authority (CA) is needed, along with a certificate for each service. Each certificate must be valid for the internal Kubernetes service name, such as httpd or postgres. The services that back the route (UI, API) require the certificate to include a SAN with the application domain name. For example, the certificate for the UI must be valid for both the hostname ui and your_application.apps.example.com.
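
For example, the following is a minimal sketch that uses openssl to generate a self-signed CA and a certificate for the ui service; the file names match the secret-creation command above, while the subject names and validity periods are assumptions that you should adapt. Repeat the last three commands for each remaining service.

    # Create a CA (assumed subject and validity period).
    mkdir -p certs
    openssl genrsa -out certs/root.key 4096
    openssl req -x509 -new -key certs/root.key -sha256 -days 365 \
      -subj "/CN=internal-ca" -out certs/root.crt

    # Create a certificate for the ui service that is also valid for the route hostname.
    openssl genrsa -out certs/ui.key 2048
    openssl req -new -key certs/ui.key -subj "/CN=ui" -out certs/ui.csr
    openssl x509 -req -in certs/ui.csr -CA certs/root.crt -CAkey certs/root.key \
      -CAcreateserial -sha256 -days 365 \
      -extfile <(printf "subjectAltName=DNS:ui,DNS:your_application.apps.example.com") \
      -out certs/ui.crt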

Configuring external messaging

It is possible to run with an external Kafka messaging server. To do so, create the required secret with the correct parameters by using the following example and provide that secret name as kafkaSecret in the CR.

kubectl create secret generic kafka-secrets --from-literal=hostname=<your fqdn> --from-literal=username=<your username> --from-literal=password=<your password>

If you decide to use a secret name other than kafka-secrets, you need to specify that in the CR.

kafkaSecret: <name of your secret>

Creating a custom TLS secret

You can use a custom TLS certificate. To use a custom certificate, create the required secret with the correct parameters by using the following example and provide that secret name as tlsSecret in the CR.

oc create secret tls tls-secret --cert=tls.crt --key=tls.key

If you decide to use a secret name other than tls-secret, you need to specify that in the CR.

tlsSecret: <name of your tls secret>

Configuring an image pull secret

If authentication is required to pull the images, a secret that contains the credentials needs to be created. The following command is an example of creating the secret.

kubectl create secret docker-registry image-pull-secret --docker-username=<your registry username> --docker-password=<your registry password> --docker-server=<your registry server>

If you decide to use a secret name other than image-pull-secret, you need to specify that in the CR.

imagePullSecret: <name of your pull secret>

Configuring OpenID-Connect Authentication

To run with OpenID-Connect authentication, customize the following example as required to fit your environment. This example was tested with Keycloak version 11.0.

Create a secret that contains the OpenID-Connect Client ID and Client Secret. The values for CLIENT_ID and CLIENT_SECRET come from your authentication provider's client definition.

oc create secret generic <name of your kubernetes secret> --from-literal=CLIENT_ID=<your auth provider client ID> --from-literal=CLIENT_SECRET=<your auth provider client secret>

Modify the CR with the following values:

httpdAuthenticationType: openid-connect
oidcProviderURL: https://<your keycloak FQDN>/auth/realms/<your realm>/.well-known/openid-configuration
oidcClientSecret: <name of your kubernetes secret>

Configuring OpenID-Connect with a CA Certificate

To configure OpenID-Connect with a CA certificate, follow these steps:

Acquire your CA certificate (Review the documentation for your certificate provider for instructions) and store it in a secret by using the following example.

oc create secret generic <name of your kubernetes OIDC CA cert> --from-file=<path to your OIDC CA cert file>

Modify the CR to identify the secret created.

oidcCaCertSecret: <name of your kubernetes OIDC CA cert>

Using your own images

If you built your own custom application images and want to deploy them, you can specify the image names in the CR by using the following example.

applicationDomain: miqproject.apps-crc.testing
orchestratorImage: docker.io/<your_username_or_organization>/<your_prefix>-orchestrator:latest
baseWorkerImage: docker.io/<your_username_or_organization>/<your_prefix>-base-worker:latest
uiWorkerImage: docker.io/<your_username_or_organization>/<your_prefix>-ui-worker:latest
webserverWorkerImage: docker.io/<your_username_or_organization>/<your_prefix>-webserver-worker:latest

If you built your own operator image, you need to edit the operator deployment to specify the image.
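
For example, the following is a minimal sketch, assuming that the operator deployment and its container are both named ibm-infra-management-install-operator; verify the actual names in your namespace before you run it.

# Hypothetical deployment and container names; check them with "oc get deployments" and adjust.
oc set image deployment/ibm-infra-management-install-operator \
  ibm-infra-management-install-operator=docker.io/<your_username_or_organization>/<your_prefix>-operator:latest \
  -n cp4aiops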