Migrating from an Infrastructure management appliance to a Kubernetes (podified) installation
Follow these steps to migrate from an appliance installation to a Kubernetes (podified) installation of Infrastructure management.
Before you begin
You must have an installed Infrastructure management appliance and a containerized (podified) installation of Infrastructure management.
Collect data from the Infrastructure management appliance
1. Take a backup of the database from the appliance.
pg_dump -Fc -d <database_name> > /root/pg_dump
Where <database_name> is vmdb_production by default. Find your database name by running appliance_console on your appliance. A filled-in example follows this list.
2. Export the encryption key and Base64 encode it for the Kubernetes Secret.
vmdb && rails r "puts Base64.encode64(ManageIQ::Password.v2_key.to_s)"
3. Get the region number.
vmdb && rails r "puts MiqRegion.my_region.region"
4. Get the GUID of the server that you want to run as.
vmdb && cat GUID
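For example, for the backup in step 1 with the default database name, the command run on the appliance would look like this:
pg_dump -Fc -d vmdb_production > /root/pg_dump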
Restore the backup into the Kubernetes environment
Important: Two options are available. You can restore the appliance backup into a new installation, or you can restore it into an existing installation.
- Restore the appliance backup into a new Kubernetes based installation
- Restore the appliance backup into an existing Kubernetes based installation
Restore the backup into a new Kubernetes based installation
For a new installation, you must configure OIDC for integration with IBM Cloud Pak for Multicloud Management 2.2 before you create the YAML file and continue with this procedure. Follow the steps in OIDC Configuration Steps for Infrastructure Management.
1. Create a YAML file to define the Custom Resource (CR). Minimally, you need the following:
apiVersion: infra.management.ibm.com/v1alpha1
kind: IMInstall
metadata:
  name: im-iminstall
spec:
  applicationDomain: inframgmtinstall.apps.mycluster.com
  databaseRegion: <region number from the appliance>
  serverGuid: <GUID value from appliance>
Example CR
apiVersion: infra.management.ibm.com/v1alpha1
kind: IMInstall
metadata:
  labels:
    app.kubernetes.io/instance: ibm-infra-management-install-operator
    app.kubernetes.io/managed-by: ibm-infra-management-install-operator
    app.kubernetes.io/name: ibm-infra-management-install-operator
  name: im-iminstall
  namespace: management-infrastructure-management
spec:
  applicationDomain: inframgmtinstall.apps.mycluster.com
  databaseRegion: 99
  serverGuid: <GUID value from appliance above>
  imagePullSecret: my-pull-secret
  httpdAuthenticationType: openid-connect
  httpdAuthConfig: imconnectionsecret
  enableSSO: true
  initialAdminGroupName: inframgmtusers
  license:
    accept: true
  orchestratorInitialDelay: '2400'
2. Create the CR in your namespace. The operator creates several more resources and starts deploying the app.
oc create -f <CR file name>
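For example, if you saved the CR to a file named im-iminstall.yaml (a hypothetical file name) and are working in the namespace from the example CR:
oc create -f im-iminstall.yaml -n management-infrastructure-management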
3. Edit the secret and insert the encryption key from the appliance. Replace the "encryption-key" value with the value exported from the appliance.
oc edit secret app-secrets -n management-infrastructure-management
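As a rough sketch, the relevant portion of the edited Secret might look like the following; your Secret contains additional keys, which you leave unchanged, and the value you paste in is the Base64-encoded key that you exported from the appliance:
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: management-infrastructure-management
data:
  encryption-key: <Base64-encoded key exported from the appliance>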
4. Find the orchestrator pod and start a debug session into it. Keep this running in the background.
oc get pods -o name | grep orchestrator
oc debug pod/orchestrator-123456abcd-789ef
Note: The default timeout for oc debug is 1 minute of inactivity. Increase the idle timeout values for the client and server. Otherwise, you might have difficulty completing the remaining steps without the debug pod timing out and forcing a restart of the entire process. For more information, see How to change idle-timeout for oc commands like oc rsh, oc logs, and oc port-forward in OCP4.
5. Enter the command oc edit deployment/orchestrator and add the following to the deployment:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: nope
Note: This prevents the orchestrator from starting.
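If you prefer a non-interactive alternative to the interactive edit, a single patch command such as the following should have the same effect (the namespace is taken from the example CR; adjust it for your installation):
oc patch deployment/orchestrator -n management-infrastructure-management -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"nope"}}}}}'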
6. Delete the old replica set; the new one will sit in a "Pending" state. For example, enter the following command:
oc delete replicaset.apps/orchestrator-123456abcd
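If you are not sure of the replica set name, you can list the orchestrator replica sets first, mirroring the pod lookup in step 4:
oc get replicasets -o name | grep orchestrator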
7. Back in the debug pod from step 4:
cd /var/www/miq/vmdb
source ./container_env
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
8. Copy your appliance database backup file into the postgresql pod. You must run the command from your cluster VM, not from within the pod.
kubectl cp <your_backup_file> <postgresql pod>:/var/lib/pgsql
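For example, assuming that you already transferred the backup file from the appliance to your working directory (for instance with scp) and that the postgresql pod has the hypothetical name shown here, the command might look like this:
kubectl cp ./pg_dump postgresql-5f8d9c7b6d-xk2lp:/var/lib/pgsql -n management-infrastructure-management
You can find the actual pod name with oc get pods | grep postgresql.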
9. oc rsh into the database pod and restore the database backup.
cd /var/lib/pgsql   # --- you copied the backup here in step 8 ---
pg_restore -d vmdb_production <your_backup_file>
rm -f <your_backup_file>
exit
10. Back in the debug pod from step 4:
rake db:migrate
exit
11. Delete the node selector that you added previously. Enter oc edit deployment/orchestrator and remove the following:
nodeSelector:
  kubernetes.io/hostname: nope
12. Delete the pending orchestrator replica set, for example:
oc delete replicaset.apps/orchestrator-98765cba
Complete. The orchestrator starts deploying the rest of the pods that are required to run Infrastructure management.
Restore the backup into an existing Kubernetes based installation
1. For an existing installation, find the existing CR by running the following command:
oc get IMInstall -n management-infrastructure-management
2. Edit the IMInstall resource that you found in the previous step. From the CLI, for example, enter oc edit IMInstall im-install and add the following lines under spec:
databaseRegion: <region number from the appliance>
serverGuid: <GUID value from appliance>
OR
Log in to Red Hat OpenShift Container Platform for your cluster. Click Home > Search. In the Resources list menu, find IMI IMInstall. Click the IMInstall resource; then, click the YAML tab. Add the following lines under spec:
databaseRegion: <region number from the appliance>
serverGuid: <GUID value from appliance>
Click Save.
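For illustration only, after the edit the spec section of the IMInstall resource might look similar to the following; the values shown are taken from the earlier example CR, and your other spec fields stay as they are:
spec:
  applicationDomain: inframgmtinstall.apps.mycluster.com
  databaseRegion: 99
  serverGuid: <GUID value from appliance>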
3. Edit the secret and insert the encryption key from the appliance. Replace the "encryption-key" value with the value exported from the appliance.
oc edit secret app-secrets -n management-infrastructure-management
4. Find the orchestrator pod and start a debug session into it. Keep this running in the background.
oc get pods -o name | grep orchestrator
oc debug pod/orchestrator-123456abcd-789ef
Note: The default timeout for oc debug is 1 minute of inactivity. Increase the idle timeout values for the client and server. Otherwise, you might have difficulty completing the remaining steps without the debug pod timing out and forcing a restart of the entire process. For more information, see How to change idle-timeout for oc commands like oc rsh, oc logs, and oc port-forward in OCP4.
5. Enter the command oc edit deployment/orchestrator and add the following to the deployment:
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: nope
Note: This prevents the orchestrator from starting.
6. Delete the old replica set; the new one will sit in a "Pending" state. For example, enter the following command:
oc delete replicaset.apps/orchestrator-123456abcd
7. Back in the debug pod from step 4:
cd /var/www/miq/vmdb
source ./container_env
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
8. Copy your appliance database backup file into the postgresql pod. You must run the command from your cluster VM, not from within the pod.
kubectl cp <your_backup_file> <postgresql pod>:/var/lib/pgsql
9. oc rsh into the database pod and restore the database backup.
cd /var/lib/pgsql   # --- you copied the backup here in step 8 ---
pg_restore -d vmdb_production <your_backup_file>
rm -f <your_backup_file>
exit
10. Back in the debug pod from step 4:
rake db:migrate
exit
11. Delete the node selector that you added previously. Enter oc edit deployment/orchestrator and remove the following:
nodeSelector:
  kubernetes.io/hostname: nope
12. Delete the pending orchestrator replica set, for example:
oc delete replicaset.apps/orchestrator-98765cba
Important: If your Infrastructure management instance on your cluster was previously configured for OIDC/SSO, you need to complete these additional steps.
1. Log in to IBM Cloud Pak for Multicloud Management, and from the menu, click Automate infrastructure > Infrastructure Management. SSO to Infrastructure management is expected to fail.
2. On the Infrastructure management login screen, log in with the default credentials, user admin and password smartvm.
3. In Infrastructure management, click Settings > Application Settings. Expand Access Control and click Groups.
4. Click Configuration > Add a new Group, and create a group with the following fields:
- Description: <LDAP_group_name>
- Role: EvmRole-super-administrator
- Project/Tenant: <Your_tenant>
Where <LDAP_group_name> is the value for initialAdminGroupName: found in the installation CR. When finished, click Add.
5. Expand Settings and click Zones > Server: EVM. Click the Authentication tab, and ensure that the following settings are displayed:
- Mode: External (httpd)
- Enable Single Sign-On: checked
- Provider Type: Enable OpenID-Connect
- Get User Groups from External Authentication (httpd): checked
6. Log out of Infrastructure management and IBM Cloud Pak for Multicloud Management. Log in to IBM Cloud Pak for Multicloud Management by using Enterprise LDAP authentication. From the menu, click Automate infrastructure > Infrastructure Management. SSO to Infrastructure management can now succeed.