Migrating the server to Kubernetes/OpenShift
You can migrate your IBM® UrbanCode® Deploy server from an on-premises production installation to a containerized instance that runs in a Kubernetes or OpenShift cluster.
Prerequisites:
- Open a pre-emptive guidance case with IBM UrbanCode Deploy support. This case serves as a forum where support can help you address any questions or concerns that arise during the cloning process.
- Stop the IBM UrbanCode Deploy production server. Plan downtime for the duration of the database and application data cloning process, and ensure that you have a running containerized IBM UrbanCode Deploy server instance.
Procedure:
You create a new environment by cloning the data from the current production environment and referring to the cloned data in the containerized IBM UrbanCode Deploy server installation. To migrate your IBM UrbanCode Deploy server from an on-premises site to a containerized instance, follow these steps:
- Clone the IBM UrbanCode Deploy database.
Contact your database administrators (DBAs) to clone the database. The DBA takes a backup of the current database to a new location that can be accessed from pods running in the Kubernetes cluster. If more assistance is needed than your DBA can provide, use the support case for help.
- Clone the application data folder.
Most application data folders have mount points to an external file system. Copy the contents of your application data folder to a new directory so that your cloned server does not use the same application data folder as your on-premises production server.
Note:
- If the directory is contained in a network file system like NFS, you should be able to refer to that network path when you create the Kubernetes persistent volume resource. To create an NFS persistent volume (PV) and persistent volume claim (PVC), see the following example YAML file:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ucd-appdata-vol
  labels:
    volume: ucd-appdata-vol
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.1.17
    path: /volume1/k8/ucd-appdata
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ucd-appdata-volc
spec:
  storageClassName: ""
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      volume: ucd-appdata-vol
```
- If you don't use NFS or another network file system to back your persistent volume, copy your application data directory contents into a persistent volume in your Kubernetes cluster. The associated persistent volume claim resource name is required when you install the new IBM UrbanCode Deploy server instance. A sketch of such a claim appears after this note.
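For example, if your cluster offers dynamically provisioned storage, you might create a claim like the following sketch and then copy the application data contents into the bound volume. The storage class name and claim name here are assumptions; substitute values that exist in your cluster, and reference the claim name when you install the server.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ucd-appdata-volc        # hypothetical claim name; reference it at install time
spec:
  storageClassName: managed-nfs-storage   # replace with a storage class available in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```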
- Configure the cloned appdata.
- Ensure that the spec.persistentVolumeReclaimPolicy parameter is set to Retain on the application data persistent volume. By default, the value is Delete for dynamically created persistent volumes. Setting the value to Retain ensures that the persistent volume is not freed or deleted if its associated persistent volume claim is deleted. A sketch of this setting appears after this list.
- Enable debug logging by creating the appdata/enable-debug file. This file is required for the init and application containers to write debug log messages.
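For reference, the relevant field on the application data persistent volume looks like the following sketch (the resource name matches the earlier example).
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ucd-appdata-vol
spec:
  # Retain keeps the volume and its data even if the associated
  # persistent volume claim is deleted.
  persistentVolumeReclaimPolicy: Retain
  # ...remaining fields as shown in the earlier PV example
```
For an existing volume, you can also change the policy in place, for example with kubectl patch pv ucd-appdata-vol -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'.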
- If the production IBM UrbanCode Deploy server is configured to use S3 storage, clone the S3 bucket and modify the following S3 storage properties specified in the installed.properties file, or ensure that they are correct for your cloned S3 bucket. A sketch of these entries follows the list.
- codestation.s3.bucket – the bucket name
- codestation.s3.region – the region
- codestation.s3.user – the API key
- codestation.s3.url – the custom URL
- codestation.s3.signerOverride – the signature algorithm
- codestation.s3.enablePathStyleAccess – true or false
- codestation.s3.password – the API secret
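A minimal sketch of how these entries might look in installed.properties, using placeholder values that you would replace with the details of your cloned bucket:
```
codestation.s3.bucket=<cloned bucket name>
codestation.s3.region=<bucket region>
codestation.s3.user=<API key>
codestation.s3.password=<API secret>
codestation.s3.url=<custom URL, if any>
codestation.s3.signerOverride=<signature algorithm, if required>
codestation.s3.enablePathStyleAccess=true
```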
- Restart the production IBM UrbanCode Deploy server, if required.
Note that changes you make to the production IBM UrbanCode Deploy server instance after this point are not present in the containerized IBM UrbanCode Deploy server instance that runs in the Kubernetes cluster.
- Modify the cloned database.
- For high-availability (HA) configurations, complete these steps:
- Remove the JMS cluster configuration from the database. This removal requires deleting all the contents of the ds_network_relay table. If you need assistance deleting the contents, contact your DBA.
- Remove the Web cluster configuration from the database, if applicable. This removal requires deleting all the contents of the ds_server table. If you need assistance deleting the contents, contact your DBA. A sketch of the SQL for both removals appears after this list.
Note: Web cluster configuration applies to versions starting with 7.0.0. Older versions of IBM UrbanCode Deploy do not have a ds_server table.
- To stop automatic component version imports, run the following SQL command to update the components in the database:
```sql
update DS_COMPONENT set IMPORT_AUTOMATICALLY='N' where IMPORT_AUTOMATICALLY='Y';
```
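For the HA cleanup described earlier in this step, the removals amount to clearing the two tables. A minimal SQL sketch, to be run only against the cloned database and only if the tables exist in your schema:
```sql
-- Remove the JMS cluster configuration (HA configurations only)
delete from DS_NETWORK_RELAY;

-- Remove the Web cluster configuration (version 7.0.0 and later only)
delete from DS_SERVER;
```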
- Install the IBM UrbanCode Deploy server in the Kubernetes or OpenShift cluster by following the instructions in the Helm chart bundle README. An illustrative values sketch appears after the notes below.
Note:
- If you are installing into an OpenShift cluster and using NFS for persistent storage, make sure that you use Helm chart v7.3.3 or later, because support for the supplementalGroups parameter was added to the statefulset resources in that version.
- If you are upgrading to a new version of IBM UrbanCode Deploy, manually disable any patches in the cloned appdata/patches directory by adding the .off suffix to the patch file names. This is done automatically in containerized IBM UrbanCode Deploy starting with version 7.2.1.1. For best results, migrate to the same version of IBM UrbanCode Deploy.
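For orientation, the install typically points the chart at the cloned database and at the persistent volume claim that holds the cloned application data. The parameter names in the following values sketch are hypothetical; use the actual parameter names documented in the Helm chart bundle README.
```yaml
# values.yaml sketch for the cloned environment
# (parameter names are hypothetical; see the chart README for the real ones)
database:
  type: db2                  # type of the cloned database
  hostname: db.example.com   # host where the cloned database runs
  name: UCDCLONE             # cloned database name
  username: ucd_user
appData:
  existingClaimName: ucd-appdata-volc   # PVC containing the cloned application data
```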
- Create new agents for the containerized instance or point your existing agents to the new containerized server.
Note that the agents that exist in the cloned database show an offline status because they are configured to connect only to the on-premises production server. You can create new agents for the cloned environment. These agents can be installed on the original agent machines or VMs, or in the Kubernetes cluster; the only requirement is network connectivity with the worker nodes that run in the Kubernetes cluster. If you don't want to containerize the existing agents, point them to the new containerized server by following the steps in the How to point an existing agent to a new IBM UrbanCode Deploy server document.