Migrating IBM Sterling Control Center Monitor into a Kubernetes Cluster
A clustered approach must be followed to migrate an existing Control Center deployment. The migration is complete when the new Event Processor starts running in the Kubernetes cluster; at that point the conventional instance can be stopped, because all of the configuration of the conventional Control Center application has been successfully moved to the new Event Processor.
The following section describes the steps to migrate a conventional IBM® Sterling Control Center Monitor installation into a containerized environment using a Kubernetes cluster.
- Stop the running instance of the IBM Control Center application. Note: All servers of the conventional Event Processor must be assigned to the newly created Event Processor.
- Make all necessary configuration changes, such as the database password in Secrets and the database connection details in the configCC.properties file. Note: The Secrets and database connection details of the conventional Control Center deployment must match the Secrets and database connection details of your containerized deployment. The port number and other configuration can differ.
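For reference, the database password can be carried into the cluster as a Kubernetes Secret that the containerized deployment reads. The following is a minimal sketch only; the Secret name (cc-secret) and key name (dbPassword) are illustrative assumptions and must match whatever names your containerized Control Center deployment actually references.

# Hypothetical Secret holding the database password.
# The metadata.name and the dbPassword key are assumptions for
# illustration; align them with your containerized deployment.
apiVersion: v1
kind: Secret
metadata:
  name: cc-secret
type: Opaque
stringData:
  dbPassword: "same-password-as-the-conventional-instance"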
Make sure you add the current Control Center's hostname (lowercase only) to the deployment file, for example:
hostAliases:
  - ip: "111.22.333.441"
    hostnames:
      - "ccrdock-02k"
Sample Deployment File

#Deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cc-app
  labels:
    app: cc-app
    tier: cc-app-kube
spec:
  replicas: 1
  selector:
    # Control Center Pod should contain the same labels
    matchLabels:
      app: cc-app
      tier: cc-app-kube
  serviceName: cc-service
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: cc-app
        tier: cc-app-kube
    spec:
      hostAliases:
        - ip: "111.22.333.441"
          hostnames:
            - "ccrdock-02k"
      containers:
        - name: cc-container
          image: 111.22.333.444:5000/ibmscc:62   # docker image for Control Center
          imagePullPolicy: Always
          ports:
            - containerPort: 58080
            - containerPort: 58081
            - containerPort: 58082
            - containerPort: 58083
          env:
            - name: APP_USER_UID
              value: "4000"
            - name: APP_USER_GID
              value: "3000"
            - name: CC_APP_INTERVAL
              value: "2h"
            - name: LICENSE
              value: "accept"
            - name: ENGINE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - mountPath: /app/CC/log/
              name: cc-volume
              subPathExpr: $(ENGINE_NAME)/log
            - mountPath: /app/CC/conf/
              name: cc-volume
              subPathExpr: $(ENGINE_NAME)/conf
            - mountPath: /app/CC/conf-exported
              name: cc-volume
              subPathExpr: $(ENGINE_NAME)/conf-exported
            - mountPath: /app/CC/web/ccbase
              name: cc-volume
              subPathExpr: $(ENGINE_NAME)/ccbase
            - mountPath: /app/CC/user_inputs/
              name: cc-volume
              subPathExpr: $(ENGINE_NAME)/user_inputs
            - mountPath: /app/CC/packages/
              name: cc-volume
              subPathExpr: packages
            - mountPath: /app/CC/web/ccbase/reports/
              name: cc-volume
              subPathExpr: reports
          livenessProbe:
            initialDelaySeconds: 170
            periodSeconds: 60
            timeoutSeconds: 30
            failureThreshold: 10
            tcpSocket:
              port: 58083
          # Set readiness probe to determine if Control Center is running
          readinessProbe:
            initialDelaySeconds: 160
            periodSeconds: 60
            timeoutSeconds: 5
            failureThreshold: 10
            tcpSocket:
              port: 58083
  volumeClaimTemplates:
    - metadata:
        name: cc-volume
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "manual"
        resources:
          requests:
            storage: 2Gi
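Because the volumeClaimTemplates section above requests 2Gi of ReadWriteMany storage from the "manual" storage class, a matching PersistentVolume must already exist in the cluster before the StatefulSet can start. The following is a minimal sketch assuming an NFS-backed volume; the PersistentVolume name, NFS server address, and export path are placeholder assumptions, not values from this deployment.

# Hypothetical PersistentVolume backing the cc-volume claim.
# The NFS server and path below are placeholders; substitute the
# shared storage actually available in your cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cc-volume-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  storageClassName: manual
  nfs:
    server: 111.22.333.442
    path: /export/cc-volume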
A new Event Processor is now created. Note: The hostname must be in lowercase; Kubernetes does not accept hostnames in uppercase letters.