Deploying IBM® Sterling Control Center Monitor
- Create a Kubernetes service account. To access the image from a secure registry, an image pull secret was created in the secrets section. These secrets now need to be added to the service account resource.
The following is a sample file for the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sccm-serviceaccount
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
imagePullSecrets:
- name: sccm-image-secret
- Invoke the following command to create the service account:
# kubectl create -f serviceaccount.yaml -n ibm-sccm
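To confirm that the image pull secret was attached, you can optionally inspect the created service account (standard kubectl, not specific to Control Center):
# kubectl get serviceaccount sccm-serviceaccount -n ibm-sccm -o yaml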
- Now, this service account will be utilized to deploy Control Center using a statefulset file.
There are three options:
Using a PVC for the user input. In this sample statefulset file, it is assumed that you have created a PVC for the user input:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sccm-stateful
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
spec:
  replicas: 1
  updateStrategy:
    type: OnDelete
  selector:
    matchLabels:
      app.kubernetes.io/name: sccm
      app.kubernetes.io/instance: sccm
  serviceName: sccm-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: sccm
        app.kubernetes.io/instance: sccm
      annotations:
        rollme: "6p0Da"
        productID: "6827a92f0c4447ad8685d9ef4107c949"
        productName: "IBM Control Center Monitor Non-Prod Certified Container"
        productVersion: "v6.2"
        productMetric: "VIRTUAL_PROCESSOR_CORE"
        productChargedContainers: "All"
    spec:
      serviceAccountName: sccm-serviceaccount
      hostNetwork: false
      hostPID: false
      hostIPC: false
      securityContext:
        fsGroup: <GID of the persistent storage location>
        runAsGroup: 0
        runAsNonRoot: true
        runAsUser: 1010
        supplementalGroups:
        - <supplemental group ID to be added in container>
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
          preferredDuringSchedulingIgnoredDuringExecution:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          preferredDuringSchedulingIgnoredDuringExecution:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          preferredDuringSchedulingIgnoredDuringExecution:
      initContainers:
      - name: sccm-init-secret
        image: <image name>
        imagePullPolicy: Always
        env:
        - name: ENGINE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DEBUG_SCRIPT
          value: "<true or false whether we need debugging logs or not>"
        command: ["/app/ccEntrypoint.sh", "populateSecret"]
        volumeMounts:
        - mountPath: /app/CC/conf
          name: cc-volume
          subPathExpr: $(ENGINE_NAME)/conf
        - mountPath: /app/secret_files
          name: scc-secret
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 250m
            memory: 1Gi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsUser: <UID to be used inside container>
      containers:
      - name: sccm-main
        image: "<image name>"
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "/app/ccEntrypoint.sh preStop > /proc/1/fd/1"]
        env:
        - name: LICENSE
          value: "true"
        - name: ENGINE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: CC_APP_INTERVAL
          value: "<Time interval to be used between stopping Control Center>"
        - name: DEBUG_SCRIPT
          value: "<true or false whether we need debugging logs or not>"
        - name: REBALANCE_SERVERS
          value: "<true or false whether rebalancing of monitored servers to be done or not>"
        volumeMounts:
        - mountPath: /app/CC/log/
          name: cc-volume
          subPathExpr: $(ENGINE_NAME)/logs
        - mountPath: /app/CC/conf/
          name: cc-volume
          subPathExpr: $(ENGINE_NAME)/conf
        - mountPath: /app/CC/conf-exported
          name: cc-volume
          subPathExpr: $(ENGINE_NAME)/conf-exported
        - mountPath: /app/CC/web/ccbase
          name: cc-volume
          subPathExpr: $(ENGINE_NAME)/ccbase
        - mountPath: /app/CC/packages/
          name: cc-volume
          subPathExpr: packages
        - mountPath: /app/CC/web/ccbase/reports
          name: cc-volume
          subPathExpr: reports
        - mountPath: /app/CC/user_inputs/
          name: cc-volume-user-inputs
          subPathExpr: user_inputs
        - mountPath: /app/cc_config_file
          name: scc-param
          subPath: scc_param_file
        - mountPath: /app/certs_secret_files
          name: sccm-certs-secret
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          privileged: false
          readOnlyRootFilesystem: false
          runAsUser: <UID to be used inside container>
        # Set liveness probe to determine if Control Center is running
        livenessProbe:
          initialDelaySeconds: 175
          periodSeconds: 120
          timeoutSeconds: 45
          failureThreshold: 10
          #tcpSocket:
          #  port: 58083
          exec:
            command:
            - bash
            - -c
            - /app/ccEntrypoint.sh monitor <port value given in webHttpsPort in configmap>
        # Set readiness probe to determine if Control Center is running
        readinessProbe:
          initialDelaySeconds: 175
          periodSeconds: 120
          timeoutSeconds: 15
          failureThreshold: 10
          #tcpSocket:
          #  port: 58083
          exec:
            command:
            - bash
            - -c
            - /app/ccEntrypoint.sh monitor <port value given in webHttpsPort in configmap>
        resources:
          limits:
            cpu: 3000m
            ephemeral-storage: 4Gi
            memory: 8Gi
          requests:
            cpu: 1500m
            ephemeral-storage: 2Gi
            memory: 4Gi
      volumes:
      - name: cc-volume
        persistentVolumeClaim:
          claimName: sccm-pvc-ccm
      - name: cc-volume-user-inputs
        persistentVolumeClaim:
          claimName: sccm-pvc-ui
      - name: scc-param
        configMap:
          name: sccm-cm
      - name: scc-secret
        secret:
          secretName: sccm-secret
      - name: sccm-certs-secret
        secret:
          secretName: sccm-certs-secret
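The sample above relies on the standard Kubernetes subPathExpr feature: $(ENGINE_NAME) is expanded from the ENGINE_NAME environment variable (which is set from metadata.name), so each replica writes to its own conf, logs, and ccbase subdirectories on the shared PVC. The pattern, in brief:
env:
- name: ENGINE_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
volumeMounts:
- mountPath: /app/CC/conf
  name: cc-volume
  subPathExpr: $(ENGINE_NAME)/conf   # resolves to <pod name>/conf on the PVC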
Using a wrapper image with database drivers inside it:
If the wrapper image has been created with the database drivers inside it, then the following lines need to be commented out inside the container's volumeMounts section:
- mountPath: /app/CC/user_inputs/
  name: cc-volume-user-inputs
  subPathExpr: user_inputs
One more snippet, the user input volume entry inside the volumes section, also needs to be commented out (the commented form is shown below for reference):
- name: cc-volume-user-inputs
  persistentVolumeClaim:
    claimName: sccm-pvc-ui
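For clarity, after the change the affected entries appear commented out, in the container's volumeMounts section:
#- mountPath: /app/CC/user_inputs/
#  name: cc-volume-user-inputs
#  subPathExpr: user_inputs
and in the volumes section:
#- name: cc-volume-user-inputs
#  persistentVolumeClaim:
#    claimName: sccm-pvc-ui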
Using the init container image. The sample statefulset file changes as follows:
If the init container image has been created with the database drivers inside it, then the same lines need to be commented out: the user input mount inside the container's volumeMounts section:
- mountPath: /app/CC/user_inputs/
  name: cc-volume-user-inputs
  subPathExpr: user_inputs
and the user input volume entry inside the volumes section:
- name: cc-volume-user-inputs
  persistentVolumeClaim:
    claimName: sccm-pvc-ui
The following section needs to be added inside the initContainers section:
- name: sccm-init-drivers
  image: <image name>
  imagePullPolicy: Always
  env:
  - name: ENGINE_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command: ["/bin/sh", "-c", "cp -r /jdbc_drivers/ /app/conf"]
  volumeMounts:
  - mountPath: /app/conf
    name: cc-volume
    subPathExpr: $(ENGINE_NAME)/conf
  resources:
    limits:
      cpu: 500m
      memory: 2Gi
    requests:
      cpu: 250m
      memory: 1Gi
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL
    privileged: false
    readOnlyRootFilesystem: false
    runAsUser: <UID to be used inside container>
Note: When using the init container image, the database driver path will be /app/CC/conf/jdbc_drivers/<driver's name> inside the configuration section.
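As an illustration, an init container image of this kind only needs to carry the driver jar under /jdbc_drivers/, which is the path the cp command above copies from. A minimal sketch, in which the base image and driver file name are assumptions, not values defined by this documentation:
# Dockerfile (hypothetical sketch)
FROM registry.access.redhat.com/ubi9/ubi-minimal
COPY <driver's name>.jar /jdbc_drivers/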
- After creating the statefulset file, invoke the following command to create the Kubernetes statefulset resource:
# kubectl create -f statefulset.yaml -n ibm-sccm
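To validate the manifest without creating any resources, a client-side dry run can be used first (standard kubectl behavior, not specific to Control Center):
# kubectl create -f statefulset.yaml -n ibm-sccm --dry-run=client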
- After deploying Control Center, invoke the following command to check the statefulset status:
# kubectl get sts sccm-stateful -n ibm-sccm
NAME            READY   AGE
sccm-stateful   0/1     10s
- Invoke the following command to verify the status of the pods:
# kubectl get pods -n ibm-sccm
NAME              READY   STATUS    RESTARTS   AGE
sccm-stateful-0   0/1     Running   0          11s
- Invoke the following command to verify the logs of the pod:
# kubectl logs -f sccm-stateful-0 -n ibm-sccm
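If the pod stays in a non-ready state, describing it surfaces scheduling, image pull, and probe events (standard kubectl):
# kubectl describe pod sccm-stateful-0 -n ibm-sccm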
- After 6-7 minutes, the pod will be in the ready state:
# kubectl get pods -n ibm-sccm
NAME              READY   STATUS    RESTARTS   AGE
sccm-stateful-0   1/1     Running   0          8m
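Instead of polling manually, you can block until the pod reports ready; the 600-second timeout below is an assumption sized to the 6-7 minute startup:
# kubectl wait --for=condition=ready pod/sccm-stateful-0 -n ibm-sccm --timeout=600s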
Expose Control Center Service
The next task is to access the Control Center service running inside the container.
- Using a LoadBalancer service. The following is a sample file for creating a LoadBalancer service:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: sccm-service
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
spec:
  selector:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: swing-console
    port: <port value given in httpPort in configmap>
    targetPort: <port value given in httpPort in configmap>
    protocol: TCP
  - name: web-console
    port: <port value given in webHttpPort in configmap>
    targetPort: <port value given in webHttpPort in configmap>
    protocol: TCP
  - name: web-console-secure
    port: <port value given in webHttpsPort in configmap>
    targetPort: <port value given in webHttpsPort in configmap>
    protocol: TCP
  - name: swing-console-secure
    port: <port value given in httpsPort in configmap>
    targetPort: <port value given in httpsPort in configmap>
    protocol: TCP
  sessionAffinity: ClientIP
After creating this file, invoke the following command to create a Kubernetes service resource:
# kubectl create -f service.yaml -n ibm-sccm
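Once the cloud provider has provisioned the load balancer, read the assigned external address from the service; the web console is then reachable on that address at the webHttpsPort value:
# kubectl get svc sccm-service -n ibm-sccm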
- Create a Kubernetes ingress resource, or an OpenShift route in the case of an OpenShift environment. To create a Kubernetes ingress resource, first create a ClusterIP service and then an ingress that connects to that service.
To create the service, the following sample file is used:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: sccm-service
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
spec:
  selector:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
  type: ClusterIP
  ports:
  - name: swing-console
    port: <port value given in httpPort in configmap>
    targetPort: <port value given in httpPort in configmap>
    protocol: TCP
  - name: web-console
    port: <port value given in webHttpPort in configmap>
    targetPort: <port value given in webHttpPort in configmap>
    protocol: TCP
  - name: web-console-secure
    port: <port value given in webHttpsPort in configmap>
    targetPort: <port value given in webHttpsPort in configmap>
    protocol: TCP
  - name: swing-console-secure
    port: <port value given in httpsPort in configmap>
    targetPort: <port value given in httpsPort in configmap>
    protocol: TCP
  sessionAffinity: ClientIP
Invoke the following command to create a Kubernetes service resource:
# kubectl create -f service.yaml -n ibm-sccm
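Before creating the ingress, the ClusterIP service can be sanity-checked from a workstation with a port forward (standard kubectl; substitute the webHttpsPort value from the configmap):
# kubectl port-forward svc/sccm-service <webHttpsPort>:<webHttpsPort> -n ibm-sccm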
Ingress
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sccm-ingress
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/session-cookie-path: "/"
    nginx.org/ssl-services: "sccm-service"
    ingress.kubernetes.io/secure-backends: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/backend-protocol: "HTTPS"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
    ingress.kubernetes.io/session-cookie-hash: "sha1"
    ingress.kubernetes.io/use-regex: "true"
    ingress.kubernetes.io/session-cookie-path: "/"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "<hostname to be used to access application>"
    secretName: "sccm-tls"   # TLS secret created in the secrets section
  rules:
  - host: "<hostname to be used to access application>"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sccm-service
            port:
              number: <port value given in webHttpsPort in configmap>
This ingress resource example is based on the nginx ingress controller. You can use any other controller and modify this resource accordingly, for example the ingressClassName and the annotations containing nginx.
# kubectl create -f ingress.yaml -n ibm-sccm
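To confirm that the ingress has been admitted and an address assigned:
# kubectl get ingress sccm-ingress -n ibm-sccm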
Route
# sccm-route.yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: sccm-web
  labels:
    app.kubernetes.io/name: sccm
    app.kubernetes.io/instance: sccm
spec:
  to:
    kind: Service
    name: sccm-service
    weight: 50
  port:
    targetPort: web-console-secure
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: None
  wildcardPolicy: None
# oc create -f sccm-route.yaml -n ibm-sccm
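To retrieve the hostname that OpenShift assigned to the route:
# oc get route sccm-web -n ibm-sccm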