Configuring the Kubernetes-Based Containers
Configuration parameters are defined in the values.yaml file of the Helm charts and are used to complete the installation. Use the following steps to complete this action:

- Specify parameters that need to be overridden using the `--set key=value[,key=value]` argument at Helm install.

  Configuration Manager Example

  ```
  helm install <release-name> --set image.repository=<repo name>,\
  image.tag=<image tag>,image.imageSecrets=<image pull secret>,\
  secret.secretName=<CM secret name>,service.externalIP=<Service discovery IP> \
  ibm-ssp-cm-1.0.0.tgz
  ```

  Engine Example

  ```
  helm install <release-name> --set image.repository=<repo name>,\
  image.tag=<image tag>,image.imageSecrets=<image pull secret>,\
  secret.secretName=<Engine secret name>,service.externalIP=<Service discovery IP> \
  ibm-ssp-engine-1.0.0.tgz
  ```

  Perimeter Server (Less Secure) Example

  ```
  helm install <release-name> --set image.repository=<repo name>,\
  image.tag=<image tag>,image.imageSecrets=<image pull secret>,\
  service.externalIP=<Service discovery IP> \
  ibm-ssp-ps-1.0.0.tgz
  ```

  Perimeter Server (More Secure) Example

  ```
  helm install <release-name> --set image.repository=<repo name>,\
  image.tag=<image tag>,image.imageSecrets=<image pull secret> \
  ibm-ssp-ps-1.0.0.tgz
  ```

- Alternatively, provide a YAML file with values for these parameters when you install a chart. Create a copy of the values.yaml file, such as my-values.yaml, and edit the values that you would like to override. Use the my-values.yaml file for installation.

  Configuration Manager Example

  ```
  helm install <release-name> -f my-values.yaml ibm-ssp-cm-1.0.0.tgz
  ```

  Engine Example

  ```
  helm install <release-name> -f my-values.yaml ibm-ssp-engine-1.0.0.tgz
  ```

  Perimeter Server Example

  ```
  helm install <release-name> -f my-values.yaml ibm-ssp-ps-1.0.0.tgz
  ```
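As an illustration, a minimal my-values.yaml override for the Configuration Manager chart might look like the following sketch. Every value shown (registry, tag, secret names, IP) is a placeholder, not a default shipped with the chart:

```yaml
# Illustrative my-values.yaml for the Configuration Manager chart.
# All values below are placeholders; substitute your own.
image:
  repository: my-registry.example.com/ibm-ssp-cm   # placeholder repository
  tag: "1.0.0"                                     # placeholder image tag
  imageSecrets: my-pull-secret                     # placeholder pull secret
secret:
  secretName: cm-secret                            # placeholder CM secret name
service:
  externalIP: 10.0.0.10                            # placeholder service discovery IP
```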
Configuration Manager Parameters
| Parameter | Description | Default Value |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| image.imageSecrets | Image pull secrets | |
| image.pullPolicy | Image pull policy | Always |
| cmArgs.appUserUid | UID for container user – User UID map between the processes running inside a container and the host system | 1000 |
| cmArgs.appUserGid | GID for container user - User GID map between the processes running inside a container and the host system | 1000 |
| cmArgs.keyCertExport | Set to "true" to generate the key certificate. If you install the Secure Proxy CM first, supply the following key certificate details: keyCertFileName, keyCertAliasName, keyCertStorePassphrase (secret), and keyCertEncryptPassphrase (secret). Set to "false" to import the key certificate that was exported from the Secure Proxy Engine. If you installed the Secure Proxy Engine first, supply the following key certificate details: keyCertFileName and keyCertEncryptPassphrase (secret). | true |
| cmArgs.keyCertFileName | Certificate file name. If you install Configuration Manager after the Engine, create a directory named 'CM_RESOURCES' in the PV if it does not already exist, and place the certificate file exported from the Engine into the 'CM_RESOURCES' directory. If you install Configuration Manager before the Engine, the key certificate is generated in the mounted PV directory (CM). | defkeyCert.txt |
| cmArgs.keyCertAliasName | Certificate alias value (the CM and Engine key certificate alias names must be identical) | Keycert |
| cmArgs.maxHeapSize | JVM heap size - do not set more than container limit resource memory | 2048m |
| persistentVolume.enabled | To use persistent volume | true |
| persistentVolume.useDynamicProvisioning | To use storage classes to dynamically create PV | false |
| persistentVolume.storageClassName | Storage class of the PVC | manual |
| persistentVolume.size | Size of PVC volume | 2Gi |
| persistentVolume.labelName | Persistent volume label name - to bind the PVC to a specific PV, provide the PV label name and value; otherwise the PVC binds to any available PV. | app.kubernetes.io/name |
| persistentVolume.labelValue | Persistent volume label value | ibm-ssp-cm-pv |
| persistentVolume.accessMode | Access mode of the PVC | ReadWriteOnce |
| service.type | Kubernetes service type exposing ports | LoadBalancer |
| service.jetty.servicePort | Service port used to access the CM web application. Set it as required; otherwise it defaults to 8443. | 8443 |
| service.jetty.containerPort | When migrating from a traditional to a container environment, if the traditional jetty port differs from 8443, change the container port to the traditional jetty port value; otherwise leave it unchanged. | 8443 |
| service.cm.servicePort | Service port on which CM listens. Set it as required; otherwise it defaults to 62366. | 62366 |
| service.cm.containerPort | When migrating from a traditional to a container environment, if the traditional CM listen port differs from 62366, change the container port to the traditional CM listen port value; otherwise leave it unchanged. | 62366 |
| service.externalIP | External IP for service discovery | |
| secret.secretName | Secret name for Configuration Manager | |
| resources.limits.cpu | Container CPU limit | 1000m |
| resources.limits.memory | Container memory limit | 2Gi |
| resources.requests.cpu | Container CPU requested | 1000m |
| resources.requests.memory | Container Memory requested | 2Gi |
| serviceAccount.create | Enable/disable service account creation: true - managed by the Helm chart; false - managed by the deployment user. If you change the value from true to false, you must provide a service account name. Using true is recommended. | true |
| serviceAccount.name | Name of Service Account to use for container | |
| affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| livenessProbe.initialDelaySeconds | Initial delay for liveness | 200 |
| livenessProbe.timeoutSeconds | Timeout for liveness | 30 |
| livenessProbe.periodSeconds | Time period for liveness | 60 |
| livenessProbe.failureThreshold | Failure threshold for liveness | 10 |
| readinessProbe.initialDelaySeconds | Initial delay for readiness | 190 |
| readinessProbe.timeoutSeconds | Timeout for readiness | 5 |
| readinessProbe.periodSeconds | Time period for readiness | 60 |
| readinessProbe.failureThreshold | Failure threshold for readiness | 10 |
| route.enabled | Route for OpenShift Enabled/ Disabled | false |
| customProperties | Customize the properties files – properties can be customized using key-value pairs in list format. Add/Update: key and value must both be provided in the format <File Name Without Extension>_<Property Name>=<Property Value> | |
| vmArguments | Provide the VM arguments – Add/Update VM arguments in the list format:<Key as Java property name>=<value> | |
| customFiles | Map the custom directories/files – custom directories/files can be mapped using list format. For mapping, provide the full path of the file or directory. First, create a directory named 'CM_RESOURCES' in the PV if it does not already exist, and keep the custom directories/files in the created directory. | |
| customCertificate.customCertEnabled | Enable/disable the custom common certificate | false |
| customCertificate.commonCertFile | Custom key store certificate file name. First, create a directory named 'CM_RESOURCES' in the PV if it does not already exist, and place the custom certificate file into the 'CM_RESOURCES' directory. | |
| customCertificate.commonKeyCertFile | The custom key certificate is exported with this name | |
| customCertificate.commonCertAlias | Custom certificate alias name | |
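As an illustration, the customProperties, vmArguments, and customFiles list formats described in the table above might be combined in a values override like the following sketch; the file, property, and path names are hypothetical:

```yaml
# Illustrative list formats; all names below are hypothetical examples.
customProperties:
  # <File Name Without Extension>_<Property Name>=<Property Value>
  - log_maxLogSize=100000
vmArguments:
  # <Key as Java property name>=<value>
  - user.timezone=UTC
customFiles:
  # Full path of a custom file kept in the CM_RESOURCES directory on the PV
  - /mount/CM_RESOURCES/custom.conf
```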
Engine Parameters
| Parameter | Description | Default Value |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| image.imageSecrets | Image pull secrets | |
| image.pullPolicy | Image pull policy | Always |
| engineArgs.appUserUid | UID for container user – User UID map between the processes running inside a container and the host system | 1000 |
| engineArgs.appUserGid | GID for container user - User GID map between the processes running inside a container and the host system | 1000 |
| engineArgs.keyCertExport | Set to "true" to generate the key certificate. If you install the Secure Proxy Engine before CM, supply the following key certificate details: keyCertFileName, keyCertAliasName, keyCertStorePassphrase (secret), and keyCertEncryptPassphrase (secret). Set to "false" to import the key certificate that was exported from the Secure Proxy CM. If you installed the Secure Proxy CM first, supply the following key certificate details: keyCertFileName, keyCertAliasName, and keyCertEncryptPassphrase (secret). | false |
| engineArgs.keyCertFileName | Certificate file name. If you install the Engine after CM, create a directory named 'ENG_RESOURCES' in the PV if it does not already exist, and place the certificate file exported from CM into the 'ENG_RESOURCES' directory. If you install the Engine before CM, the key certificate is generated in the mounted PV directory (ENGINE). | defkeyCert.txt |
| engineArgs.keyCertAliasName | Certificate alias value (the CM and Engine key certificate alias names must be identical) | Keycert |
| engineArgs.signOnDirName | Change the SSP brand name if required; otherwise keep the default. | Signon |
| engineArgs.maxHeapSize | JVM heap size - do not set more than container limit resource memory | 2048m |
| persistentVolume.enabled | To use persistent volume | true |
| persistentVolume.useDynamicProvisioning | To use storage classes to dynamically create PV | false |
| persistentVolume.storageClassName | Storage class of the PVC | manual |
| persistentVolume.size | Size of PVC volume | 2Gi |
| persistentVolume.labelName | Persistent volume label name - to bind the PVC to a specific PV, provide the PV label name and value; otherwise the PVC binds to any available PV. | app.kubernetes.io/name |
| persistentVolume.labelValue | Persistent volume label value | ibm-ssp-engine-pv |
| persistentVolume.accessMode | Access mode of the PVC | ReadWriteOnce |
| service.type | Kubernetes service type exposing ports | LoadBalancer |
| service.engine.servicePort | Service port used to access the Engine application. Set it as required; otherwise it defaults to 63366. | 63366 |
| service.engine.containerPort | When migrating from a traditional to a container environment, if the traditional engine port differs from 63366, change the container port to the traditional engine port value; otherwise leave it unchanged. | 63366 |
| service.psMoreSecure.servicePort | Service port used to access the More Secure Perimeter Server application. Set the port number as required. This service port must be used as the remote port when configuring a More Secure Perimeter Server. | |
| service.adapters | Configure adapter ports if you are not using a Less Secure Perimeter Server | |
| service.externalIP | External IP for service discovery | |
| secret.secretName | Secret name for Engine | |
| resources.limits.cpu | Container CPU limit | 1000m |
| resources.limits.memory | Container memory limit | 3Gi |
| resources.requests.cpu | Container CPU requested | 1000m |
| resources.requests.memory | Container Memory requested | 3Gi |
| serviceAccount.create | Enable/disable service account creation: true - managed by the Helm chart; false - managed by the deployment user. If you change the value from true to false, you must provide a service account name. Using true is recommended. | true |
| serviceAccount.name | Name of Service Account to use for container | |
| affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| livenessProbe.initialDelaySeconds | Initial delay for liveness | 200 |
| livenessProbe.timeoutSeconds | Timeout for liveness | 30 |
| livenessProbe.periodSeconds | Time period for liveness | 60 |
| livenessProbe.failureThreshold | Failure threshold for liveness | 10 |
| readinessProbe.initialDelaySeconds | Initial delay for readiness | 190 |
| readinessProbe.timeoutSeconds | Timeout for readiness | 5 |
| readinessProbe.periodSeconds | Time period for readiness | 60 |
| readinessProbe.failureThreshold | Failure threshold for readiness | 10 |
| route.enabled | Route for OpenShift Enabled/ Disabled | false |
| customProperties | Customize the properties files – properties can be customized using key-value pairs in list format. Add/Update: key and value must both be provided in the format <File Name Without Extension>_<Property Name>=<Property Value> | |
| vmArguments | Provide the VM arguments – Add/Update VM arguments in the list format:<Key as Java property name>=<value> | |
| customFiles | Map the custom directories/files – custom directories/files can be mapped using list format. For mapping, provide the full path of the file or directory. First, create a directory named 'ENG_RESOURCES' in the PV if it does not already exist, and keep the custom directories/files in the created directory. | |
| customCertificate.customCertEnabled | Enable/disable the custom common certificate | false |
| customCertificate.commonKeyCertFile | Custom key store certificate file name. First, create a directory named 'ENG_RESOURCES' in the PV if it does not already exist, and place the custom certificate file that was exported from the CM custom certificate into the 'ENG_RESOURCES' directory. | |
| customCertificate.commonCertAlias | Custom certificate alias name | |
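The persistentVolume.labelName and persistentVolume.labelValue parameters above select a specific PV for the chart's PVC. A sketch of a PV carrying the default Engine label follows; the metadata name and hostPath are placeholders, and hostPath storage is only suitable for single-node test clusters:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssp-engine-pv                            # placeholder name
  labels:
    app.kubernetes.io/name: ibm-ssp-engine-pv    # matches persistentVolume.labelName/labelValue
spec:
  capacity:
    storage: 2Gi                                 # matches persistentVolume.size
  accessModes:
    - ReadWriteOnce                              # matches persistentVolume.accessMode
  storageClassName: manual                       # matches persistentVolume.storageClassName
  hostPath:
    path: /data/ssp-engine                       # placeholder path
```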
Perimeter Server Parameters
| Parameter | Description | Default Value |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| image.imageSecrets | Image pull secrets | |
| image.pullPolicy | Image pull policy | Always |
| psArgs.networkZoneSecure | true - the Perimeter Server is installed in the more-secure zone; false - the Perimeter Server is installed in the less-secure zone | false |
| psArgs.secureInterface | Secure network interface - the interface the perimeter server uses to communicate with the Secure Proxy engine | * |
| psArgs.externalInterface | External network interface - the interface the perimeter server uses to communicate with the backend servers/trading partners | * |
| psArgs.remotePort | Remote port number - the port the Secure Proxy engine listens on for requests from the perimeter server. Required for a More Secure Perimeter Server | 30900 |
| psArgs.remoteAddress | Remote HostName/IP address - the Secure Proxy engine host that connects to this perimeter server. Required for a More Secure Perimeter Server | |
| psArgs.maxAllocation | Limits the amount of memory used for network buffers | 768 |
| psArgs.maxHeapSize | JVM heap size - do not set more than container limit resource memory | 1024 |
| psArgs.restricted | Set to true to enable restricted network access, controlled by restricted.policy. Required for More secure Perimeter Server | false |
| psArgs.receiveBufferSize | Socket receive buffer size for persistent connection | 131072 |
| psArgs.sendBufferSize | Socket send buffer size for persistent connection | 131072 |
| psArgs.logLevel | Logging level; one of ERROR, WARN, INFO, COMMTRACE, DEBUG, or ALL | ERROR |
| psArgs.rotateLogs | Enables log rotation when maxLogSize is reached | true |
| psArgs.maxLogSize | Log output rolls over after this many records have been written | 100000 |
| psArgs.maxnumLogs | After this many logs are written, old logs are deleted. | 10 |
| service.type | Kubernetes service type exposing ports | LoadBalancer |
| service.psLessSecure.servicePort | Service port used to access the Less Secure Perimeter Server application. Set it as required; otherwise it defaults to 30800. | 30800 |
| service.psLessSecure.containerPort | It is not required to change the container port value | 30800 |
| service.adapters | Configure adapter ports if you are not using a Less Secure Perimeter Server | |
| service.externalIP | External IP for service discovery | |
| resources.limits.cpu | Container CPU limit | 1000m |
| resources.limits.memory | Container memory limit | 1Gi |
| resources.requests.cpu | Container CPU requested | 1000m |
| resources.requests.memory | Container Memory requested | 1Gi |
| serviceAccount.create | Enable/disable service account creation: true - managed by the Helm chart; false - managed by the deployment user. If you change the value from true to false, you must provide a service account name. Using true is recommended. | true |
| serviceAccount.name | Name of Service Account to use for container | |
| affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| livenessProbe.initialDelaySeconds | Initial delay for liveness | 150 |
| livenessProbe.timeoutSeconds | Timeout for liveness | 30 |
| livenessProbe.periodSeconds | Time period for liveness | 60 |
| livenessProbe.failureThreshold | Failure threshold for liveness | 10 |
| readinessProbe.initialDelaySeconds | Initial delay for readiness | 140 |
| readinessProbe.timeoutSeconds | Timeout for readiness | 5 |
| readinessProbe.periodSeconds | Time period for readiness | 60 |
| readinessProbe.failureThreshold | Failure threshold for readiness | 10 |
| route.enabled | Route for OpenShift Enabled/ Disabled | false |
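For a more-secure zone deployment, the psArgs parameters above combine roughly as in the following sketch; the engine host is a placeholder, and the exact parameter keys should be confirmed against the chart's values.yaml:

```yaml
# Illustrative overrides for a More Secure Perimeter Server.
psArgs:
  networkZoneSecure: true            # install in the more-secure zone
  remotePort: 30900                  # port the Secure Proxy engine listens on for this PS
  remoteAddress: engine.example.com  # placeholder engine host
  restricted: true                   # restrict network access via restricted.policy
```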
Affinity
The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.
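For example, a node affinity rule restricting scheduling to nodes carrying a particular label could be supplied through the chart's affinity parameters; the label key and value below are placeholders:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype          # placeholder node label key
              operator: In
              values:
                - ssd                # placeholder node label value
```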