Configuring the Certified Container
The values.yaml file in the Helm chart contains all the configuration parameters for the application.
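As a minimal sketch of how the parameters documented below are applied, the following override file sets a few of them. The parameter names and the repository, tag, and pull secret values come from the table; the database host and secret name are placeholders for illustration, not chart defaults:

```yaml
# my-values.yaml - overrides merged over the chart's values.yaml
global:
  license: true                          # accept the license after reading the terms
  image:
    repository: cp.icr.io/cp/ibm-b2bi/b2bi
    tag: 6.1.2
    pullSecret: ibm-entitlement-secret
setupCfg:
  basePort: 50000
  dbVendor: DB2                          # DB2/Oracle/MSSQL
  dbHost: db.example.com                 # placeholder database host
  dbPort: 50000
  dbSecret: b2bi-db-secret               # placeholder secret holding the database password
asi:
  replicaCount: 1
```

Pass the file at install or upgrade time, for example `helm install <release-name> <chart> -f my-values.yaml`.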
Parameter | Description | Default Value | Release | New Description |
---|---|---|---|---|
licenseType | Specify the license edition as per the license agreement. Valid values are prod and non-prod. | prod | >=6.1.2.2 | |
global.license | Accept license. Note: Set this parameter to true to accept the license after you read the terms and conditions. | false | >=6.1.2 | |
global.image.repository | Repository for Sterling B2B Integrator Docker images | | >=6.1 | |
global.image.tag | Docker image tag | 6.1.0.0 | >=6.1 | |
global.image.digest | Docker image digest. Takes precedence over tag | | >=6.1.0.3 | |
global.image.pullPolicy | Pull policy for repository | IfNotPresent | >=6.1 | |
global.image.pullSecret | Pull secret for repository access | ibm-entitlement-secret | >=6.1 | |
arch | Compatible platform architecture | x86_64 | >=6.1 and <= 6.1.0.2 | |
arch.amd64 | Specify architecture (amd64, s390x) and weight to be used for scheduling | 2 - No Preference | >=6.1.0.3 | |
arch.s390x | Specify architecture (amd64, s390x) and weight to be used for scheduling | 2 - No Preference | >=6.1.0.3 | |
serviceAccount.create | Create custom defined service account | FALSE | >=6.1 | |
serviceAccount.name | Existing service account name | default | >=6.1 | |
persistence.enabled | Enable storage access to persistent volumes | TRUE | >=6.1 | |
persistence.useDynamicProvisioning | Enable dynamic provisioning of persistent volumes | FALSE | >=6.1 | |
appResourcesPVC.enabled | Enable application resource storage | TRUE | >=6.1 | |
appResourcesPVC.storageClassName | Resources persistent volume storage class name | | >=6.1 | |
appResourcesPVC.selector.label | Resources persistent volume selector label | intent | >=6.1 | |
appResourcesPVC.selector.value | Resources persistent volume selector value | resources | >=6.1 | |
appResourcesPVC.accessMode | Resources persistent volume access mode | ReadOnlyMany | >=6.1 | |
appResourcesPVC.size | Resources persistent volume storage size | 100Mi | >=6.1 | |
appResourcesPVC.preDefinedResourcePVCName | Pre-defined persistent volume claim name for the resources persistent volume claim | | >=6.1.2 | |
appLogsPVC.storageClassName | Logs persistent volume storage class name | | >=6.1 | |
appLogsPVC.selector.label | Logs persistent volume selector label | intent | >=6.1 | |
appLogsPVC.selector.value | Logs persistent volume selector value | logs | >=6.1 | |
appLogsPVC.accessMode | Logs persistent volume access mode | ReadWriteMany | >=6.1 | |
appLogsPVC.size | Logs persistent volume storage size | 500Mi | >=6.1 | |
appLogsPVC.preDefinedLogsPVCName | Pre-defined persistent volume claim name for the logs persistent volume claim | | >=6.1.2 | |
appDocumentsPVC.enabled | Enable application document storage | FALSE | >=6.1 | |
appDocumentsPVC.storageClassName | Documents persistent volume storage class name | | >=6.1 | |
appDocumentsPVC.selector.label | Documents persistent volume selector label | intent | >=6.1 | |
appDocumentsPVC.selector.value | Documents persistent volume selector value | documents | >=6.1 | |
appDocumentsPVC.accessMode | Documents persistent volume access mode | ReadWriteMany | >=6.1 | |
appDocumentsPVC.size | Documents persistent volume storage size | 500Mi | >=6.1 | |
appDocumentsPVC.enableVolumeClaimPerPod | Enable persistent volume for documents at pod level | FALSE | >=6.1.2.2 | |
appDocumentsPVC.preDefinedDocumentPVCName | Pre-defined or custom persistent volume claim name for the documents persistent volume claim | | >=6.1.2 | |
extraPVCs | Extra volume claims shared across all deployments | | | |
extraPVCs.enableVolumeClaimPerPod | Enable persistent volume for extraPVCs at pod level | FALSE | >=6.1.2.2 | |
extraPVCs.predefinedPVCName | Pre-defined persistent volume claim name for the extra persistent volume claim | | >=6.1.2.2 | |
security.supplementalGroups | Supplemental group id to access the persistent volume | 5555 | >=6.1 | |
security.fsGroup | File system group ID to access the persistent volume | 1010 | >=6.1 | |
security.fsGroupChangePolicy | File system group change policy for persistent volume | OnRootMismatch | >=6.1.2.2 | |
security.runAsUser | User ID that all the containers run as | 1010 | >=6.1 | |
ingress.enabled | Enable ingress resource | FALSE | >=6.1 | |
ingress.controller | Ingress controller class | nginx | >=6.1 | |
ingress.annotations | Additional annotations for the ingress resource | | >=6.1 | |
ingress.port | Ingress or router port, if not 80 or 443 | | >=6.1 | |
dataSetup.enabled | Enable database setup job execution | TRUE | >=6.1 | |
dataSetup.upgrade | Upgrade an older release | FALSE | >=6.1 | |
dataSetup.image.repository | Container image repository for Sterling B2B Integrator/File Gateway images. | cp.icr.io/cp/ibm-b2bi/b2bi | >=6.1.2 | |
dataSetup.image.tag | Container image tag | 6.1.2 | >=6.1.2 | |
dataSetup.image.digest | Container image digest | | >=6.1.2 | |
dataSetup.image.pullPolicy | Pull policy for container image | IfNotPresent | >=6.1.2 | |
dataSetup.image.pullSecret | Pull secret for authenticating with container image repository | global.image.pullSecret | >=6.1.2 | |
dataSetup.image.extraLabels | Extra labels | | >=6.1.2.3 | |
env.tz | Timezone for application runtime | UTC | >=6.1 | |
env.upgradeCompatibilityVerified | Indicates release upgrade compatibility verification completion | FALSE | >=6.1 | |
env.extraEnvs | Provide extra global environment variables | | >=6.1.0.3 | |
logs.enableAppLogOnConsole | Enable application logs redirection to pod console | TRUE | >=6.1 | |
integrations.seasIntegration.isEnabled | Enable SEAS integration. For more information, refer to the product documentation. | FALSE | >=6.1.0.1 | |
integrations.seasIntegration.seasVersion | SEAS version | '1.0' | >=6.1.0.1 | |
setupCfg.basePort | Base/initial port for the application | 50000 | >=6.1 | |
setupCfg.licenseAcceptEnableSfg | Consent for accepting license for Sterling File Gateway module | FALSE | >=6.1 | |
setupCfg.licenseAcceptEnableEbics | Consent for accepting license for EBICS module | FALSE | >=6.1 | |
setupCfg.licenseAcceptEnableFinancialServices | Consent for accepting license for EBICS client module | FALSE | >=6.1 | |
setupCfg.licenseAcceptEnableFileOperation | Consent for accepting license to enable File Operation | FALSE | >=6.1 | |
setupCfg.systemPassphraseSecret | System passphrase secret name | | >=6.1 | |
setupCfg.enableFipsMode | Enable FIPS mode | FALSE | >=6.1 | |
setupCfg.nistComplianceMode | NIST 800-131a compliance mode | off | >=6.1 | |
setupCfg.dbVendor | Database vendor - DB2/Oracle/MSSQL | | >=6.1 | |
setupCfg.dbHost | Database host | | >=6.1 | |
setupCfg.dbPort | Database port | | >=6.1 | |
setupCfg.dbUser | Database user | | >=6.1 | |
setupCfg.dbData | Database schema name | | >=6.1 | |
setupCfg.dbDrivers | Database driver JAR name | | >=6.1 | |
setupCfg.dbCreateSchema | Create/update database schema on install/upgrade | TRUE | >=6.1 | |
setupCfg.oracleUseServiceName | Use service name; applicable if the database vendor is Oracle | FALSE | >=6.1 | |
setupCfg.usessl | Enable SSL for database connection | FALSE | >=6.1 | |
setupCfg.dbTruststore | Database SSL connection truststore file name | | >=6.1 | Database truststore file name including its path relative to the mounted resources volume location. When `dbTruststoreSecret` is mentioned, provide the name of the key holding the certificate data. - v6.1.0.1 onwards |
setupCfg.dbTruststoreSecret | Name of the database truststore secret containing the certificate, if applicable. | | >=6.1.0.1 | |
setupCfg.dbKeystore | Database SSL connection keystore file name | | >=6.1 | Database keystore file name including its path relative to the mounted resources volume location, if applicable. When `dbKeystoreSecret` is mentioned, provide the name of the key holding the certificate data. - v6.1.0.1 onwards |
setupCfg.dbKeystoreSecret | Name of the database keystore secret containing the certificate, if applicable. | | >=6.1.0.1 | |
setupCfg.dbSecret | Database user secret name | | >=6.1 | |
setupCfg.adminEmailAddress | Administrator email address | | >=6.1 | |
setupCfg.smtpHost | SMTP email server host | | >=6.1 | |
setupCfg.terminationGracePeriod | Termination grace period for Containers | 30 | >=6.1.2.2 | |
setupCfg.softStopTimeout | Timeout for soft stop | | >=6.1 | |
setupCfg.jmsVendor | JMS MQ vendor | | >=6.1 | |
setupCfg.jmsConnectionFactory | MQ connection factory class name | | >=6.1 | |
setupCfg.jmsConnectionFactoryInstantiator | MQ connection factory creator class name | | >=6.1 | |
setupCfg.jmsQueueName | Queue name | | >=6.1 | |
setupCfg.jmsHost | MQ server host | | >=6.1 | |
setupCfg.jmsPort | MQ server port | | >=6.1 | |
setupCfg.jmsUser | MQ user name | | >=6.1 | |
setupCfg.jmsConnectionNameList | MQ connection name list | | >=6.1 | |
setupCfg.jmsChannel | MQ channel name | | >=6.1 | |
setupCfg.jmsEnableSsl | Enable SSL for MQ server connection | | >=6.1 | |
setupCfg.jmsKeystorePath | MQ SSL connection keystore path | | >=6.1 | MQ keystore file name including its path relative to the mounted resources volume location, if applicable. When `jmsKeystoreSecret` is mentioned, provide the name of the key holding the certificate data. - v6.1.0.1 onwards |
setupCfg.jmsKeystoreSecret | Name of the JMS keystore secret containing the certificate, if applicable. | | >=6.1.0.1 | |
setupCfg.jmsTruststorePath | MQ SSL connection truststore path | | >=6.1 | MQ truststore file name including its path relative to the mounted resources volume location, if applicable. When `jmsTruststoreSecret` is mentioned, provide the name of the key holding the certificate data. - v6.1.0.1 onwards |
setupCfg.jmsTruststoreSecret | Name of the JMS truststore secret containing the certificate, if applicable. | | >=6.1.0.1 | |
setupCfg.jmsCiphersuite | MQ SSL connection ciphersuite | | >=6.1 | |
setupCfg.jmsProtocol | MQ SSL connection protocol | TLSv1.2 | >=6.1 | |
setupCfg.jmsSecret | MQ user secret name | | >=6.1 | |
setupCfg.libertyKeystoreLocation | Liberty API server keystore location | | >=6.1 | Liberty keystore file name including its path relative to the mounted resources volume location, if applicable. If `libertyKeystoreSecret` is mentioned, provide the name of the key holding the certificate data. - v6.1.0.1 onwards |
setupCfg.libertyKeystoreSecret | Name of the Liberty keystore secret containing the certificate, if applicable. | | >=6.1.0.1 | |
setupCfg.libertyProtocol | Liberty API server SSL connection protocol | TLSv1.2 | >=6.1 | |
setupCfg.libertySecret | Liberty API server SSL connection secret name | | >=6.1 | |
setupCfg.libertyJvmOptions | Liberty API server JVM options | | >=6.1 | |
setupCfg.updateJcePolicyFile | Enable JCE policy file update | FALSE | >=6.1 and <= 6.1.2 | |
setupCfg.jcePolicyFile | JCE policy file name | | >=6.1 and <= 6.1.2 | |
setupCfg.restartCluster | Set to true to restart the application cluster by cleaning up all previous node entries and locks and setting the schedules to node1. | false | >=6.1.2 | |
setupCfg.useSslForRmi | Enable SSL over RMI calls | TRUE | >=6.1.2.1 | |
setupCfg.rmiTlsSecretName | TLS secret name that contains an RMI certificate and key pair. It has no value by default. Note: Do not use the Sterling B2B Integrator dashboard UI to edit or update the RMI SSL certificate (rmissl). Instead, use the secret rmiTlsSecretName. | | >=6.1.2.1 | |
setupCfg.sapSncSecretName | Name of the secret holding the SAP SNC PSE file and password along with the sapgenpse utility | | >=6.1.2.2 | |
setupCfg.sapSncLibVendorName | SAP SNC library vendor name | | >=6.1.2.2 | |
setupCfg.sapSncLibVersion | SAP SNC library version | | >=6.1.2.2 | |
setupCfg.sapSncLibName | SAP SNC library name | | >=6.1.2.2 | |
asi.replicaCount | Application server independent (ASI) deployment replica count | 1 | >=6.1 | |
asi.seasIntegration.isEnabled | Enable SEAS integration. For more information, refer to the product documentation. | FALSE | 6.1 | |
asi.seasIntegration.seasVersion | SEAS version | 1 | 6.1 | |
asi.env.jvmOptions | JVM options for ASI | | >=6.1 | |
asi.env.extraEnvs | Provide extra environment variables for ASI | | >=6.1.0.3 | |
asi.frontendService.type | Service type | NodePort | >=6.1 | |
asi.frontendService.sessionAffinityConfig.timeoutSeconds | Session affinity timeout in seconds | 10800 | >=6.1.2.2 | |
asi.frontendService.externalTrafficPolicy | Route external traffic to node-local or cluster-wide endpoints. Note: If the service type is ClusterIP, the externalTrafficPolicy configuration is ignored. | Cluster | >=6.1.2.2 | |
asi.frontendService.ports.http.name | Service http port name | http | >=6.1 | |
asi.frontendService.ports.http.port | Service http port number | 35000 | >=6.1 | |
asi.frontendService.ports.http.targetPort | Service target port number or name on pod | http | >=6.1 | |
asi.frontendService.ports.http.nodePort | Service node port | 30000 | >=6.1 | |
asi.frontendService.ports.http.protocol | Service port connection protocol | TCP | >=6.1 | |
asi.frontendService.ports.https.name | Service https port name | https | >=6.1 | |
asi.frontendService.ports.https.port | Service https port number | 35001 | >=6.1 | |
asi.frontendService.ports.https.targetPort | Service target port number or name on pod | https | >=6.1 | |
asi.frontendService.ports.https.nodePort | Service node port | 30001 | >=6.1 | |
asi.frontendService.ports.https.protocol | Service port connection protocol | TCP | >=6.1 | |
asi.frontendService.ports.soa.name | Service soa port name | soa | >=6.1 | |
asi.frontendService.ports.soa.port | Service soa port number | 35002 | >=6.1 | |
asi.frontendService.ports.soa.targetPort | Service target port number or name on pod | soa | >=6.1 | |
asi.frontendService.ports.soa.nodePort | Service node port | 30002 | >=6.1 | |
asi.frontendService.ports.soa.protocol | Service port connection protocol | TCP | >=6.1 | |
asi.frontendService.ports.soassl.name | Service soassl port name | soassl | >=6.1 | |
asi.frontendService.ports.soassl.port | Service soassl port number | 35003 | >=6.1 | |
asi.frontendService.ports.soassl.targetPort | Service target port number or name on pod | soassl | >=6.1 | |
asi.frontendService.ports.soassl.nodePort | Service node port | 30003 | >=6.1 | |
asi.frontendService.ports.soassl.protocol | Service port connection protocol | TCP | >=6.1 | |
asi.frontendService.ports.restHttpAdapter.name | Service restHttpAdapter port name | rest-adapter | >=6.1.0.1 | |
asi.frontendService.ports.restHttpAdapter.port | Service restHttpAdapter port number | 35007 | >=6.1.0.1 | |
asi.frontendService.ports.restHttpAdapter.targetPort | Service target port number or name on pod | rest-adapter | >=6.1.0.1 | |
asi.frontendService.ports.restHttpAdapter.nodePort | Service node port | 30007 | >=6.1.0.1 | |
asi.frontendService.ports.restHttpAdapter.protocol | Service port connection protocol | TCP | >=6.1.0.1 | |
asi.frontendService.extraPorts | Extra ports for service | | >=6.1 | |
asi.frontendService.loadBalancerIP | LoadBalancer IP for service | | >=6.1.0.1 | |
asi.frontendService.loadBalancerSourceRanges | LoadBalancer IP ranges for asi service | | >=6.1.2.3 | |
asi.frontendService.annotations | Additional annotations for the asi frontendService | | >=6.1.0.1 | |
asi.backendService.type | Service type | NodePort | >=6.1 | |
asi.backendService.sessionAffinity | To make sure that connections from a particular client are passed to the same pod each time, select session affinity based on the client's IP address by setting sessionAffinity to 'ClientIP' for the service. | | >=6.1.2.2 | |
asi.backendService.sessionAffinityConfig.timeoutSeconds | Timeout after which traffic is directed to a different pod from a particular client. | 10800 | >=6.1.2.2 | |
asi.backendService.externalTrafficPolicy | Denotes whether this service routes external traffic to node-local or cluster-wide endpoints. Cluster and Local are the two available options. Cluster obscures the client source IP and may cause a second hop to another node, but it should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services. Example: In a passive FTP connection scenario, a particular client should be directed to the same pod for the control and data connections. For a LoadBalancer type of service, the sessionAffinity=ClientIP configuration ensures that a particular client request is directed to the same node (not the pod). On the node, the request must be directed to the same pod by preserving the client IP. By setting externalTrafficPolicy=Local, traffic is directed to a pod running on the same node while preserving the client IP. With externalTrafficPolicy=Cluster, traffic may be directed to a pod running on another node; this obscures the source client IP and breaks the affinity needed for an FTP passive connection. The same rule applies to the NodePort type of service. Note: If the service type is ClusterIP, the externalTrafficPolicy configuration is ignored. | Cluster | >=6.1.2.2 | |
asi.backendService.ports | Ports for service | | >=6.1 | |
asi.backendService.portRanges | Port ranges for service | | >=6.1 | |
asi.backendService.loadBalancerIP | LoadBalancer IP for service | | >=6.1.0.1 | |
asi.backendService.loadBalancerSourceRanges | LoadBalancer IP ranges for asi service | | >=6.1.2.3 | |
asi.backendService.annotations | Additional annotations for the asi backendService | | >=6.1.0.1 | |
asi.livenessProbe.initialDelaySeconds | Livenessprobe initial delay in seconds | 60 | >=6.1 | |
asi.livenessProbe.timeoutSeconds | Livenessprobe timeout in seconds | 30 | >=6.1 | |
asi.livenessProbe.periodSeconds | Livenessprobe interval in seconds | 60 | >=6.1 | |
asi.readinessProbe.initialDelaySeconds | ReadinessProbe initial delay in seconds | 120 | >=6.1 | |
asi.readinessProbe.timeoutSeconds | ReadinessProbe timeout in seconds | 5 | >=6.1 | |
asi.readinessProbe.periodSeconds | ReadinessProbe interval in seconds | 60 | >=6.1 | |
asi.startupProbe.initialDelaySeconds | StartupProbe initial delay in seconds | 120 | >=6.1.2 | |
asi.startupProbe.timeoutSeconds | StartupProbe timeout in seconds | 30 | >=6.1.2 | |
asi.startupProbe.periodSeconds | StartupProbe interval in seconds | 60 | >=6.1.2 | |
asi.startupProbe.failureThreshold | FailureThreshold for StartupProbe | 3 | >=6.1.2 | |
asi.internalAccess.enableHttps | Enable https for internal traffic | FALSE | >=6.1 | |
asi.internalAccess.enableHttps.httpsPort | Application internal https port | | >=6.1 | |
asi.externalAccess.protocol | Protocol for application client side components to access the application | http | >=6.1 | |
asi.externalAccess.address | External address (ip address/host) for application client side components to access the application | | >=6.1 | |
asi.externalAccess.port | External port for application client side components to access the application | | >=6.1 | |
asi.ingress.internal.host | Internal host name for ingress resource | | >=6.1 | |
asi.ingress.internal.tls.enabled | Enable TLS for ingress | FALSE | >=6.1 | |
asi.ingress.internal.tls.secretName | TLS secret name | | >=6.1 | |
asi.ingress.internal.extraPaths | Extra paths for ingress resource | | >=6.1 | |
asi.ingress.external.host | External host name for ingress resource | | >=6.1 | |
asi.ingress.external.tls.enabled | Enable TLS for ingress | FALSE | >=6.1 | |
asi.ingress.external.tls.secretName | TLS secret name | | >=6.1 | |
asi.ingress.external.extraPaths | Extra paths for ingress resource | | >=6.1 | |
asi.extraPVCs | Extra volume claims | | >=6.1 | |
asi.extraPVCs.enableVolumeClaimPerPod | Enable persistent volume for extraPVCs at pod level | FALSE | >=6.1.2.2 | |
asi.extraPVCs.predefinedPVCName | Pre-defined persistent volume claim name for the asi extra persistent volume claim | | >=6.1.2.2 | |
asi.extraVolumeMounts | Extra volume mounts | | >=6.1 and <= 6.1.0.1 | |
asi.extraInitContainers | Extra init containers | | >=6.1 | |
asi.resources | CPU/Memory resource requests/limits | | >=6.1 | |
asi.autoscaling.enabled | Enable autoscaling | FALSE | >=6.1 | |
asi.autoscaling.minReplicas | Minimum replicas for autoscaling | 1 | >=6.1 | |
asi.autoscaling.maxReplicas | Maximum replicas for autoscaling | 2 | >=6.1 | |
asi.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 60 | >=6.1 | |
asi.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | FALSE | >=6.1 | |
asi.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 | >=6.1 | |
asi.extraLabels | Extra labels | | >=6.1 | |
asi.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer to the "Affinity" section. | | >=6.1 | |
asi.topologySpreadConstraints | Topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. | | >=6.1.0.1 | |
asi.tolerations | Tolerations allow (but do not require) the pods to schedule onto nodes with matching taints. | | >=6.1.0.1 | |
asi.extraSecrets | Extra secrets. If `mountAsVolume` is `true`, the secrets are mounted as a volume on the `/ibm/resources/<secret-name>` folder; otherwise they are exposed as environment variables. | | >=6.1.0.1 | |
asi.extraConfigMaps | Extra configmaps. If `mountAsVolume` is `true`, the configmap is mounted as a volume on the `/ibm/resources/<configmap-name>` folder; otherwise it is exposed as environment variables. | | >=6.1.0.1 | |
asi.myFgAccess.myFgPort | If myFG is hosted on an HTTP Server adapter on the ASI server, provide the internal port used while configuring it. | | >=6.1.0.1 | |
asi.myFgAccess.myFgProtocol | If myFG is hosted on an HTTP Server adapter on the ASI server, provide the internal protocol used while configuring it. | | >=6.1.0.1 | |
asi.hostAliases | Host aliases to be added to pod /etc/hosts | | >=6.1.0.3 | |
ac.replicaCount | Adapter Container server (ac) deployment replica count | 1 | >=6.1 | |
ac.env.jvmOptions | JVM options for AC | | >=6.1 | |
ac.env.extraEnvs | Provide extra environment variables for AC | | >=6.1.0.3 | |
ac.frontendService.type | Service type | NodePort | >=6.1 | |
ac.frontendService.sessionAffinityConfig.timeoutSeconds | Session affinity timeout in seconds | 10800 | >=6.1.2.2 | |
ac.frontendService.externalTrafficPolicy | Route external traffic to node-local or cluster-wide endpoints. Note: If the service type is ClusterIP, the externalTrafficPolicy configuration is ignored. | Cluster | >=6.1.2.2 | |
ac.frontendService.ports.http.name | Service http port name | http | >=6.1 | |
ac.frontendService.ports.http.port | Service http port number | 35001 | >=6.1 | |
ac.frontendService.ports.http.targetPort | Service target port number or name on pod | http | >=6.1 | |
ac.frontendService.ports.http.nodePort | Service node port | 30001 | >=6.1 | |
ac.frontendService.ports.http.protocol | Service port connection protocol | TCP | >=6.1 | |
ac.frontendService.extraPorts | Extra ports for service | | >=6.1 | |
ac.frontendService.loadBalancerIP | LoadBalancer IP for service | | >=6.1.0.1 | |
ac.frontendService.loadBalancerSourceRanges | LoadBalancer IP ranges for ac service | | >=6.1.2.3 | |
ac.frontendService.annotations | Additional annotations for the ac frontendService | | >=6.1.0.1 | |
ac.backendService.type | Service type | NodePort | >=6.1 | |
ac.backendService.sessionAffinity | To make sure that connections from a particular client are passed to the same pod each time, select session affinity based on the client's IP address by setting sessionAffinity to 'ClientIP' for the service. | | >=6.1.2.2 | |
ac.backendService.sessionAffinityConfig.timeoutSeconds | Timeout after which traffic is directed to a different pod from a particular client. | 10800 | >=6.1.2.2 | |
ac.backendService.externalTrafficPolicy | Denotes whether this service routes external traffic to node-local or cluster-wide endpoints. Cluster and Local are the two available options. Cluster obscures the client source IP and may cause a second hop to another node, but it should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services. Example: In a passive FTP connection scenario, a particular client should be directed to the same pod for the control and data connections. For a LoadBalancer type of service, the sessionAffinity=ClientIP configuration ensures that a particular client request is directed to the same node (not the pod). On the node, the request must be directed to the same pod by preserving the client IP. By setting externalTrafficPolicy=Local, traffic is directed to a pod running on the same node while preserving the client IP. With externalTrafficPolicy=Cluster, traffic may be directed to a pod running on another node; this obscures the source client IP and breaks the affinity needed for an FTP passive connection. The same rule applies to the NodePort type of service. Note: If the service type is ClusterIP, the externalTrafficPolicy configuration is ignored. | Cluster | >=6.1.2.2 | |
ac.backendService.ports | Ports for service | | >=6.1 | |
ac.backendService.portRanges | Port ranges for service | | >=6.1 | |
ac.backendService.loadBalancerIP | LoadBalancer IP for service | | >=6.1.0.1 | |
ac.backendService.loadBalancerSourceRanges | LoadBalancer IP ranges for ac service | | >=6.1.2.3 | |
ac.backendService.annotations | Additional annotations for the ac backendService | | >=6.1.0.1 | |
ac.livenessProbe.initialDelaySeconds | Livenessprobe initial delay in seconds | 60 | >=6.1 | |
ac.livenessProbe.timeoutSeconds | Livenessprobe timeout in seconds | 5 | >=6.1 | |
ac.livenessProbe.periodSeconds | Livenessprobe interval in seconds | 60 | >=6.1 | |
ac.readinessProbe.initialDelaySeconds | ReadinessProbe initial delay in seconds | 120 | >=6.1 | |
ac.readinessProbe.timeoutSeconds | ReadinessProbe timeout in seconds | 5 | >=6.1 | |
ac.readinessProbe.periodSeconds | ReadinessProbe interval in seconds | 60 | >=6.1 | |
ac.ingress.internal.host | Internal host name for ingress resource | | >=6.1 | |
ac.ingress.internal.tls.enabled | Enable TLS for ingress | FALSE | >=6.1 | |
ac.ingress.internal.tls.secretName | TLS secret name | | >=6.1 | |
ac.ingress.internal.extraPaths | Extra paths for ingress resource | | >=6.1 | |
ac.ingress.external.host | External host name for ingress resource | | >=6.1 | |
ac.ingress.external.tls.enabled | Enable TLS for ingress | FALSE | >=6.1 | |
ac.ingress.external.tls.secretName | TLS secret name | | >=6.1 | |
ac.ingress.external.extraPaths | Extra paths for ingress resource | | >=6.1 | |
ac.extraPVCs | Extra volume claims | | >=6.1 | |
ac.extraPVCs.enableVolumeClaimPerPod | Enable persistent volume for extraPVCs at pod level | FALSE | >=6.1.2.2 | |
ac.extraPVCs.predefinedPVCName | Pre-defined persistent volume claim name for the ac extra persistent volume claim | | >=6.1.2.2 | |
ac.extraVolumeMounts | Extra volume mounts | | >=6.1 and <= 6.1.0.1 | |
ac.extraInitContainers | Extra init containers | | >=6.1 | |
ac.resources | CPU/Memory resource requests/limits | | >=6.1 | |
ac.autoscaling.enabled | Enable autoscaling | FALSE | >=6.1 | |
ac.autoscaling.minReplicas | Minimum replicas for autoscaling | 1 | >=6.1 | |
ac.autoscaling.maxReplicas | Maximum replicas for autoscaling | 2 | >=6.1 | |
ac.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 60 | >=6.1 | |
ac.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | FALSE | >=6.1 | |
ac.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 | >=6.1 | |
ac.extraLabels | Extra labels | | >=6.1 | |
ac.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. | | >=6.1 | |
ac.topologySpreadConstraints | Topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. | | >=6.1.0.1 | |
ac.tolerations | Tolerations allow (but do not require) the pods to schedule onto nodes with matching taints. | | >=6.1.0.1 | |
ac.extraSecrets | Extra secrets. If `mountAsVolume` is `true`, the secrets are mounted as a volume on the `/ibm/resources/<secret-name>` folder; otherwise they are exposed as environment variables. | | >=6.1.0.1 | |
ac.extraConfigMaps | Extra configmaps. If `mountAsVolume` is `true`, the configmap is mounted as a volume on the `/ibm/resources/<configmap-name>` folder; otherwise it is exposed as environment variables. | | >=6.1.0.1 | |
ac.myFgAccess.myFgPort | If myFG is hosted on an HTTP Server adapter on the AC server, provide the internal port used while configuring it. | | >=6.1.0.1 | |
ac.myFgAccess.myFgProtocol | If myFG is hosted on an HTTP Server adapter on the AC server, provide the internal protocol used while configuring it. | | >=6.1.0.1 | |
ac.hostAliases | Host aliases to be added to pod /etc/hosts | | >=6.1.0.3 | |
api.replicaCount | Liberty API server (API) deployment replica count | 1 | >=6.1 | |
api.env.jvmOptions | JVM options for API | | >=6.1 | |
api.env.extraEnvs | Provide extra environment variables for API | | >=6.1.0.3 | |
api.frontendService.type | Service type | NodePort | >=6.1 | |
api.frontendService.sessionAffinityConfig.timeoutSeconds | Session affinity timeout in seconds | 10800 | >=6.1.2.2 | |
api.frontendService.externalTrafficPolicy | Route external traffic to node-local or cluster-wide endpoints. Note: If the service type is ClusterIP, the externalTrafficPolicy configuration is ignored. | Cluster | >=6.1.2.2 | |
api.frontendService.ports.http.name | Service http port name | http | >=6.1 | |
api.frontendService.ports.http.port | Service http port number | 35002 | >=6.1 | |
api.frontendService.ports.http.targetPort | Service target port number or name on pod | http | >=6.1 | |
api.frontendService.ports.http.nodePort | Service node port | 30002 | >=6.1 | |
api.frontendService.ports.http.protocol | Service port connection protocol | TCP | >=6.1 | |
api.frontendService.ports.https.name | Service https port name | https | >=6.1 | |
api.frontendService.ports.https.port | Service https port number | 35003 | >=6.1 | |
api.frontendService.ports.https.targetPort | Service target port number or name on pod | https | >=6.1 | |
api.frontendService.ports.https.nodePort | Service node port | 30003 | >=6.1 | |
api.frontendService.ports.https.protocol | Service port connection protocol | TCP | >=6.1 | |
api.frontendService.extraPorts | Extra ports for service | >=6.1 | ||
api.frontendService.loadBalancerIP | LoadBalancer IP for service | >=6.1.0.1 | ||
api.frontendService.loadBalancerSourceRanges | LoadBalancer IP Ranges for api service | >=6.1.2.3 | ||
api.frontendService.annotations | Additional annotations for the api frontendService | >=6.1.0.1 | ||
api.livenessProbe.initialDelaySeconds | Livenessprobe initial delay in seconds | 120 | >=6.1 | |
api.livenessProbe.timeoutSeconds | Livenessprobe timeout in seconds | 5 | >=6.1 | |
api.livenessProbe.periodSeconds | Livenessprobe interval in seconds | 60 | >=6.1 | |
api.readinessProbe.initialDelaySeconds | ReadinessProbe initial delay in seconds | 120 | >=6.1 | |
api.readinessProbe.timeoutSeconds | ReadinessProbe timeout in seconds | 5 | >=6.1 | |
api.readinessProbe.periodSeconds | ReadinessProbe interval in seconds | 60 | >=6.1 | |
api.internalAccess.enableHttps | Enable https for internal traffic | FALSE | >=6.1 | |
api.externalAccess.protocol | Protocol for application client side components to access the application | http | >=6.1 | |
api.externalAccess.address | External address (ip/host) for application client side components to access the application | >=6.1 | ||
api.externalAccess.port | External port for application client side components to access the application | >=6.1 | ||
api.ingress.internal.host | Internal Host name for ingress resource | >=6.1 | ||
api.ingress.internal.tls.enabled | Enable TLS for ingress | FALSE | >=6.1 | |
api.ingress.internal.tls.secretName | TLS secret name | >=6.1 | ||
api.extraPVCs | Extra volume claims | >=6.1 | ||
api.extraPVCs.enableVolumeClaimPerPod | Enable persistent volume for extraPVCs at pod level | FALSE | >=6.1.2.2 | |
api.extraPVCs.predefinedPVCName | Pre defined persistent volume claim name for api extra persistent volume claim | >=6.1.2.2 | ||
api.extraVolumeMounts | Extra volume mounts | >=6.1 and <= 6.1.0.1 | ||
api.extraInitContainers | Extra init containers | >=6.1 | ||
api.resources | CPU/Memory resource requests/limits | >=6.1 | ||
api.defaultPodDisruptionBudget.enabled | Enable default pod disruption budget | FALSE | >=6.1 | |
api.defaultPodDisruptionBudget.minAvailable | Minimum available for pod disruption budget | 1 | >=6.1 | |
api.extraLabels | Extra labels | >=6.1 | ||
api.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution. | >=6.1 | ||
api.topologySpreadConstraints | Topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. | >=6.1.0.1 | ||
api.tolerations | Tolerations applied to pods; allow (but do not require) the pods to schedule onto nodes with matching taints | >=6.1.0.1 | ||
api.extraSecrets | Extra secrets. If `mountAsVolume` is `true`, the secrets are mounted as a volume under the `/ibm/resources/<secret-name>` folder; otherwise they are exposed as environment variables. | >=6.1.0.1 | ||
api.extraConfigMaps | Extra configmaps. If `mountAsVolume` is `true`, the configmap is mounted as a volume under the `/ibm/resources/<configmap-name>` folder; otherwise it is exposed as environment variables. | >=6.1.0.1 | ||
api.hostAliases | Host aliases to be added to pod /etc/hosts | >=6.1.0.3 | ||
nameOverride | Chart resource short name override | >=6.1 | ||
fullnameOverride | Chart resource full name override | >=6.1 | ||
dashboard.enabled | Enable sample Grafana dashboard | FALSE | >=6.1 | |
test.image.repository | Repository for Docker image used for Helm test and cleanup | cp.icr.io/cp | >=6.1 | |
test.image.name | Helm test and cleanup Docker image name | opencontent-common-utils | >=6.1 | |
test.image.tag | Helm test and cleanup Docker image tag | 1.1.60 | >=6.1 | |
test.image.digest | Helm test and cleanup Docker image digest. Takes precedence over tag | sha256:6a514b98fe8f006d00a4bbbcc87241d900ac6a6f28f035d89a61a47aa7af25c7 | >=6.1.0.3 | |
test.image.pullPolicy | Pull policy for Helm test image repository | IfNotPresent | >=6.1 | |
test.image.extraLabels | Extra labels | >=6.1.2.3 | ||
purge.enabled | Enable external purge job | FALSE | >=6.1 | |
purge.image.repository | External purge docker image repository | purge | >=6.1 | |
purge.image.tag | External purge image tag | 6.1 | >=6.1 | |
purge.image.digest | External purge image digest. Takes precedence over tag | >=6.1.0.3 | ||
purge.image.pullPolicy | Pull policy for external purge docker image | IfNotPresent | >=6.1 | |
purge.image.pullSecret | Pull secret for repository access | global.image.pullSecret | >=6.1 | |
purge.extraLabels | Extra labels | >=6.1.2.3 | ||
purge.schedule | External purge job creation and execution schedule. It is a Cron format string, such as 1 * * * * or @hourly. Refer to the Kubernetes documentation for further details on the Cron schedule string. Specify the schedule value in quotes. | >=6.1 | ||
purge.startingDeadlineSeconds | Deadline in seconds for starting the job if it misses its scheduled time for any reason | >=6.1 | ||
purge.activeDeadlineSeconds | Duration in seconds that the external purge job may run. Once the job reaches activeDeadlineSeconds, the external purge stops and the job is marked as Completed. | >=6.1 | ||
purge.concurrencyPolicy | Specifies the behavior for concurrent execution of the external purge job. Valid values are Forbid (concurrent jobs are not allowed) and Replace (if it is time for the new job run and the previous job has not finished yet, the new job replaces the currently running job). | Forbid | >=6.1 | |
purge.suspend | If set to true, all subsequent executions are suspended. This setting does not apply to executions that have already started. | FALSE | >=6.1 | |
purge.successfulJobsHistoryLimit | Specify how many completed external purge jobs should be kept in history | 3 | >=6.1 | |
purge.failedJobsHistoryLimit | Specify how many failed external purge jobs should be kept in history | 1 | >=6.1 | |
purge.env.jvmOptions | JVM options for purge | >=6.1 | ||
purge.env.extraEnvs | Provide extra environment variables for Purge Job | >=6.1.0.3 | ||
purge.resources | CPU/Memory resource requests/limits for the external purge job pod | 1 CPU and 2Gi Memory | >=6.1 | |
purge.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity". | >=6.1 | ||
purge.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution. Refer section "Affinity". | >=6.1 | ||
resourcesInit.enabled | Enable resource init containers | FALSE | >=6.1.2.0 | |
resourcesInit.image.repository | Repository for resource init container images | cp.icr.io/cp/ibm-b2bi/ | >=6.1.2.1 | |
resourcesInit.image.name | Docker image name | b2bi-resources | >=6.1.2.1 | |
resourcesInit.image.tag | Docker image tag | 6.1.2.x | >=6.1.2.1 | |
resourcesInit.image.digest | Docker image digest. Takes precedence over the tag. | sha256:660f8b8a48985d2981dc1bb31b9667aabfe4b8829221a8e48e64e3de01eaed08 | >=6.1.2.1 | |
resourcesInit.image.pullPolicy | Pull policy for the repository | IfNotPresent | >=6.1.2.0 | |
resourcesInit.command | Command to be executed in the resource init container. | >=6.1.2.0 |
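As a hedged illustration of the purge job settings described above (the key nesting is inferred from the dotted parameter names; verify it against the chart's values.yaml), note that the Cron string must be quoted so YAML does not misparse the asterisks:

```yaml
purge:
  enabled: true
  # Quoted Cron string: run at minute 1 of every hour
  schedule: "1 * * * *"
  # Do not start a new run while a previous one is still active
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
```

These values mirror the documented defaults for concurrencyPolicy and the job history limits; only `enabled` and `schedule` deviate from them here.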
You can create a custom values.yaml file that contains only the specific configuration parameters for which a value needs to be provided or overridden, while retaining the yaml structure. The custom values file (say my_values.yaml) can then be provided with the -f option to the helm install command. If no image pull secret is configured in the values.yaml file and the OCP instance already has a secret named ibm-entitlement-secret, then ibm-entitlement-secret is used by default to pull the images.
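For example, a minimal custom values file might override only the license acceptance and the API node ports, leaving all other defaults intact. The parameter names come from the table above; the nesting is the standard Helm convention for dotted keys and should be checked against the chart's packaged values.yaml:

```yaml
# my_values.yaml -- only the parameters being overridden, with the yaml structure retained
global:
  license: true              # accept the license after reading the terms and conditions
  image:
    pullSecret: ibm-entitlement-secret
api:
  frontendService:
    ports:
      http:
        nodePort: 30002
      https:
        nodePort: 30003
```

The file is then supplied to the install command, for example `helm install my-b2bi <chart> -f my_values.yaml`, where `my-b2bi` and `<chart>` are placeholders for your release name and chart reference.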