Configuring - Understanding values.yaml

The following table describes the configuration parameters listed in the values.yaml file of the Helm chart, which are used to complete the installation. Use the following methods to set these parameters:
  • Specify parameters that need to be overridden using the --set key=value[,key=value] argument at Helm install.
    Example:
    Helm version 2:
    
    helm install --name <release-name> \
    --set cdArgs.cport=9898 \
    ...
    ibm-connect-direct-1.1.x.tgz
    Helm version 3:
    
    helm install <release-name> \
    --set cdArgs.cport=9898 \
    ...
    ibm-connect-direct-1.1.x.tgz
    
  • Alternatively, provide a YAML file with values specified for the configurable parameters when you install the chart. The values.yaml file can be obtained from the Helm chart itself using the following command:
    For an online cluster:
    helm inspect values ibm-helm/ibm-connect-direct > my-values.yaml
    For an offline cluster:
    helm inspect values <path to ibm-connect-direct Helm chart> > my-values.yaml
    Now, edit the parameters in the my-values.yaml file and use it for the installation.
    Example
    Helm version 2:
    helm install --name <release-name> -f my-values.yaml ... ibm-connect-direct-1.1.x.tgz
    Helm version 3:
    
    helm install <release-name> -f my-values.yaml ... ibm-connect-direct-1.1.x.tgz
    
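For illustration, a my-values.yaml that overrides a few common parameters might look like the following sketch (the port numbers and storage class name are examples, not recommendations):

```yaml
# Example my-values.yaml fragment -- values shown are illustrative
license: true          # set true to accept the license agreement
cdArgs:
  nodeName: cdnode
  cport: 9898          # client port override (default 1363)
  sport: 9899          # server port override (default 1364)
pvClaim:
  storageClassName: "managed-nfs"   # example storage class name
```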
  • To mount extra volumes, use any of the following templates.

    For hostPath
    extraVolumeMounts:
      - name: <name>
        mountPath: <path inside container>
    extraVolume:
      - name: <name same as name in extraVolumeMounts>
        hostPath:
          path: <path on host machine>
          type: DirectoryOrCreate
    For NFS Server
    extraVolumeMounts:
      - name: <name>
        mountPath: <path inside container>
    extraVolume:
      - name: <name same as name in extraVolumeMounts>
        nfs:
          path: <nfs data path>
          server: <server ip>

    Alternatively, this can also be done using the --set flag.

    Example

    helm install --name <release-name> --set extraVolume[0].name=<name>,extraVolume[0].hostPath.path=<path on host machine>,extraVolume[0].hostPath.type="DirectoryOrCreate",extraVolumeMounts[0].name=<name same as name in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
    ...
    ibm-connect-direct-1.1.x.tgz
    OR
    
    helm install --name <release-name> --set extraVolume[0].name=<name>,extraVolume[0].nfs.path=<nfs data path>,extraVolume[0].nfs.server=<NFS server IP>,extraVolumeMounts[0].name=<name same as name in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
    ...
    ibm-connect-direct-1.1.x.tgz

    If an extra volume is mounted, make sure the container user (cduser/appuser) has the required read/write permissions. These permissions can be granted through the container user's supplemental groups or fsGroup, as applicable. For example, if an extra NFS share where the customer's user resides is mounted and its POSIX group ID is 3535, add this group ID as a supplemental group during deployment so that the container user becomes a member of this group.
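The supplemental-group scenario above could be expressed in my-values.yaml as follows (the group ID 3535 is the illustrative value from the example):

```yaml
storageSecurity:
  supplementalGroups: 3535   # POSIX group ID of the NFS share (example value)
```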

  • To use the Port Check Ignore List feature, configure the following:

    service.externalTrafficPolicy: "Local"

    After a successful deployment, add the external IP, which should be the IP of the node where the pod is deployed, to the Port Check Ignore List IP addresses in initparm.cfg.
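In values.yaml terms, the setting above corresponds to the following fragment:

```yaml
service:
  externalTrafficPolicy: "Local"
```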

Parameter Description Default Value
licenseType Specify prod or non-prod for production or non-production license type respectively prod
license License agreement. Set true to accept the license. false
env.timezone Timezone UTC
arch Node Architecture amd64
replicaCount Number of deployment replicas 1
image.repository Image full name including repository  
image.tag Image tag  
digest.enabled Enable/Disable digest of image to be used false
digest.value The digest value for the image  
image.imageSecrets Image pull secrets  
image.pullPolicy Image pull policy IfNotPresent
cdArgs.nodeName Node name cdnode
cdArgs.crtName Certificate file name  
cdArgs.localCertLabel Specify certificate import label in keystore Client-API
cdArgs.cport Client Port 1363
cdArgs.sport Server Port 1364
saclConfig Configuration for SACL n
cdArgs.configDir Directory for storing Connect:Direct configuration files CDFILES

appUser.name Name of Non-Admin Connect:Direct User appuser
appUser.uid UID of Non-Admin Connect:Direct User  
appUser.gid GID of Non-Admin Connect:Direct User  
storageSecurity.fsGroup Group ID for File System Group 45678
storageSecurity.supplementalGroups Group ID for Supplemental group 5555
persistence.enabled To use persistent volume true
pvClaim.existingClaimName Provide name of existing PV claim to be used  
persistence.useDynamicProvisioning To use storage classes to dynamically create PV false
pvClaim.accessMode Access mode for PV Claim ReadWriteOnce
pvClaim.storageClassName Storage class of the PVC  
pvClaim.selector.label PV label key to bind this PVC  
pvClaim.selector.value PV label value to bind this PVC  
pvClaim.size Size of PVC volume 100Mi
service.type Kubernetes service type exposing ports LoadBalancer
service.apiport.name API port name api
service.apiport.port API port number 1363
service.apiport.protocol Protocol for service TCP
service.ftport.name Server (File Transfer) Port name ft
service.ftport.port Server (File Transfer) Port number 1364
service.ftport.protocol Protocol for service TCP
service.loadBalancerIP Provide the LoadBalancer IP  
service.loadBalancerSourceRanges Provide Load Balancer Source IP ranges []
service.annotations Provide the annotations for service {}
service.externalTrafficPolicy Specify if external Traffic policy is needed  
service.sessionAffinity Specify session affinity type ClientIP
service.externalIP External IP for service discovery []
networkPolicy.from Provide from specification for network policy for ingress traffic []
networkPolicy.to Provide to specification for network policy for egress traffic []
secret.certSecretName Name of secret resource of certificate files for dynamic provisioning  
secret.secretName Secret name for Connect:Direct password store  
resources.limits.cpu Container CPU limit 500m
resources.limits.memory Container memory limit 2000Mi
resources.limits.ephemeral-storage Specify ephemeral storage limit size for pod's container "5Gi"
resources.requests.cpu Container CPU requested 500m
resources.requests.memory Container Memory requested 2000Mi
resources.requests.ephemeral-storage Specify ephemeral storage request size for pod's container "3Gi"
serviceAccount.create Enable/disable service account creation true
serviceAccount.name Name of Service Account to use for container  
extraVolumeMounts Extra Volume mounts  
extraVolume Extra volumes  
affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution  
affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution  
affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution  
affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution  
affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution  
affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution  
livenessProbe.initialDelaySeconds Initial delay for liveness 45
livenessProbe.timeoutSeconds Timeout for liveness 5
livenessProbe.periodSeconds Time period for liveness 15
readinessProbe.initialDelaySeconds Initial delays for readiness 40
readinessProbe.timeoutSeconds Timeout for readiness 5
readinessProbe.periodSeconds Time period for readiness 25
route.enabled Route for OpenShift Enabled/Disabled false
cduser.uid UID for cduser 45678
cduser.gid GID for cduser 45678
ldap.enabled Enable/Disable LDAP configuration false
ldap.host LDAP server host  
ldap.port LDAP port  
ldap.domain LDAP Domain  
ldap.tls Enable/Disable LDAP TLS false
ldap.caCert LDAP CA Certificate name  
ldap.clientValidation Enable/Disable LDAP Client Validation false
ldap.clientCert LDAP Client Certificate name  
ldap.clientKey LDAP Client Certificate key name  
extraLabels Provide extra labels for all resources of this chart {}
cdfa.fileAgentEnable Specify y/n to Enable/Disable File Agent n

Affinity

The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.
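As one sketch, node affinity could be set in my-values.yaml using the standard Kubernetes pod-spec shape (this assumes the chart passes the affinity value through to the pod spec unchanged; the label key and value are examples):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch   # schedule only on amd64 nodes
              operator: In
              values:
                - amd64
```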

Note: For the exact parameters, their values, and their descriptions, refer to the values.yaml file present in the Helm chart itself. Untar the Helm chart package to view this file inside the chart directory.

Understanding LDAP deployment parameters

This section describes the steps required to implement the PAM and SSSD configuration with Connect:Direct UNIX to authenticate external user accounts through OpenLDAP.
  • Updating the initparam file: When LDAP authentication is enabled, the container startup script automatically updates the initparam configuration to support the PAM module. The following line is added to initparm.cfg:
     ndm.pam:service=login:
  • The following packages are pre-installed in the container image to enable the LDAP support:
    openldap-client, sssd, sssd-ldap, openssl-perl, authselect
  • The following default configuration file (/etc/sssd/sssd.conf) is added to the image. You must replace the uppercase placeholder values (such as LDAP_HOST) with the values of the corresponding environment variables, as explained in the next section.
    [domain/default]
    id_provider = ldap
    autofs_provider = ldap
    auth_provider = ldap
    chpass_provider = ldap
    ldap_uri = LDAP_PROTOCOL://LDAP_HOST:LDAP_PORT
    ldap_search_base = LDAP_DOMAIN
    ldap_id_use_start_tls = True
    ldap_tls_cacertdir = /etc/openldap/certs
    ldap_tls_cert = /etc/openldap/certs/LDAP_TLS_CERT_FILE
    ldap_tls_key = /etc/openldap/certs/LDAP_TLS_KEY_FILE
    cache_credentials = True
    ldap_tls_reqcert = allow
  • Description of the Certificates required for the configuration:
    • Mount certificates inside CDU Container:
      • Copy the certificates needed for the LDAP configuration into the mapped directory that is used to share the Connect:Direct UNIX Secure Plus certificates (the CDFILES/cdcert directory by default).
    • DNS resolution: If TLS is enabled and the hostname of the LDAP server is passed as "ldap.host", it must be ensured that the hostname is resolvable inside the container. It is the responsibility of the Cluster Administrator to ensure DNS resolution inside the pod's container.
    • Certificates creation and configuration: This section provides a sample way to generate the certificates:
      • LDAP_CACERT - The root and all the intermediate CA certificates need to be copied into one file.
      • LDAP_CLIENT_CERT – The client certificate which the server must be able to validate.
      • LDAP_CLIENT_KEY – The client certificate key.
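As a hedged example, self-signed certificates for a test setup could be generated with OpenSSL as follows. The file names and subject names are illustrative; for production, use certificates issued by your organization's CA.

```shell
# Generate a test CA (illustrative subject name)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.pem -subj "/CN=Test-LDAP-CA"

# Generate a client key and CSR, then sign the CSR with the test CA
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=cd-ldap-client"
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 365 -out client.pem

# ca.pem     -> LDAP_CACERT (append any intermediate CA certificates to this file)
# client.pem -> LDAP_CLIENT_CERT
# client.key -> LDAP_CLIENT_KEY
```

Copy the resulting files into the shared certificate directory described above.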
    • Use the below new parameters for LDAP configuration:
      • ldap.enabled
      • ldap.host
      • ldap.port
      • ldap.domain
      • ldap.tls
      • ldap.caCert
      • ldap.clientValidation
      • ldap.clientCert
      • ldap.clientKey
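Putting these parameters together, an LDAP-enabled deployment might override the following values in my-values.yaml (the host, port, domain, and certificate file names shown are illustrative):

```yaml
ldap:
  enabled: true
  host: "ldap.example.com"      # illustrative LDAP server host
  port: "636"                   # illustrative port
  domain: "dc=example,dc=com"   # illustrative search base
  tls: true
  caCert: "ca.pem"              # file placed in the shared certificate directory
  clientValidation: true
  clientCert: "client.pem"
  clientKey: "client.key"
```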