Installing IBM Connect:Direct for UNIX using an IBM Certified Container Software

IBM Certified Container Software (CCS) can be installed on a Kubernetes-based cluster.

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. This application release has been qualified and certified on an on-premises Red Hat® OpenShift® Container Platform (OCP), which is an enterprise-ready Kubernetes container platform with full-stack automated operations to manage the deployment lifecycle.

IBM CCS offers a container image and a helm chart. It meets the standard criteria for the packaging and deployment of containerized software. In addition, the container image is Red Hat certified.

For more details, refer to the following links:

Planning

The following is a step-by-step guide to planning and installing the IBM Certified Container Software (CCS) in your cluster. Go through each link one by one and perform the operations as applicable to your deployment needs.

Before you deploy the application, you must use the following information to plan the deployment:
  • Verifying system requirements
  • Application license requirements
  • IBM Licensing and Metering service
  • Certificates files for Secure Plus
  • PSP and SCC requirements
  • Hardening RedHat OpenShift cluster
  • Encrypting etcd data
  • Running external utilities/tools

User Roles

Deployment tasks can be performed by a cluster administrator or a project administrator. The following list shows the types of tasks that are typically associated with each administrative role. The list is not intended to be exhaustive.
Cluster Administrator
  • Creating namespaces (projects)
  • Creating PSP or SCC and assigning them to namespace (project)
  • Configuring Storage for data persistence
  • Providing environmental details
  • Installing and configuring IBM Licensing and Metering Service
Project Administrator
  • Creating Secrets
  • Installing IBM Certified Container Software for Connect Direct UNIX
  • Validating the install
  • Post deployment tasks

Verifying System Requirements

Before you begin the deployment process, verify that your system meets the hardware and software requirements specified for this release.

The Certified Container Software for IBM Connect:Direct® for UNIX has been verified on Red Hat Linux and requires the following minimum hardware and software resources:

Hardware Requirements

  • 1 GB Disk space
  • 500m CPU core
  • 2 GB RAM
  • 100 MB for Persistent Volume
  • 3-5 GB of ephemeral storage
Note:
  • 100 MB is the minimum requirement for a fresh deployment.
  • For an upgrade, make sure you have sufficient space on the persistent volume so that a backup of the application data can reside on it.
  • For production, double all of the above requirements and make the Persistent Volume size at least 1 GB.
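To check whether a worker node has enough allocatable CPU and memory before deploying, you can, for example, run the following commands from a machine with cluster access (the node name is a placeholder):

kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
kubectl describe node <worker-node-name> | grep -A 7 "Allocated resources"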

Software Requirements

For Kubernetes cluster:
For Red Hat OpenShift cluster:
Other common requirements:
  • docker/podman to manage container images
  • If NFS is used as backing storage for the Persistent Volume, ensure that the version is 4.1 or later.
Note: For helm 3.2.1 and greater, no Tiller (server) is required.

A certified container deployment strictly enforces these system requirements. If any of the requirements are not met, the deployment may fail. If the deployment fails, review the deployment log for a list of non-compliant items.

Azure Kubernetes Cluster Platform

Azure Kubernetes Service (AKS) is a managed Kubernetes service designed for deploying and managing containerized applications. It requires minimal container orchestration expertise, as AKS reduces complexity and operational overhead by offloading much of the management to Azure. AKS is ideal for applications needing high availability, scalability, and portability. It supports deploying applications across multiple regions, using open-source tools, and integrating with existing DevOps tools. For more information, refer to Deploy an Azure Kubernetes Service (AKS) cluster using Azure portal.

Additionally, ensure proper roles are associated with the user to access resources. You can use the built-in roles or create your own custom roles. For more information, refer to Assign Azure roles using the Azure portal.
Note: For AKS, ensure that proper node taints are set to allow nodes to serve requests to pods.

If the node taint is set by default to critical=true:NoSchedule, the node will only serve critical pods. To allow scheduling of all pods on the nodes, use the following command:

az aks nodepool update \
  --cluster-name $CLUSTER_NAME \
  --resource-group $RESOURCE_GROUP_NAME \
  --name $NODE_POOL_NAME \
  --node-taints ""
Here, an empty node taint allows all pods to be scheduled on the nodes. You can verify the node status using the command below:
az aks nodepool list --resource-group $RESOURCE_GROUP_NAME --cluster-name $CLUSTER_NAME
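You can also inspect the taints configured on the nodes directly from the Kubernetes side, for example:
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints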

For more details, you can refer to Creating storage for Data Persistence.

Installing OpenShift Container platform

Note: This step is optional. It is only needed if the IBM Certified Container Software for Connect:Direct for UNIX deployment is planned on a Red Hat OpenShift cluster.
OpenShift container platform brings together Docker and Kubernetes and provides an API to manage these services. OpenShift Container Platform allows you to create and manage containers.

It is an on-premise platform service that uses Kubernetes to manage containers built on a foundation of Red Hat Enterprise Linux. For more information on how to setup an OpenShift container platform cluster environment, see Installing OpenShift.

Brief about IBM Certified Container Software

  • A Helm chart is organized as a collection of files inside a directory named after the chart itself. For more information, see Helm Charts.

    Example Helm Chart

    <Chart-name>/
      Chart.yaml          # A YAML file containing information about the Chart.
      LICENSE             # OPTIONAL: A plain text file containing the license for the Chart.
      README.md           # OPTIONAL: A README file.
      requirements.yaml   # OPTIONAL: A YAML file listing dependencies for the Chart.
      values.yaml         # The default configuration values for this Chart.
      charts/             # A directory containing any Charts upon which this Chart depends.
      templates/          # A directory of templates that, when combined with values, generates valid Kubernetes manifest files.
      templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes.
  • This Helm chart deploys IBM Connect:Direct for Unix on a container management platform with the following resource deployments:
      • statefulset pod <release-name>-ibm-connect-direct-0

        1 replica by default

      • configMap <release-name>-ibm-connect-direct

        This is used to provide default configuration in cd_param_file.

      • service <release-name>-ibm-connect-direct

        This is used to expose the IBM Connect:Direct services for accessing using clients.

      • service-account <release-name>-ibm-connect-direct-serviceaccount

        This service will not be created if serviceAccount.create is false.

      • persistent volume claim <release-name>-ibm-connect-direct
        Note: If the release name is longer than 15 characters, the pod name may be truncated.
  • Certified Container Software commands

    For more information on other commands and options, see Helm Commands.

    1. To install a Chart
      $ helm install
    2. To upgrade to a new release
      $ helm upgrade
    3. To rollback a release to a previous version
      $ helm rollback
    4. To delete the release from Kubernetes.
      $ helm delete
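    For illustration, a typical lifecycle of a release named cdu might look as follows (the release name, values file, and namespace below are placeholders):
      $ helm install cdu ibm-helm/ibm-connect-direct -f override-values.yaml -n my-namespace
      $ helm upgrade cdu ibm-helm/ibm-connect-direct -f override-values.yaml -n my-namespace
      $ helm rollback cdu 1 -n my-namespace
      $ helm delete cdu -n my-namespace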

Application license requirements

You must read the IBM Connect:Direct for Unix License agreement terms before deploying the software. The license number is 'L-MTAE-C5RR2U'.

To accept the license, set the license variable to true in the Helm CLI installation command. If the license variable is set to false, the deployment of IBM Certified Container Software for Connect:Direct for UNIX will not succeed. For more information, see Configuring - Understanding values.yaml.

The IBM Certified Container Software for Connect:Direct for UNIX is deployed as non-production by default. You can override this default behavior by changing the licenseType variable to prod. The licenseType value is used to annotate the IBM Certified Container Software for Connect:Direct for UNIX deployment, which is eventually used by the Licensing and Metering service tool.
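For example, to accept the license and deploy with a production license type, these values can be passed through the Helm CLI at installation (the release name, chart reference, and namespace are placeholders):

helm install <release-name> ibm-helm/ibm-connect-direct \
  --set license=true \
  --set licenseType=prod \
  -n <namespace>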

IBM Licensing and Metering service

The IBM Certified Container Software for Connect Direct UNIX has been integrated with IBM Licensing and Metering service using Operator. This service collects information about license usage of IBM Certified Container Software for Connect Direct UNIX.

You can use the `ibm-licensing-operator` to install the IBM Licensing and Metering service on any Kubernetes based cluster. License Service collects information about license usage of IBM Containerized Products. You can retrieve license usage data through a dedicated API call and generate an audit snapshot on demand, even on a cluster without an IBM Cloud Pak.

For the installation overview, see License Service deployment.

For retrieving the licensing information, see Track license usage.

Standard User Mode in IBM Connect:Direct for Unix Containers

In Standard User Mode, available in a container deployment, the Certified Container runs with standard privileges. In this mode, all the processes of IBM® Connect:Direct for UNIX will run as Standard Users (non-root users). In order to achieve this, IBM Connect:Direct for UNIX has introduced two types of user roles as described below:
  • Connect:Direct Administrator user role - The administrative Connect:Direct user who installs IBM Connect:Direct for UNIX and owns its files/directories. This user is focused on Connect:Direct administrative tasks such as configuration changes.
  • Connect:Direct Non-Administrative role- A non-administrative Connect:Direct user focused on actual file transfer, etc.

From IBM Connect:Direct for UNIX container perspective, cdadmin will have Connect:Direct Administrator user role capability and appuser will have Connect:Direct Non-Administrative user role capability. Note that these are real users in IBM Connect:Direct for UNIX container. All files/directories will be owned by cdadmin and Connect:Direct service would be started as appuser so processes (cdpmgr, statmgr, ndmsmgr, etc) will run as appuser only.

In Standard User Mode, the Pod Security Policy (PSP) and Security Context Constraints (SCC) also require fewer Linux capabilities, which brings them closer to the restricted SCC. A new templating parameter has been added to the Helm chart that controls the mode of IBM Connect:Direct for UNIX operation. The parameter name is oum.enabled and its default value is "y", which means Standard User Mode is enabled.
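For example, to deploy with Standard User Mode disabled (superuser mode), the parameter can be overridden through the Helm CLI at install or upgrade time (the release name, chart reference, and namespace are placeholders):

helm install <release-name> ibm-helm/ibm-connect-direct \
  --set oum.enabled="n" \
  -n <namespace>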

The following are the Linux capabilities granted when the container runs in superuser mode and in Standard User Mode:
  • Superuser mode - CHOWN, SETGID, SETUID, DAC_OVERRIDE, FOWNER, AUDIT_WRITE, SYS_CHROOT
  • Standard User Mode - SETGID, SETUID, DAC_OVERRIDE, AUDIT_WRITE
A few points to note for Standard User Mode in a container:
  1. The appuser password is mandatory to enable Standard User Mode. Refer to Creating secret for providing the appuser password during deployment.
  2. Admin login from IBM Connect:Direct® Web Services to an IBM Connect:Direct for UNIX node with Standard User Mode enabled is supported only through certificate-based authentication. After login, this user can perform all administrative tasks such as configuration updates. However, this user should not submit any file transfer request, as that is not supported.
  3. Login from IBM Connect:Direct® Web Services using user/password to an IBM Connect:Direct for UNIX container with Standard User Mode enabled is limited to appuser only.
  4. The basic configuration setting of Integrated File Agent should be configured to use appuser for submitting file transfer.
  5. Standard User Mode is supported only in a new installation of Certified Container Software (CCS). CCS versions lower than IBM Connect:Direct for UNIX 6.3.0 should not be upgraded with Standard User Mode enabled; they should only be upgraded with oum.enabled="n", that is, with Standard User Mode disabled.
  6. LDAP support is not available in Standard User Mode.
  7. PNODEID/SNODEID in process to be submitted is only supported as appuser. No other user can be authenticated as PNODEID/SNODEID.

Certificates files for Secure Plus

The Connect:Direct Secure Plus application provides enhanced security for IBM Connect:Direct and is available as a separate component. It uses cryptography to secure data during transmission. By default, the security protocols are TLS 1.2 and TLS 1.3.

To configure Secure Plus while installing IBM Certified Container Software for Connect:Direct for UNIX, you need certificate files. Ensure that valid certificate files are used during the deployment and keep them handy. For more information on Secure Plus and certificate files, refer to Introduction to Connect:Direct Secure Plus for UNIX and Certificate Files.

Pod Security Standard, Pod Security Policy and Security Context Constraints requirements

With Kubernetes v1.25, the Pod Security Admission (PSA) controller was introduced, replacing the older Pod Security Policy (PSP). Kubernetes defines Pod Security Standards (PSS), which can be applied at the namespace level. This helm chart is compatible with the baseline standard at the enforce security level. For more information on PSS, refer to Pod Security Standards.

So, if deployment is planned on Kubernetes v1.25 or above, use PSS. Otherwise, use a Pod Security Policy.

If planning to upgrade from older Kubernetes to v1.25 and above, refer to Kubernetes Migrate from PSP documentation to understand the migrating from PSP to the built-in PSA controller.

Both Pod Security Policy (PSP) and Security Context Constraints (SCC) are cluster-level resources that allow the administrator to control the security aspects of pods, in Kubernetes and Red Hat OpenShift clusters respectively.

Depending on the cluster environment, IBM Certified Container Software for Connect:Direct for UNIX requires a PSP or SCC to be tied to the target namespace prior to deployment. Since these are cluster-level resources, discuss this with your Cluster Administrator, who is needed to create them.

For more information, see Creating Pod Security Policy for Kubernetes cluster and Security Context Constraints for OpenShift cluster.

Hardening RedHat OpenShift Cluster

This is not a mandatory requirement for IBM CCS installation, but it is a security aspect worth understanding to make your cluster more secure.

If you are planning to deploy on an OpenShift cluster, there are certain guidelines from OpenShift. For more details, see Hardening RedHat OpenShift cluster.

Encrypting etcd data

This is not a mandatory requirement for IBM CCS installation, but it is a security aspect worth understanding to make your cluster more secure.

By default, etcd data is not encrypted in a Kubernetes/OpenShift cluster. You can enable etcd encryption for your cluster to provide an additional layer of data security (data at rest). For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. For more information, refer to Kubernetes and Red Hat OpenShift.

Running External Utilities/Tools

Most cloud providers allow their command-line tools, such as AWS CLI, Azure CLI, Google Cloud CLI, and IBM Cloud CLI, to be installed in a custom directory. You can install these tools in a specific storage location and use them in IBM Sterling Connect:Direct for UNIX Containers.

To run these tools:
  1. Install and set up the required cloud CLI in a custom directory on your storage, which can be mounted on the pod.
  2. Mount the custom directory to the pod using chart-parameterized values during deployment (install or upgrade) to ensure that the CLI tools are accessible inside the pod.
  3. When referring to the CLI inside the container, always specify the full path. For example, when using the tool in runtask or runjob in a process, specify the full path of the CLI and use it as needed.
Note: This process also applies to other utilities that support installation in a custom directory.
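As a minimal sketch only: the value names below (extraVolume, extraVolumeMounts) are assumptions for illustration, so check the chart's values.yaml for the parameters your chart version actually exposes for mounting additional storage.

# Hypothetical parameter names; adjust them to match the chart's values.yaml.
helm upgrade <release-name> ibm-helm/ibm-connect-direct \
  --reuse-values \
  --set extraVolume.name=cli-tools \
  --set extraVolume.persistentVolumeClaim.claimName=<pvc-with-cli-tools> \
  --set extraVolumeMounts.name=cli-tools \
  --set extraVolumeMounts.mountPath=/opt/cli-tools \
  -n <namespace>
# Inside a Connect:Direct process (for example, a runtask step), invoke the tool by its full path:
#   /opt/cli-tools/aws/aws --version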

Installing

After reviewing the system requirements and other planning information, you can proceed to install IBM Certified Container Software for Connect Direct UNIX.

The following tasks represent the typical task flow for performing the installation:

Setting up your registry server

To install IBM Certified Container Software for Connect:Direct for UNIX, you must have a registry server where you can host the image required for installation.

Using the existing registry server

If you have an existing registry server, you can use it, provided that it is in close proximity to the cluster where you will deploy IBM Certified Container Software for Connect:Direct for UNIX. If your registry server is not in close proximity to your cluster, you might notice performance issues. Also, before the installation, ensure that pull secrets are created in the namespace/project and are linked to the service accounts. You will need to properly manage these pull secrets. This pull secret can be set in the values.yaml file using `image.imageSecrets`.

Using Docker registry

Kubernetes does not provide a registry solution out of the box. However, you can create your own registry server and host your images. Refer to the documentation on deploying a registry server.

Setting up Namespace or project

To install IBM Certified Container Software for Connect:Direct for UNIX, you must have an existing namespace/project or create a new one if required.

You can either use an existing namespace or create a new one in a Kubernetes cluster. Similarly, you can either use an existing project or create a new one in an OpenShift cluster. A namespace or project is a cluster resource, so it can only be created by a Cluster Administrator. Refer to the following links for more details -

For Kubernetes - Namespaces

For Red Hat OpenShift - Working with projects
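For example, a Cluster Administrator can create a new namespace or project as follows (the name cd-namespace is a placeholder):

kubectl create namespace cd-namespace      # Kubernetes
oc new-project cd-namespace                # Red Hat OpenShift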

The IBM Certified Container Software for Connect:Direct for UNIX has been integrated with IBM Licensing and Metering service using Operator. You need to install this service. For more information, refer to License Service deployment without an IBM Cloud Pak.

Installing and configuring IBM Licensing and Metering service

License Service is required for monitoring and measuring license usage of IBM Certified Container Software for Connect:Direct for UNIX in accordance with the pricing rule for containerized environments. Manual license measurements are not allowed. Deploy License Service on all clusters where IBM Certified Container Software for Connect:Direct for UNIX is installed.

The IBM Certified Container Software for Connect:Direct for UNIX contains an integrated service for measuring the license usage at the cluster level for license evidence purposes.

Overview

The integrated licensing solution collects and stores the license usage information which can be used for audit purposes and for tracking license consumption in cloud environments. The solution works in the background and does not require any configuration. Only one instance of the License Service is deployed per cluster regardless of the number of containerized products that you have installed on the cluster.

Deploying License Service

Deploy License Service on each cluster where IBM Certified Container Software for Connect:Direct for UNIX is installed. License Service can be deployed on any Kubernetes-based orchestration cluster. For more information about License Service and how to install and use it, see the License Service documentation.

Validating if License Service is deployed on the cluster

To ensure license reporting continuity for license compliance purposes, make sure that License Service is successfully deployed. It is recommended to periodically verify that it is active. To validate whether License Service is deployed and running on the cluster, you can, for example, log in to the Kubernetes or Red Hat OpenShift cluster and run the following command:
For Kubernetes
kubectl get pods --all-namespaces | grep ibm-licensing | grep -v operator
For Red Hat OpenShift
oc get pods --all-namespaces | grep ibm-licensing | grep -v operator

The following response is a confirmation of successful deployment:

1/1 Running

Archiving license usage data

Remember to archive the license usage evidence before you decommission the cluster where IBM Certified Container Software for Connect:Direct for UNIX Server was deployed. Retrieve the audit snapshot for the period when IBM Certified Container Software for Connect:Direct for UNIX was on the cluster and store it in case of audit.

For more information about the licensing solution, see License Service documentation.

Downloading the Certified Container Software

Before you install IBM Certified Container Software for Connect:Direct for UNIX, ensure that the installation files are available on your client system.

Depending on the availability of internet on the cluster, the following procedures can be followed. Choose the one which applies best for your environment.

Online Cluster

A cluster that has access to the internet is called an online cluster. You may have a Kubernetes or OpenShift cluster that has access to the internet. The process to get the required installation files consists of two steps:
  1. Create the entitled registry secret: Complete the following steps to create a secret with the entitled registry key value:
    1. Ensure that you have obtained the entitlement key that is assigned to your ID.
      1. Log in to My IBM Container Software Library by using the IBM ID and password that are associated with the entitled software.
      2. In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
      3. Save the entitlement key to a safe location for later use.
        To confirm that your entitlement key is valid, click View library that is provided in the left of the page. You can view the list of products that you are entitled to. If IBM Connect:Direct for Unix is not listed, or if the View library link is disabled, it indicates that the identity with which you are logged in to the container library does not have an entitlement for IBM Connect:Direct for Unix. In this case, the entitlement key is not valid for installing the software.
      Note: For assistance with the Container Software Library (e.g. product not available in the library; problem accessing your entitlement registry key), contact MyIBM Order Support.
    2. Set the entitled registry information by completing the following steps:
      1. Log on to the machine from where the cluster is accessible
      2. export ENTITLED_REGISTRY=cp.icr.io
      3. export ENTITLED_REGISTRY_USER=cp
      4. export ENTITLED_REGISTRY_KEY=<entitlement_key>
    3. This step is optional. Log on to the entitled registry with the following docker login command:
      docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
      
    4. Create a Docker-registry secret:
       kubectl create secret docker-registry <any_name_for_the_secret> --docker-username=$ENTITLED_REGISTRY_USER --docker-password=$ENTITLED_REGISTRY_KEY --docker-server=$ENTITLED_REGISTRY -n <your namespace/project name>
      
    5. Update the service account or Helm chart image pull secret configuration using the `image.imageSecrets` parameter with the above secret name (see the example after these steps).
  2. Download the Helm chart: You can follow the steps below to download the helm chart from the repository.
    1. Make sure that the Helm client (CLI) is present on your machine. Run the helm CLI on the machine; you should see the helm CLI usage displayed.
      helm
    2. Check the ibm-helm repository in your helm CLI.
      helm repo list
      If the ibm-helm repository already exists with URL https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm, then update the local repository; otherwise, add the repository.
    3. Update the local repository, if ibm-helm repository already exists on helm CLI.
      helm repo update
    4. Add the helm chart repository to local helm CLI if it does not exist.
      helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
    5. List ibm-connect-direct helm charts available on repository.
      helm search repo -l ibm-connect-direct
    6. Download the latest helm chart.
      helm pull ibm-helm/ibm-connect-direct 
      At this point, you have the Helm chart locally and an entitled registry secret. Make sure you configure the Helm chart to use the entitled registry secret to pull the required container image when deploying the IBM Connect:Direct for UNIX chart.
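      For illustration, the entitled registry secret created in step 1 can be referenced through image.imageSecrets when installing the downloaded chart (the release name, chart file, secret name, and namespace are placeholders):
      helm install <release-name> ./ibm-connect-direct-<version>.tgz \
        --set license=true \
        --set image.imageSecrets=<any_name_for_the_secret> \
        -n <namespace>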

Offline (Airgap) Cluster

You have a Kubernetes or OpenShift cluster, but it is a private cluster, which means it does not have internet access. Depending on the cluster, follow the procedures below to get the installation files.

For Kubernetes Cluster
Since your Kubernetes cluster is private and does not have internet access, you cannot download the required installation files directly from the server. By following the steps below, you can get the required files.
  1. Get an RHEL machine that has:
    • internet access
    • Docker CLI (docker) or Podman CLI (podman)
    • kubectl
    • helm
  2. Download the Helm chart by following the steps mentioned in the Online installation section.
  3. Extract the downloaded helm chart.
    tar -zxf <ibm-connect-direct-helm chart-name>
  4. Get the container image detail:
    erRepo=$(grep -w "repository:" ibm-connect-direct/values.yaml |cut -d '"' -f 2)
    erTag=$(grep -w "tag:" ibm-connect-direct/values.yaml | cut -d '"' -f 2)
    erImgTag=$erRepo:$erTag
  5. This step is optional if you already have a docker registry running on this machine. Create a docker registry on this machine. Follow Setting up your registry server.
  6. Get the Entitled registry entitlement key by following steps a and b explained in Online Cluster under Create the entitled registry section.
  7. Get the container image downloaded in docker registry:
    docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p
    "$ENTITLED_REGISTRY_KEY"
    docker pull $erImgTag
    Note: Skip steps 8, 9, and 10 if the cluster where deployment will be performed is accessible from this machine and the cluster can fetch container images from the registry running on this machine.
  8. Save the container image.
    docker save -o <container image file name.tar> $erImgTag
  9. Copy/transfer the installation files to your cluster. At this point you have both the downloaded container image and the Helm chart for IBM Connect:Direct for UNIX. Transfer these two files to a machine from where you can access your cluster and its registry.
  10. After transferring the files, load the container image on that machine and push it to your registry, as sketched below.
    docker load -i <container image file name.tar>
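    A minimal sketch of the remaining tag-and-push, assuming your registry is reachable at <registry-host:port> (all names below are placeholders); after pushing, point the chart's image repository and tag values at this registry location:
    docker tag $erImgTag <registry-host:port>/<repository-path>:<tag>
    docker push <registry-host:port>/<repository-path>:<tag>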
For Red Hat OpenShift Cluster

If your cluster is not connected to the internet, the deployment can be done in your cluster via connected or disconnected mirroring.

If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.

Before you begin
You must complete the steps in the following sections before you begin generating mirror manifests:
Important: If you intend to install using a private container registry, your cluster must support ImageContentSourcePolicy (ICSP).
Prerequisites

Regardless of whether you plan to mirror the images with a bastion host or to the file system, you must satisfy the following prerequisites:
  • Red Hat® OpenShift® Container Platform requires you to have cluster admin access to run the deployment.
  • A Red Hat® OpenShift® Container Platform cluster must be installed.
Prepare a host

If you are in an air-gapped environment, you must be able to connect a host to the internet and the mirror registry (for connected mirroring), or mirror the images to a file system that can be brought into the restricted environment (for disconnected mirroring). For information on the latest supported operating systems, see the ibm-pak plugin install documentation.

The following table explains the software requirements for mirroring the IBM Cloud Pak images:
Table 1. Software requirements for mirroring the IBM Cloud Pak images
Software | Purpose
Docker | Container management
Podman | Container management
Red Hat OpenShift CLI (oc) | Red Hat OpenShift Container Platform administration
Complete the following steps on your host:
  1. Install Docker or Podman.
    To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    yum check-update
    yum install docker
    

    To install Podman, see Podman Installation Instructions.

  2. Install the oc Red Hat® OpenShift® Container Platform CLI tool.
  3. Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks from the IBM/ibm-pak repository. Extract the binary file by entering the following command:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
    Run the following command to move the file to the /usr/local/bin directory:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: Download the plug-in based on the host operating system. You can confirm that the plug-in is installed by running the following command:
    oc ibm-pak --help

    The plug-in usage is displayed.

    For more information on plug-in commands, see command-help.

Your host is now configured and you are ready to mirror your images.

Creating registry namespaces

Top-level namespaces are the namespaces which appear at the root path of your private registry. For example, if your registry is hosted at myregistry.com:5000, then mynamespace in myregistry.com:5000/mynamespace is defined as a top-level namespace. There can be many top-level namespaces.

When the images are mirrored to your private registry, it is required that the top-level namespace where images are getting mirrored already exists or can be automatically created during the image push. If your registry does not allow automatic creation of top-level namespaces, you must create them manually.

When you generate mirror manifests, you can specify the top-level namespace where you want to mirror the images by setting TARGET_REGISTRY to myregistry.com:5000/mynamespace which has the benefit of needing to create only one namespace mynamespace in your registry if it does not allow automatic creation of namespaces. The top-level namespaces can also be provided in the final registry by using --final-registry.

If you do not specify your own top-level namespace, the mirroring process will use the ones which are specified by the CASEs. For example, it will try to mirror the images at myregistry.com:5000/cp etc.

So if your registry does not allow automatic creation of top-level namespaces and you are not going to use your own during generation of mirror manifests, then you must create the following namespace at the root of your registry.
  • cp

There can be more top-level namespaces that you might need to create. See section on Generate mirror manifests for information on how to use the oc ibm-pak describe command to list all the top-level namespaces.

Set environment variables and download CASE files

If your host must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server.

If you are mirroring via connected mirroring, set the following environment variables on the machine that accesses the internet via the proxy server:
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port

# Example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:
Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when you finish mirroring images to your registry.
  1. Create the following environment variables with the installer image name and the version.
    export CASE_NAME=ibm-connect-direct
    export CASE_VERSION=<case-version>

    To find the CASE name and version, see IBM: Product CASE to Application Version.

  2. Connect your host to the internet.
  3. The plug-in can detect the locale of your environment and provide textual helps and messages accordingly. You can optionally set the locale by running the following command:
    oc ibm-pak config locale -l LOCALE

    where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.

  4. Configure the plug-in to download CASEs as OCI artifacts from IBM Cloud Container Registry (ICCR).
    oc ibm-pak config repo 'IBM Cloud-Pak OCI registry' -r oci:cp.icr.io/cpopen --enable
  5. Enable color output (optional with v1.4.0 and later)
    oc ibm-pak config color --enable true
  6. Download the image inventory for your IBM Cloud Pak to your host.
    Tip: If you do not specify the CASE version, it will download the latest CASE.
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    

By default, the root directory used by plug-in is ~/.ibm-pak. This means that the preceding command will download the CASE under ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root directory by setting the IBMPAK_HOME environment variable. Assuming IBMPAK_HOME is set, the preceding command will download the CASE under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.

The log files will be available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
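For example, to relocate this working directory (the path /opt/ibm-pak-home is a placeholder):

export IBMPAK_HOME=/opt/ibm-pak-home
oc ibm-pak get $CASE_NAME --version $CASE_VERSION
# The CASE is then saved under /opt/ibm-pak-home/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION
# and logs are written to /opt/ibm-pak-home/.ibm-pak/logs/oc-ibm_pak.log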

Your host is now configured and you are ready to mirror your images.

Mirroring images to your private container registry

The process of mirroring images takes the image from the internet to your host, then effectively copies that image to your private container registry. After you mirror your images, you can configure your cluster and complete air-gapped installation.

Complete the following steps to mirror your images from your host to your private container registry:
  1. Generate mirror manifests
  2. Authenticating the registry
  3. Mirror images to final location
  4. Configure the cluster
  5. Install IBM Cloud® Paks by way of Red Hat OpenShift Container Platform
Generate mirror manifests
Note:
  • If you want to install subsequent updates to your air-gapped environment, you must do a CASE get to get the image list when performing those updates. A registry namespace suffix can optionally be specified on the target registry to group mirrored images.

  • Define the environment variable $TARGET_REGISTRY by running the following command:
    export TARGET_REGISTRY=<target-registry>
    

    The <target-registry> refers to the registry (hostname and port) where your images will be mirrored to and accessed by the oc cluster. For example setting TARGET_REGISTRY to myregistry.com:5000/mynamespace will create manifests such that images will be mirrored to the top-level namespace mynamespace.

  • Run the following commands to generate mirror manifests to be used when mirroring from a bastion host (connected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       $TARGET_REGISTRY \
       --version $CASE_VERSION
    
    Example ~/.ibm-pak directory structure for connected mirroring
    The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure for connected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── YOUR-CASE-NAME
    │   │       └── YOUR-CASE-VERSION
    │   │           ├── XXXXX
    │   │           ├── XXXXX
    │   └── mirror
    │       └── YOUR-CASE-NAME
    │           └── YOUR-CASE-VERSION
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               └── images-mapping.txt
    └── logs
       └── oc-ibm_pak.log
    

    Note: A new directory ~/.ibm-pak/data/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.

    Tip: If you are using a Red Hat® Quay.io registry and need to mirror images to a specific organization in the registry, you can target that organization by specifying:
       export ORGANIZATION=<your-organization>
       oc ibm-pak generate mirror-manifests \
          $CASE_NAME \
          $TARGET_REGISTRY/$ORGANIZATION \
          --version $CASE_VERSION
    
You can also generate manifests to mirror images to an intermediate registry server, then mirroring to a final registry server. This is done by passing the final registry server as an argument to --final-registry:
   oc ibm-pak generate mirror-manifests \
      $CASE_NAME \
      $INTERMEDIATE_REGISTRY \
      --version $CASE_VERSION \
      --final-registry $FINAL_REGISTRY

In this case, in place of a single mapping file (images-mapping.txt), two mapping files are created.

  1. images-mapping-to-registry.txt
  2. images-mapping-from-registry.txt
  • Run the following commands to generate mirror manifests to be used when mirroring from a file system (disconnected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       file://local \
       --final-registry $TARGET_REGISTRY
    
    Example ~/.ibm-pak directory structure for disconnected mirroring
    The following tree shows an example of the ~/.ibm-pak directory structure for disconnected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── ibm-cp-common-services
    │   │       └── 1.9.0
    │   │           ├── XXXX
    │   │           ├── XXXX
    │   └── mirror
    │       └── ibm-cp-common-services
    │           └── 1.9.0
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               ├── images-mapping-to-filesystem.txt
    │               └── images-mapping-from-filesystem.txt
    └── logs
       └── oc-ibm_pak.log
    
    Note: A new directory ~/.ibm-pak/data/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping-to-filesystem.txt, images-mapping-from-filesystem.txt, and catalog-sources.yaml files.
Tip: Some products support the ability to generate mirror manifests only for a subset of images using the --filter argument and image grouping. The --filter argument provides the ability to customize which images are mirrored during an air-gapped installation. As an example for this functionality ibm-cloud-native-postgresql CASE can be used, which contains groups that allow mirroring specific variant of ibm-cloud-native-postgresql (Standard or Enterprise). Use the --filter argument to target a variant of ibm-cloud-native-postgresql to mirror rather than the entire library. The filtering can be applied for groups and architectures. Consider the following command:
   oc ibm-pak generate mirror-manifests \
      ibm-cloud-native-postgresql \
      file://local \
      --final-registry $TARGET_REGISTRY \
      --filter $GROUPS

The command was updated with a --filter argument. For example, for $GROUPS equal to ibmEdbStandard the mirror manifests will be generated only for the images associated with ibm-cloud-native-postgresql in its Standard variant. The resulting image group consists of images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any groups. This allows products to include common images as well as the ability to reduce the number of images that you need to mirror.

Note: You can use the following command to list all the images that will be mirrored and the publicly accessible registries from where those images will be pulled from:
   oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
Tip: The output of the preceding command will have two sections:
  1. Mirroring Details from Source to Target Registry
  2. Mirroring Details from Target to Final Registry. A connected mirroring path that does not involve an intermediate registry will only have the first section.

    Note down the Registries found subsections in the preceding command output. You will need to authenticate against those registries so that the images can be pulled and mirrored to your local registry. See the next steps on authentication. The Top level namespaces found section shows the list of namespaces under which the images will be mirrored. These namespaces should be created manually in the root path of your registry (which appears in the Destination column of the above command output) if your registry does not allow automatic creation of namespaces.

Authenticating the registry

Complete the following steps to authenticate your registries:

  1. Store authentication credentials for all source Docker registries.

    Your product might require one or more authenticated registries. The following registries require authentication:

    • cp.icr.io
    • registry.redhat.io
    • registry.access.redhat.com

    You must run the following command to configure credentials for all target registries that require authentication. Run the command separately for each registry:

    Note: The export REGISTRY_AUTH_FILE command only needs to run once.
    export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
    podman login <TARGET_REGISTRY>
    
    Important: When you log in to cp.icr.io, you must specify the user as cp and the password which is your Entitlement key from the IBM Cloud Container Registry. For example:
    podman login cp.icr.io
    Username: cp
    Password:
    Login Succeeded!
    

For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then after performing podman login, you can see that the file is populated with registry credentials.

If you use docker login, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After docker login you should export REGISTRY_AUTH_FILE to point to that location. For example in Linux you can issue the following command:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
Table 2. Directory description
Directory | Description
~/.ibm-pak/config | Stores the default configuration of the plug-in and has information about the public GitHub URL from where the cases are downloaded.
~/.ibm-pak/data/cases | This directory stores the CASE files when they are downloaded by issuing the oc ibm-pak get command.
~/.ibm-pak/data/mirror | This directory stores the image-mapping files, the ImageContentSourcePolicy manifest in image-content-source-policy.yaml, and the CatalogSource manifest in one or more catalog-sourcesXXX.yaml files. The files images-mapping-to-filesystem.txt and images-mapping-from-filesystem.txt are input to the oc image mirror command, which copies the images to the file system and from the file system to the registry respectively.
~/.ibm-pak/data/logs | This directory contains the oc-ibm_pak.log file, which captures all the logs generated by the plug-in.
Mirror images to final location

Complete the steps in this section on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster.

  1. Mirror images to the final location.

    • For mirroring from a bastion host (connected mirroring):

      Mirror images to the TARGET_REGISTRY:
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true
      

      If you generated manifests in the previous steps to mirror images to an intermediate registry server followed by a final registry server, run the following commands:

      1. Mirror images to the intermediate registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        
      2. Mirror images from the intermediate registry server to the final registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        

        The oc image mirror --help command can be run to see all the options available on the mirror command. Note that we use continue-on-error to indicate that the command should try to mirror as much as possible and continue on errors.

        oc image mirror --help
        
        Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that even if the network connection to your remote machine is lost or you close the terminal, the mirroring will continue. For example, the command below starts the mirroring process in the background and writes the log to my-mirror-progress.txt.
        nohup oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
        -a $REGISTRY_AUTH_FILE \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true > my-mirror-progress.txt  2>&1 &
        
        You can view the progress of the mirror by issuing the following command on the remote machine:
        tail -f my-mirror-progress.txt
        
    • For mirroring from a file system (disconnected mirroring):

      Mirror images to your file system:
       export IMAGE_PATH=<image-path>
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true \
         --dir "$IMAGE_PATH"
      

      The <image-path> refers to the local path to store the images. For example, if you provided file://local as input during generate mirror-manifests in the previous section, the preceding command will create a subdirectory v2/local inside the directory referred to by <image-path> and copy the images under it.

    The following command can be used to see all the options available on the mirror command. Note that continue-on-error is used to indicate that the command should try to mirror as much as possible and continue on errors.

    oc image mirror --help
    
    Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that even if you lose the network connection to your remote machine or you close the terminal, the mirroring will continue. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt.
     export IMAGE_PATH=<image-path>
     nohup oc image mirror \
       -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
       --filter-by-os '.*' \
       -a $REGISTRY_AUTH_FILE \
       --insecure \
       --skip-multiple-scopes \
       --max-per-registry=1 \
       --continue-on-error=true \
       --dir "$IMAGE_PATH" > my-mirror-progress.txt  2>&1 &
    

    You can view the progress of the mirror by issuing the following command on the remote machine:

    tail -f my-mirror-progress.txt
    
  2. For disconnected mirroring only: Continue to move the following items to your file system:

    • The <image-path> directory you specified in the previous step
    • The auth file referred by $REGISTRY_AUTH_FILE
    • ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt
  3. For disconnected mirroring only: Mirror images to the target registry from file system

    Complete the steps in this section on your file system to copy the images from the file system to the $TARGET_REGISTRY. Your file system must be connected to the target docker registry.

    Important: If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file, images-mapping-from-filesystem.txt, with the actual registry where you want to mirror the images. For example, if you want to mirror images to myregistry.com/mynamespace then replace TARGET_REGISTRY with myregistry.com/mynamespace.
    1. Run the following command to copy the images (referred in the images-mapping-from-filesystem.txt file) from the directory referred by <image-path> to the final target registry:
      export IMAGE_PATH=<image-path>
      oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
        -a $REGISTRY_AUTH_FILE \
        --from-dir "$IMAGE_PATH" \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true
Configure the cluster
  1. Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps in Updating the global cluster pull secret.

    The documented steps in the link enable your cluster to have proper authentication credentials in place to pull images from your TARGET_REGISTRY as specified in the image-content-source-policy.yaml which you will apply to your cluster in the next step.

  2. Create ImageContentSourcePolicy

    Important:
    • Before you run the command in this step, you must be logged into your OpenShift cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

      • If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in file, ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml with the actual registry where you want to mirror the images. For example, replace TARGET_REGISTRY with myregistry.com/mynamespace.

    Run the following command to create ImageContentSourcePolicy:

       oc apply -f  ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
    

    If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and restart sequentially to apply the configuration changes.

  3. Verify that the ImageContentSourcePolicy resource is created.

    oc get imageContentSourcePolicy
    
  4. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool
    
    $ oc get MachineConfigPool -w
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-53bda7041038b8007b038c08014626dc   True      False      False      3              3                   3                     0                      10d
    worker   rendered-worker-b54afa4063414a9038958c766e8109f7   True      False      False      3              3                   3                     0                      10d
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes will be updated sequentially. Wait until all MachineConfigPools are in the UPDATED=True status before proceeding.

  5. Go to the project where deployment has to be done:

    Note: You must be logged into a cluster before performing the following steps.
    export NAMESPACE=<YOUR_NAMESPACE>
    
    oc new-project $NAMESPACE
    
  6. Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
    
  7. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes will be updated sequentially. Wait until all MachineConfigPools are updated.

    At this point, your cluster is ready for IBM Connect:Direct for UNIX deployment. The Helm chart is present at ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz. Use it for deployment. Copy it to the current directory.

    cp ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz .
    Note: Replace 1.2.x with the actual chart version in the above command.
  8. Configuration required in the Helm chart: To use image mirroring in the OpenShift cluster, the Helm chart should be configured to use the digest value to refer to the container image. Set image.digest.enabled to true in the values.yaml file or pass this parameter using the Helm CLI, as shown below.
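    For example, the parameter can be passed through the Helm CLI at deployment time (the release name and chart file are placeholders):
    helm install <release-name> ./ibm-connect-direct-1.2.x.tgz \
      --set license=true \
      --set image.digest.enabled=true \
      -n $NAMESPACE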
Setting up a repeatable mirroring process

Once you complete a CASE save, you can mirror the CASE as many times as you want to. This approach allows you to mirror a specific version of the IBM Cloud Pak into development, test, and production stages using a private container registry.

Follow the steps in this section if you want to save the CASE to multiple registries (per environment) once and be able to run the CASE in the future without repeating the CASE save process.

  1. Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION which can be used as an input during the mirror manifest generation:
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    
  2. Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt:
    oc ibm-pak generate mirror-manifests \
    $CASE_NAME \
    $TARGET_REGISTRY \
    --version $CASE_VERSION
    
    Then pass the images-mapping.txt file to the oc image mirror command:
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*'  \
      -a $REGISTRY_AUTH_FILE \
      --insecure  \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    

If you want to make this repeatable across environments, you can reuse the same saved CASE cache (~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION) instead of executing a CASE save again in other environments. You do not have to worry about updated versions of dependencies being brought into the saved cache.

Applying Pod Security Standard or Creating Pod Security Policy for Kubernetes Cluster

A Pod Security Standard should be applied to the namespace if a Kubernetes cluster at v1.25 or above is used. This helm chart has been certified with the baseline Pod Security Standard at the enforce level. For more details, refer to Pod Security Standards.

  • In Kubernetes, the Pod Security Policy (PSP) control is implemented as optional (but recommended). Click here for more information on Pod Security Policy. IBM CCS requires a custom Pod Security Policy (PSP), which defines the minimum set of permissions/capabilities needed to deploy this helm chart and for the Connect Direct for Unix services to function properly. This is the recommended PSP for this chart and it can be created by the cluster administrator. The cluster administrator can either use the snippets given below or the scripts provided in the Helm chart to create the PSP and cluster role and tie them to the namespace where deployment will be performed. In both cases, the same PSP and cluster role are created. It is recommended to use the scripts in the Helm chart so that the required PSP and cluster role are created without any issue.
    Attention: If the Ordinary User Mode (OUM) feature is enabled, the PSP is slightly different. For more information, refer to the snippets below.
  • Below is the Custom PodSecurityPolicy snippet for CDU operating in Ordinary User Mode. For more information, refer to Standard User Mode in IBM Connect:Direct for Unix Containers.
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: ibm-connect-direct-psp
      labels:
        app: "ibm-connect-direct-psp"
    spec:
      privileged: false
      allowPrivilegeEscalation: true
      hostPID: false
      hostIPC: false
      hostNetwork: false
      requiredDropCapabilities:
      allowedCapabilities:
      - SETGID
      - SETUID
      - DAC_OVERRIDE
      - AUDIT_WRITE
      allowedHostPaths:
      runAsUser:
        rule: MustRunAsNonRoot
      runAsGroup:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      fsGroup:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      volumes:
      - configMap
      - downwardAPI
      - emptyDir
      - nfs
      - persistentVolumeClaim
      - projected
      - secret
      forbiddenSysctls:
      - '*'
  • Below is the Custom PodSecurityPolicy snippet for CDU operating in Super User Mode:
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: ibm-connect-direct-psp
      labels:
        app: "ibm-connect-direct-psp"
    spec:
      privileged: false
      allowPrivilegeEscalation: true
      hostPID: false
      hostIPC: false
      hostNetwork: false
      requiredDropCapabilities:
      allowedCapabilities:
      - CHOWN
      - FOWNER
      - SETGID
      - SETUID
      - DAC_OVERRIDE
      - AUDIT_WRITE
      - SYS_CHROOT
      allowedHostPaths:
      runAsUser:
        rule: MustRunAsNonRoot
      runAsGroup:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      fsGroup:
        rule: MustRunAs
        ranges:
        - min: 1
          max: 4294967294
      volumes:
      - configMap
      - downwardAPI
      - emptyDir
      - nfs
      - persistentVolumeClaim
      - projected
      - secret
      forbiddenSysctls:
      - '*'
    
  • Custom ClusterRole for the custom PodSecurityPolicy
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: "ibm-connect-direct-psp"
      labels:
        app: "ibm-connect-direct-psp"
    rules:
    - apiGroups:
      - policy
      resourceNames:
      - ibm-connect-direct-psp
      resources:
      - podsecuritypolicies
      verbs:
      - use
  • From the command line, you can run the setup scripts included in the Helm chart as cluster admin (untar the downloaded Helm chart archive).
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh <pass 0 or 1 to disable/enable OUM feature>
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh <Namespace where deployment will be performed>
    Note: If the above scripts are not executable, make them executable by running the following commands:
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh

Creating security context constraints for Red Hat OpenShift Cluster

  • The IBM Connect:Direct for Unix chart requires a SecurityContextConstraints (SCC) resource to be tied to the target namespace prior to deployment.
    Based on your organization's security policy, you may need to decide the security context constraints for your OpenShift cluster. This chart has been verified with the privileged SCC that comes with Red Hat OpenShift. For more information, please refer to this link.
    IBM CCS requires a custom SCC that defines the minimum set of permissions/capabilities needed to deploy this helm chart and for the Connect Direct for Unix services to function properly. It is based on the predefined restricted SCC with extra required privileges. This is the recommended SCC for this chart and it can be created by the cluster administrator. The cluster administrator can either use the snippets given below or the scripts provided in the Helm chart to create the SCC and cluster role and tie them to the project where deployment will be performed. In both cases, the same SCC and cluster role are created. It is recommended to use the scripts in the Helm chart so that the required SCC and cluster role are created without any issue.
    Attention: If the Standard User Mode feature is enabled, the SCC is slightly different. For more information, refer to the SCC snippets below.
  • Below is the custom SecurityContextConstraints snippet for Connect Direct for Unix operating in Standard User Mode. For more information, refer to Standard User Mode in IBM Connect:Direct for Unix Containers.
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: ibm-connect-direct-scc
      labels:  
        app: "ibm-connect-direct-scc"
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegedContainer: false
    allowPrivilegeEscalation: true
    allowedCapabilities:
    - SETUID
    - SETGID
    - DAC_OVERRIDE
    - AUDIT_WRITE
    defaultAddCapabilities: []
    defaultAllowPrivilegeEscalation: false
    forbiddenSysctls:
    - "*"
    fsGroup:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - ALL
    runAsUser:
      type: MustRunAsNonRoot
    seLinuxContext:
      type: MustRunAs
    supplementalGroups:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - nfs
    - persistentVolumeClaim
    - projected
    - secret
    priority: 0
  • Below is the Custom SecurityContextConstraints snippet for Connect Direct for Unix operating in Super User Mode.
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: ibm-connect-direct-scc
      labels:
        app: "ibm-connect-direct-scc"
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegedContainer: false
    allowPrivilegeEscalation: true
    allowedCapabilities:
    - FOWNER
    - SETUID
    - SETGID
    - DAC_OVERRIDE
    - CHOWN
    - SYS_CHROOT
    - AUDIT_WRITE
    defaultAddCapabilities: []
    defaultAllowPrivilegeEscalation: false
    forbiddenSysctls:
    - "*"
    fsGroup:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - ALL
    runAsUser:
      type: MustRunAsNonRoot
    seLinuxContext:
      type: MustRunAs
    supplementalGroups:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - nfs
    - persistentVolumeClaim
    - projected
    - secret
    priority: 0
  • Custom ClusterRole for the custom SecurityContextConstraints
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: "ibm-connect-direct-scc"
      labels:
        app: "ibm-connect-direct-scc"
    rules:
    - apiGroups:
      - security.openshift.io
      resourceNames:
      - ibm-connect-direct-scc
      resources:
      - securitycontextconstraints
      verbs:
      - use
  • From the command line, you can run the setup scripts included in the Helm chart (untar the downloaded Helm chart archive).
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh <pass 0 or 1 to disable/enable OUM feature>
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh <Project name where deployment will be performed>
    Note: If the above scripts are not executable, make them executable by running the following commands:
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh
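    For reference, a minimal sketch of the equivalent namespace-level binding on OpenShift: tying the custom ClusterRole (and hence the SCC) to the service accounts of the target project. The binding name and subject group are illustrative assumptions; the scripts above remain the recommended way to create these objects.

    oc -n <project> create rolebinding ibm-connect-direct-scc \
      --clusterrole=ibm-connect-direct-scc \
      --group=system:serviceaccounts:<project>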

Creating storage for Data Persistence

Containers are ephemeral entities; all data inside a container is lost when the container is destroyed or removed, so application data must be saved to a storage volume using a Persistent Volume. A Persistent Volume is recommended for storing Connect:Direct for UNIX application data files. A Persistent Volume (PV) is a piece of storage in the cluster that is provisioned by an administrator or a dynamic provisioner using storage classes. For more information, see the Kubernetes documentation on Persistent Volumes.
IBM Certified Container Software for CDU supports:
  • Dynamic Provisioning using storage classes
  • Pre-created Persistent Volume
  • Pre-created Persistent Volume Claim
  • The only supported access mode is `ReadWriteOnce`

Storage Class

You can create a Storage Class to support dynamic provisioning. Refer to the YAML template below for creating a Storage Class in an Azure Kubernetes Cluster and customize it as per your requirements.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-sc-fips
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  skuName: Premium_LRS
  protocol: nfs

This Storage Class uses the provisioner file.csi.azure.com, with skuName set to Premium_LRS and protocol set to nfs.

Note: The SKU is set to Premium_LRS in the YAML file because Premium SKU is required for NFS. For more information, see Storage class parameters for dynamic PersistentVolumes.

Invoke the following command to create the Storage Class:

kubectl apply -f <StorageClass yaml file>

Dynamic Provisioning

Dynamic provisioning is supported using storage classes. To enable dynamic provisioning, use the following configuration for the helm chart (see the example after this list):
  • persistence.useDynamicProvisioning- It must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
  • pvClaim.storageClassName- The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the available storage classes required by this chart.
  • secret.certSecretName- Specify the certificate secret required for the Secure Plus configuration or LDAP support. Update this parameter with a valid certificate secret. Refer to Creating secret for more information.
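Example (a minimal sketch, assuming a hypothetical storage class named managed-nfs-storage and a certificate secret named cd-cert-secret; substitute the values that apply to your cluster):

helm install my-release \
  --set license=true \
  --set persistence.useDynamicProvisioning=true \
  --set pvClaim.storageClassName=managed-nfs-storage \
  --set secret.certSecretName=cd-cert-secret \
  ...
  ibm-connect-direct-1.3.x.tgz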

Non-Dynamic Provisioning

Non-dynamic provisioning is supported using a pre-created Persistent Volume and a pre-created Persistent Volume Claim. The storage volume should contain the Connect:Direct for UNIX Secure Plus certificate files to be used for installation. Create a directory named "CDFILES" inside the mount path and place the certificate files in this directory. Similarly, the LDAP certificates should be placed in the same directory.

Using a pre-created Persistent Volume- When creating the Persistent Volume, make a note of the storage class and metadata labels; they are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the helm chart either with the --set flag or through a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value respectively.

Refer to the yaml template below for Persistent Volume creation and customize it as per your requirement. Example: Create a Persistent Volume using an NFS server
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name> 
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
Invoke the following command to create a Persistent Volume:
Kubernetes :
kubectl create -f <persistentVolume yaml file>
OpenShift:
oc create -f <persistentVolume yaml file>

Using a pre-created Persistent Volume Claim (PVC)- An existing PVC can also be used for deployment. The PV backing the PVC should contain the certificate files required for the Connect:Direct for UNIX Secure Plus or LDAP TLS configuration. The parameter for a pre-created PVC is pvClaim.existingClaimName. A valid PVC name must be passed to this parameter, otherwise the deployment fails (see the example below).
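Example (a minimal sketch; the PVC name is a placeholder):

helm install my-release \
  --set license=true \
  --set pvClaim.existingClaimName=<existing PVC name> \
  ...
  ibm-connect-direct-1.3.x.tgz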

Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. These parameters are extraVolume and extraVolumeMounts. The extra volume can be a host path or an NFS type.

The deployment mounts following configuration/resource directories on the Persistent Volume -
  • <install_dir>/work
  • <install_dir>/ndm/security
  • <install_dir>/ndm/cfg
  • <install_dir>/ndm/secure+
  • <install_dir>/process
  • <install_dir>/file_agent/config
  • <install_dir>/file_agent/log
When the deployment is upgraded or a pod is recreated in a Kubernetes based cluster, only the data in the above directories is saved/persisted on the Persistent Volume.

Setting permission on storage

When shared storage is mounted on a container, it is mounted with the same POSIX ownership and permissions present on the exported NFS directory. The mounted directories in the container may not have the correct owner and permissions needed to execute scripts/binaries or write to them. This situation can be handled as below (see the sketch after this list) -
  • Option A: The easiest, but undesirable, solution is to set open permissions on the NFS exported directories.
     chmod -R 777 <path-to-directory>
  • Option B: Alternatively, the permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, if you want to add a GID to supplementalGroups or fsGroup, it can be done using storageSecurity.supplementalGroups or storageSecurity.fsGroup.
Apart from the above recommendation, during deployment a default Connect:Direct admin user cdadmin with group cdadmin is created. The default UID and GID of cdadmin is 45678. A non-admin Connect:Direct user, appuser, is also created with default UID and GID set to 45679.
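For example, a minimal sketch of Option B on the NFS server side, assuming a hypothetical exported directory /export/cdfiles and a chosen GID of 3000:

chgrp -R 3000 /export/cdfiles
chmod -R 770 /export/cdfiles

The same GID (3000 in this sketch) would then be supplied to the chart through storageSecurity.supplementalGroups or storageSecurity.fsGroup so that the container user becomes a member of that group.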

Root Squash NFS support

A root-squash NFS share is a secured NFS share on which root privileges are reduced to those of an unprivileged user; this user is mapped to the nfsnobody or nobody user on the system. As a result, you cannot perform operations such as changing the ownership of files/directories.

The Connect:Direct for UNIX helm chart can be deployed on a root-squash NFS share. Because the files/directories mounted in the container are owned by nfsnobody or nobody, the POSIX group ID of the root-squash NFS share should be added to the statefulset's supplemental group list using storageSecurity.supplementalGroups in the values.yaml file. Similarly, if an extra NFS share is mounted, proper read/write permission can be provided to the container user through supplemental groups only.

Creating secret

Passwords are used for the KeyStore, by the Administrator to connect to the Connect:Direct server, and to decrypt certificate files.

To separate application secrets from the Helm Release, a Kubernetes secret must be created based on the examples given below and be referenced in the Helm chart as secret.secretName value.

To create Secrets using the command line, follow the steps below:
  1. Create a template file with Secret defined as described in the example below:
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret name>
    type: Opaque
    data:
      admPwd: <base64 encoded password>
      crtPwd: <base64 encoded password>
      keyPwd: <base64 encoded password>
      appUserPwd: <base64 encoded password>
    Here:
    • admPwd refers to the password that will be set for the Admin user 'cdadmin' after a successful deployment.
    • crtPwd refers to the passphrase of the identity certificate file passed in cdArgs.crtName for secure plus configuration.
    • keyPwd refers to the Key Store password.
    • appUserPwd refers to password for a non-admin Connect:Direct user. The password for this user is mandatory for IBM Connect:Direct for UNIX operating in Ordinary User Mode.
    • After the secret is created, delete the yaml file for security reasons.
    Note: Base64 encoded passwords need to be generated manually by invoking the below command:
    echo -n "<your desired password>" | base64
    Use the output of this command in the <secret yaml file>.
  2. Run the following command to create the Secret:
    Kubernetes:
    kubectl create -f <secret yaml file>
    OpenShift:
    oc create -f <secret yaml file>
    To check the secret created invoke the following command:
    kubectl get secrets

    For more details see, Secrets.

    Default Kubernetes secrets management has certain security risks as documented here, Kubernetes Security.

    Users should evaluate Kubernetes secrets management based on their enterprise policy requirements and should take steps to harden security.

  3. For dynamic provisioning, one more secret resource needs to be created for all certificates (secure plus certificates and LDAP certificates). It can be created using below example as required:
    Kubernetes:
    kubectl create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
    OpenShift:
    oc create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
    Note:
    • The secret resource name created above should be referenced in the Helm chart for dynamic provisioning using the parameter secret.certSecretName.
    • For the K8s secret object creation, ensure that the certificate files being used contain the identity certificate. Configure the parameter cdArgs.crtName with the certificate file having the appropriate file extension that corresponds to the identity certificate.

Configuring- Understanding values.yaml

The following table describes the configuration parameters listed in the values.yaml file in the Helm chart; they are used to complete the installation. Use one of the following approaches to set them:
  • Specify parameters that need to be overridden using the --set key=value[,key=value] argument at Helm install.
    Example:
    helm version 2
    
    helm install --name <release-name> \
    --set cdArgs.cport=9898 \
    ...
    ibm-connect-direct-1.3.x.tgz
    helm version 3
    
    helm install <release-name> \
    --set cdArgs.cport=9898 \
    ...
    ibm-connect-direct-1.3.x.tgz
    
  • Alternatively, provide a YAML file with values specified for configurable parameters when you install a Chart. The values.yaml file can be obtained from the helm chart itself using the following command-
    For Online Cluster
    helm inspect values ibm-helm/ibm-connect-direct > my-values.yaml
    For Offline Cluster
    helm inspect values <path to ibm-connect-direct Helm chart> > my-values.yaml
    Now, edit the parameters in my-values.yaml file and use it for installation.
    Example
    helm version 2
    helm install --name <release-name> -f my-values.yaml ... ibm-connect-direct-1.3.x.tgz
    helm version 3
    
    helm install <release-name> -f my-values.yaml ... ibm-connect-direct-1.3.x.tgz
    
  • To mount extra volumes use any of the following templates.

    For Hostpath
    extraVolumeMounts:
      - name: <name>
        mountPath: <path inside container>
    extraVolume:
      - name: <name same as name in extraVolumeMounts>
        hostPath:
          path: <path on host machine>
          type: DirectoryOrCreate
    For NFS Server
    extraVolumeMounts:
      - name: <name>
        mountPath: <path inside container>
    extraVolume:
      - name: <name same as name in extraVolumeMounts>
        nfs:
          path: <nfs data path>
          server: <server ip>

    Alternatively, this can also be done using --set flag.

    Example

    helm install --name <release-name> --set extraVolume[0].name=<name>,extraVolume[0].hostPath.path=<path on host machine>,extraVolume[0].hostPath.type="DirectoryOrCreate",extraVolumeMounts[0].name=<name same as name in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
    ...
    ibm-connect-direct-1.3.x.tgz
    OR
    
    helm install --name <release-name> --set extraVolume[0].name=<name>,extraVolume[0].nfs.path=<nfs data path>,extraVolume[0].nfs.server=<NFS server IP>, extraVolumeMounts[0].name=<name same as name in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
    ...
    ibm-connect-direct-1.3.x.tgz

    If an extra volume is mounted, make sure the container user (cdadmin/appuser) has proper read/write permission. The required permissions can be provided to the container user through supplemental groups or fsGroup as applicable. For example, if an extra NFS share owned by POSIX group ID 3535 is being mounted, then during deployment add this group ID as a supplemental group to ensure the container user is a member of this group.

Affinity

The chart provides node affinity, pod affinity and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.

Note: For the exact parameters, their values and descriptions, refer to the values.yaml file present in the helm chart itself. Untar the helm chart package to see this file inside the chart directory. An example follows below.
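For example, a minimal values.yaml sketch (assuming the chart passes these affinity values through to the pod spec unchanged; the node label and value are illustrative):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64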

Understanding LDAP deployment parameters

This section demonstrates the implementation of the PAM and SSSD configuration with Connect:Direct UNIX to authenticate external user accounts through OpenLDAP.
  • When the LDAP authentication is enabled, the container startup script automatically updates the initparam configuration to support the PAM module. The following line is added to initparam.cfg:
     ndm.pam:service=login:
  • The following default configuration file (/etc/sssd/sssd.conf) is added to the image.
    [domain/default]
    id_provider = ldap
    autofs_provider = ldap
    auth_provider = ldap
    chpass_provider = ldap
    ldap_uri = LDAP_PROTOCOL://LDAP_HOST:LDAP_PORT
    ldap_search_base = LDAP_DOMAIN
    ldap_id_use_start_tls = True
    ldap_tls_cacertdir = /etc/openldap/certs
    ldap_tls_cert = /etc/openldap/certs/LDAP_TLS_CERT_FILE
    ldap_tls_key = /etc/openldap/certs/LDAP_TLS_KEY_FILE
    cache_credentials = True
    ldap_tls_reqcert = allow
  • Description of the Certificates required for the configuration:
    • Mount certificates inside CDU Container:
      • Copy the certificates needed for LDAP configuration in the mapped directory which is used to share the Connect:Direct Unix secure plus certificates (CDFILES/cdcert directory by default).
    • DNS resolution: If TLS is enabled and hostname of LDAP server is passed as “ldap.host”, then it must be ensured that the hostname is resolved inside the container. It is the responsibility of Cluster Administrator to ensure DNS resolution inside pod's container.
    • Certificates creation and configuration: This section provides a sample way to generate the certificates:
      • LDAP_CACERT - The root and all the intermediate CA certificates needs to be copied in one file.
      • LDAP_CLIENT_CERT – The client certificate which the server must be able to validate.
      • LDAP_CLIENT_KEY – The client certificate key.
    • Use the below new parameters for LDAP configuration:
      • ldap.enabled
      • ldap.host
      • ldap.port
      • ldap.domain
      • ldap.tls
      • ldap.startTls
      • ldap.caCert
      • ldap.tlsReqcert
      • ldap.defaultBindDn
      • ldap.defaultAuthtokType
      • ldap.defaultAuthtok
      • ldap.clientValidation
      • ldap.clientCert
      • ldap.clientKey
      Note:

      The IBM Connect:Direct for UNIX container uses the sssd utility for communication with LDAP, and the connection between sssd and the LDAP server is required to be encrypted.

      TLS configuration is mandatory for user authentication which is required for file transfer using IBM Connect:Direct for UNIX.
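      A minimal sketch of enabling LDAP through the Helm CLI, using placeholder values (the host, port, search base, and certificate file name are illustrative; the CA and client certificate files are expected to be available through the mounted certificate directory or the certificate secret described above):

      helm install my-release \
        --set ldap.enabled=true \
        --set ldap.host=<LDAP server host> \
        --set ldap.port=<LDAP port> \
        --set ldap.domain=<LDAP search base> \
        --set ldap.tls=true \
        --set ldap.caCert=<LDAP CA certificate file name> \
        ...
        ibm-connect-direct-1.3.x.tgz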

Network Policy Change

Out of the box Network Policies

IBM Certified Container Software for Connect:Direct for UNIX comes with predefined network policies based on mandatory security guidelines. By default, all outbound communication is restricted, permitting only intra-cluster communication.

Out-of-the-box Egress Policies:
  1. Deny all Egress Traffic

  2. Allow Egress Traffic within the Cluster

Defining Custom Network Policy

During the deployment of the Helm chart, you have the flexibility to enable or disable network policies. If policies are enabled, a custom egress network policy is required for communication outside the cluster. Since pods are confined to intra-cluster communication by default, the snippet below illustrates how to define an egress network policy. It can serve as a reference during the Helm chart deployment:
networkPolicyEgress:
  enabled: true
  acceptNetPolChange: false
  # write your custom egress policy here for the to spec
  to: []
  #- namespaceSelector:
  #    matchLabels:
  #      name: my-label-to-match
  #  podSelector:
  #    matchLabels:
  #      app.kubernetes.io/name: "connectdirect"
  #- podSelector:
  #    matchLabels:
  #      role: server
  #- ipBlock:
  #    cidr: <IP Address>/<block size>
  #    except:
  #    - <IP Address>/<block size>
  #ports:
  #- protocol: TCP
  #  port: 1364
  #  endPort: 11364
Note:

In the latest release, a new Helm parameter, networkPolicyEgress.acceptNetPolChange, has been introduced. To proceed with the Helm chart upgrade, this parameter must be set to true. By default, it is set to false, and the upgrade won't proceed without this change.

Before this release, there was no Egress Network Policy. The new implementation might impact outbound traffic to external destinations. To mitigate this, a custom policy allowing external traffic needs to be created. Once this policy is in place, you can set the acceptNetPolChange parameter to true and proceed with the upgrade.

If you want to disable the network policy altogether, you can set networkPolicyEgress.enabled to false. Adjust these parameters based on your network and security requirements.
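For example, a minimal sketch of a filled-in custom egress policy in a values file, allowing outbound Connect:Direct traffic to one external network (the CIDR and port are placeholders; adjust them to your environment):

networkPolicyEgress:
  enabled: true
  acceptNetPolChange: true
  to:
  - ipBlock:
      cidr: <IP Address>/<block size>
  ports:
  - protocol: TCP
    port: 1364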

Below is a table containing the supported configurable parameters in the Helm chart.

Parameter Description Default Value
licenseType Specify prod or non-prod for production or non-production license type respectively prod
license License agreement. Set true to accept the license. false
env.extraEnvs Specify extra environment variable if needed  
env.timezone Timezone UTC
arch Node Architecture amd64
replicaCount Number of deployment replicas 1
image.repository Image full name including repository  
image.tag Image tag  
digest.enabled Enable/Disable digest of image to be used false
digest.value The digest value for the image  
image.imageSecrets Image pull secrets  
image.pullPolicy Image pull policy Always
upgradeCompCheck This parameter is intended to acknowledge a change in the system username within the container. Acknowledging this change is crucial before proceeding with the upgrade. false
cdArgs.nodeName Node name cdnode
cdArgs.crtName Certificate file name  
cdArgs.localCertLabel Specify certificate import label in keystore Client-API
cdArgs.cport Client Port 1363
cdArgs.sport Server Port 1364
saclConfig Configuration for SACL n
cdArgs.configDir Directory for storing Connect:Direct configuration files CDFILES
oum.enabled Enable/Disable Ordinary User Mode feature y
storageSecurity.fsGroup Group ID for File System Group 45678
storageSecurity.supplementalGroups Group ID for Supplemental group 65534
persistence.enabled To use persistent volume true
pvClaim.existingClaimName Provide name of existing PV claim to be used  
persistence.useDynamicProvisioning To use storage classes to dynamically create PV false
pvClaim.accessMode Access mode for PV Claim ReadWriteOnce
pvClaim.storageClassName Storage class of the PVC  
pvClaim.selector.label PV label key to bind this PVC  
pvClaim.selector.value PV label value to bind this PVC  
pvClaim.size Size of PVC volume 100Mi
service.type Kubernetes service type exposing ports LoadBalancer
service.apiport.name API port name api
service.apiport.port API port number 1363
service.apiport.protocol Protocol for service TCP
service.ftport.name Server (File Transfer) Port name ft
service.ftport.port Server (File Transfer) Port number 1364
service.ftport.protocol Protocol for service TCP
service.loadBalancerIP Provide the LoadBalancer IP  
service.loadBalancerSourceRanges Provide Load Balancer Source IP ranges []
service.annotations Provide the annotations for service {}
service.externalTrafficPolicy Specify if external Traffic policy is needed  
service.sessionAffinity Specify session affinity type ClientIP
service.externalIP External IP for service discovery []
networkPolicyIngress.enabled Enable/Disable the ingress policy true
networkPolicyIngress.from Provide from specification for network policy for ingress traffic []
networkPolicyEgress.enabled Enable/Disable egress policy true
networkPolicyEgress.acceptNetPolChange This parameter is to acknowledge the Egress network policy introduction false
secret.certSecretName Name of secret resource of certificate files for dynamic provisioning  
secret.secretName Secret name for Connect:Direct password store  
resources.limits.cpu Container CPU limit 500m
resources.limits.memory Container memory limit 2000Mi
resources.limits.ephemeral-storage Specify ephemeral storage limit size for pod's container "5Gi"
resources.requests.cpu Container CPU requested 500m
resources.requests.memory Container Memory requested 2000Mi
resources.requests.ephemeral-storage Specify ephemeral storage request size for pod's container "3Gi"
serviceAccount.create Enable/disable service account creation true
serviceAccount.name Name of Service Account to use for container  
extraVolumeMounts Extra Volume mounts  
extraVolume Extra volumes  
affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution
affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution
affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution
startupProbe.initialDelaySeconds Initial delay for startup probe 5
startupProbe.timeoutSeconds Timeout for startup probe 5
startupProbe.periodSeconds Time period between startup probes 15
livenessProbe.initialDelaySeconds Initial delay for liveness 45
livenessProbe.timeoutSeconds Timeout for liveness 5
livenessProbe.periodSeconds Time period for liveness 10
readinessProbe.initialDelaySeconds Initial delays for readiness 3
readinessProbe.timeoutSeconds Timeout for readiness 5
readinessProbe.periodSeconds Time period for readiness 10
route.enabled Route for OpenShift Enabled/Disabled false
ldap.enabled Enable/Disable LDAP configuration false
ldap.host LDAP server host  
ldap.port LDAP port  
ldap.domain LDAP Domain  
ldap.tls Enable/Disable LDAP TLS false
ldap.startTls Specify true/false for ldap_id_use_start_tls true
ldap.caCert LDAP CA Certificate name  
ldap.tlsReqcert Specify valid value - never, allow, try, demand, hard never
ldap.defaultBindDn Specify bind DN  
ldap.defaultAuthtokType Specify type of the authentication token of the default bind DN  
ldap.defaultAuthtok Specify authentication token of the default bind DN. Only clear text passwords are currently supported  
ldap.clientValidation Enable/Disable LDAP Client Validation false
ldap.clientCert LDAP Client Certificate name  
ldap.clientKey LDAP Client Certificate key name  
extraLabels Provide extra labels for all resources of this chart {}
cdfa.fileAgentEnable Specify y/n to Enable/Disable File Agent n

Installing IBM Connect:Direct for Unix using Helm chart

After completing all pre-installation tasks (cdu_pre_installation_tasks.html), you can deploy the IBM Certified Container Software for Connect:Direct for UNIX by invoking the following command:
Helm version 2

helm install --name my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.3.x.tgz
or
helm install --name my-release ibm-connect-direct-1.3.x.tgz -f my-values.yaml
Helm version 3

helm install my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.3.x.tgz
or
helm install my-release ibm-connect-direct-1.3.x.tgz -f my-values.yaml

This command deploys ibm-connect-direct-1.3.x.tgz chart on the Kubernetes cluster using the default configuration. Creating storage for Data Persistence lists parameters that can be configured at deployment.

Mandatory parameters required at the helm install command:
Parameter Description Default Value
license License agreement for IBM Certified Container Software false
image.repository Image full name including repository  
image.tag Image tag  
cdArgs.crtName Key Certificate file name  
image.imageSecrets Image pull secrets  
secret.secretName Secret name for Connect:Direct password store  

Validating the Installation

After the deployment procedure is complete, you should validate the deployment to ensure that everything is working according to your needs. The deployment may take approximately 4-5 minutes to complete.

To validate that the Certified Container Software deployment using Helm charts is successful, invoke the following commands and verify the status for a Helm chart with release my-release and namespace my-namespace.
  • Check the Helm chart release status by invoking the following command and verify that the STATUS is DEPLOYED:
    helm status my-release
  • Wait for the pod to be ready. To verify the pods status (READY) use the dashboard or through the command line interface by invoking the following command:
    kubectl get pods -l release=my-release -n my-namespace -o wide
  • To view the service and ports exposed to enable communication in a pod invoke the following command:
    kubectl get svc -l release=my-release -n my-namespace -o wide

    The screen output displays the external IP and the exposed ports under the EXTERNAL-IP and PORT(S) columns respectively. If an external LoadBalancer is not present, use the Master node IP as the external IP.

Exposed Services

If required, this chart can create a service of type ClusterIP for communication within the cluster. The service type can be changed while installing the chart using the service.type key defined in values.yaml. There are two ports on which IBM Connect:Direct processes run: the API port (1363) and the FT port (1364). Their values can be updated during chart installation using service.apiport.port and service.ftport.port.

IBM Connect:Direct for Unix services for API and file transfer can be accessed using the LoadBalancer or external IP and the mapped API and FT ports. If an external LoadBalancer is not present, refer to the Master node IP for communication.
Note: NodePort service type is not recommended. It exposes additional security concerns and is hard to manage from both an application and networking infrastructure perspective.

DIME and DARE Security Considerations

This topic provides security recommendations for setting up Data In Motion Encryption (DIME) and Data At Rest Encryption (DARE). It is intended to help you create a secure implementation of the application.

  1. All sensitive application data at rest is stored in binary format, so a user cannot decrypt it. This chart does not support encryption of user data at rest by default. The administrator can configure storage encryption to encrypt all data at rest.
  2. Data in motion is encrypted using transport layer security (TLS 1.3). For more information see, Secure Plus.

Post-installation tasks

The post deployment configuration steps can be performed via:

• Connect Direct Web Services

  • Login to the Connect Direct Web services using the Load Balancer or External IP address and the port to which container API port (1363) is mapped. For configuration steps, see Connect:Direct Web Services Help Videos.
  • Issue the following command to get the external IP address
    kubectl get svc

Certificate based authentication in Connect Direct Web Services

While connecting to a Connect:Direct server with Ordinary User Mode enabled, use certificate based authentication. It authenticates IBM Sterling Connect:Direct Web Services requests to connect to IBM Sterling Connect:Direct for Unix using certificates. The steps to enable certificate based login are divided into the two sections below, Certificate Configuration Connect:Direct Web Services/Connect:Direct Unix and Browser's Settings for Certificate Configuration.

Certificate Configuration Connect:Direct Web Services/Connect:Direct Unix:

  1. Login as Admin user into IBM Sterling Connect:Direct Web Services and create a node entry for IBM Sterling Connect:Direct for Unix.
  2. A CA certificate is required; this could be the same certificate which IBM Sterling Connect:Direct for Unix might be using for authenticating other IBM Sterling Connect:Direct for Unix nodes in file transfer. If the CA certificate is chained, separate the chain into individual certificates and import them to the IBM Sterling Connect:Direct Web Services trust store individually.
  3. A key certificate should be generated using the command below:
    openssl req -x509 -sha256 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365
    Note: When entering the certificate details, enter the hostname as the common name.
  4. Combine the key.pem and cert.pem to create a single certificate (see the command sketch after this list) and import it to the IBM Sterling Connect:Direct Web Services key store. Then, import cert.pem to the IBM Sterling Connect:Direct for Unix node.
  5. If there is a key certificate used as the KeyCertLabel in the Secure Plus configuration, that certificate can be separated into two parts, the certificate part and the key part, namely cert_R.pem and key_R.pem.
  6. Create a fingerprint with the cert_R.pem file using below command:
    openssl x509 -noout -fingerprint -sha256 -inform pem -in cert_R.pem
  7. In the IBM Sterling Connect:Direct Web Services application properties file, configure the SSL alias by setting server.ssl.key-alias=combined.pem, configure the certificate fingerprint by setting certificate.finger.print=<Certificate Label>;<Fingerprint>, and save the application properties file.
  8. Go to the server and open the userfile.cfg file; create an admin user record with client authentication set to y and the user name set to the hostname of the machine.
  9. Restart IBM Sterling Connect:Direct Web Services and refresh the IBM Sterling Connect:Direct Web Services link to see the certificate based authentication menu pop-up and select the node to login to it.
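For step 4, one simple way to combine the key and certificate into a single PEM file (a sketch, assuming plain PEM concatenation is acceptable to the Web Services key store import; the file name matches the alias referenced in step 7):

cat key.pem cert.pem > combined.pem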

Browser's Settings for Certificate Configuration:

For certificate-based authentication from the UI, the user needs to add their configured key certificate to the browser's personal certificates. Since Google Chrome and Microsoft Edge use the Microsoft keystore, you can also add a certificate to the Microsoft keystore directly.

Follow the steps below to add the key certificate used in the previous configuration to the Personal Certificates store of the browser keystore.
  1. Using Google Chrome browser:
    1. Go to Settings > Privacy & Security > Security, under Advanced, click Manage device certificates to display Certificates window.
    2. Click Import to open Certificate Import Wizard and click Next.
    3. Follow the Certificate Import Wizard to import the certificate.
    4. After successful import, restart the browser.
  2. Using Mozilla Firefox:
    1. Go to Settings > Security, under Certificates, click View Certificates to display Certificate Manager window.
    2. Click Import, select an appropriate key certificate file and enter the password for the same to import the certificate.
    3. Restart the browser after successfully importing the certificate.
  3. Using Microsoft Edge:
    1. Go to Settings > Privacy, search, and services, under Manage Certificates, click View Certificates to display Certificate Manager window.
    2. Click Import to open Certificate Import Wizard.
    3. Follow the Certificate Import Wizard to import the certificate.
    4. After successful import, restart the browser.
  4. After configuring the client certificate, enter the CDWS URL in the address bar and press Enter. A Select a certificate pop-up is displayed showing the client certificate; select the appropriate client certificate.
  5. After selecting the appropriate client certificate, a Select a Node pop-up is displayed. The selected node must be configured for certificate-based authentication.

• Attaching to the container

Follow the steps given below:
  • Issue the following command to get the pod name
    kubectl get pods -n <namespace>
  • Issue the following command to attach to the container
    kubectl exec -it <pod name> -- bash

• Restarting Connect:Direct services inside container

  • Sometimes, certain configuration changes in IBM Connect:Direct for UNIX require its services to be restarted. To restart the Connect:Direct service in the container, access the container terminal either from the command line or the OCP dashboard. Inside the container, run the below command:
    touch /cdinstall/.cdrecycle
  • After creating this file in the container, the CDPMGR service can be stopped by logging in to the direct prompt using the below command:
    /opt/cdunix/ndm/bin/direct
  • Then type stop; and press the Enter key. The Connect:Direct service is stopped inside the container.
  • Verify the changes and then restart CDPMGR process using the below command:
    /opt/cdunix/ndm/bin/cdpmgr -i /opt/cdunix/ndm/cfg/<nodename>/initparm.cfg
    Note: Pass Connect:Direct Nodename of the container in the above command.
  • Consider the liveness and readiness probe configuration before stopping the CDPMGR service. If the Connect:Direct service remains unavailable beyond the liveness and readiness thresholds, the pod is restarted.
  • After Connect:Direct service is restarted, delete the created file using the below command:
    rm -f /cdinstall/.cdrecycle

Upgrade, Rollback, and Uninstall

After deploying IBM certified container software for Connect:Direct for UNIX, you can perform following actions:
  • Upgrade, when you wish to move to a new release
  • Rollback, when you wish to recover the previous release version in case of failure
  • Uninstall, when you wish to uninstall

Upgrade – Upgrading a Release

To upgrade the chart, ensure that the pre-installation task requirements are in place on the cluster (Kubernetes or OpenShift). The following must be considered while reviewing the pre-installation task requirements for an upgrade:
  1. The upgrade takes a backup of configuration data on the Persistent Volume. Ensure that you have sufficient space available to accommodate both the backup and the running IBM Connect:Direct for UNIX data. A copy of the backup is kept on the Persistent Volume to enable rollback in case of upgrade failures.
    Example: The default minimum Persistent Volume size requirement for a new deployment is 100Mi. You can simply double it for an upgrade, i.e. 200Mi.
  2. Re-run the PodSecurityPolicy/SecurityContextConstraints scripts to ensure that any new changes are in place in the namespace/project on the Kubernetes/OpenShift cluster respectively.
  3. Check that the C:D secrets are still valid and available on the cluster.
  4. Depending on the accessibility of the public internet on the cluster, the upgrade procedure can be an online upgrade or an offline upgrade.

Upgrade Consideration

In IBM Certified Container Software for Connect:Direct for UNIX v1.3.2, an important update has been made to enhance clarity and relevance.

For System Username Change

The existing system user, previously named cduser, has been renamed to the more meaningful user named cdadmin. The role of this user remains the same, serving as the Connect:Direct Administrative user.
Important: It's important to note that there is no change to the appuser, and it will continue to exist with its current configuration within the container.

To acknowledge this change, a new helm parameter named upgradeCompCheck has been introduced; its default value is false. Older helm releases cannot upgrade to this version without setting this parameter to true, which confirms that you have read this section of the document and understood the changes introduced as part of this release (see the example below).
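For example, a minimal sketch of acknowledging the change during the upgrade (assuming the release name my-release and a values file my-values.yaml, as used elsewhere in this document):

helm upgrade my-release ibm-connect-direct-1.3.x.tgz \
  --set upgradeCompCheck=true \
  -f my-values.yaml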

After an upgrade from previous versions of IBM Certified Container Software for Connect:Direct for UNIX, cduser (the older user) no longer exists in the container. So, consider the following:
  1. User Record Update:
    During the upgrade, a user record for cdadmin will be created in the userfile.cfg. Existing cduser records will remain unmodified. Post a successful upgrade and testing, the Connect:Direct admin can delete the obsolete cduser records.
  2. Local ID Update:
    The Connect:Direct admin should manually update any user records where the local.id is set to cduser to reflect the new user, cdadmin.
  3. Script and Client Updates:
    Any scripts, Connect:Direct clients, or configurations using cduser for file transfers should be modified to use the new user, cdadmin.

Network Policy Updates:

To update network polices, refer to https://kubernetes.io/docs/concepts/services-networking/network-policies/ and Network Policy Change.

Online upgrade

You have access to the public internet on the cluster, and therefore to the Entitled registry and the IBM public GitHub repository. Follow these steps to upgrade the chart with release name my-release:
  1. Update the local repo:
    helm repo update
  2. Download the newer helm chart:
    helm pull ibm-helm/ibm-connect-direct
    The helm chart is pulled into the current directory.
  3. Untar the chart and run the PodSecurityPolicy/SecurityContextConstraints scripts to ensure any new required change is in-place on the cluster. Refer Applying Pod Security Standard or Creating Pod Security Policy for Kubernetes Cluster and Creating security context constraints for Red Hat OpenShift Cluster as applicable on the cluster.
  4. Upgrade the chart.
    helm upgrade my-release ibm-connect-direct-1.3.x.tgz -f myvalues.yaml

Offline upgrade

You do not have access to the public internet on the cluster, so the Entitled registry and the public IBM GitHub repository cannot be accessed from the cluster. You need to follow the Offline (Airgap) Cluster procedure to get the installation files. Follow these steps to upgrade the chart with release name my-release:
  1. After you have the installation files, go inside the charts directory used for downloading the files.
    cd <download directory>/charts
    Download directory is directory where the installation files have been downloaded.
  2. Untar the chart and run the PodSecurityPolicy/SecurityContextConstraints scripts to ensure any new required change is in-place on the cluster. Refer Applying Pod Security Standard or Creating Pod Security Policy for Kubernetes Cluster and Creating security context constraints for Red Hat OpenShift Cluster as applicable for the cluster.
  3. Upgrade the chart:
    helm upgrade my-release ibm-connect-direct-1.3.x.tgz -f myvalues.yaml
    Refer steps mentioned in Validating the Installation for validating the upgrade.
Note: For both online and offline processes:
  • Do not change/update the values of the Connect:Direct configuration parameters.
  • If any new parameters are introduced in the new chart and you are upgrading using the new chart, then all those new parameters should be passed either with the "--set" option or using a yaml file with the "-f" option. The parameters can keep the default values specified in the new chart, or you can change the values as per your configuration requirements.
  • For a root squash NFS deployment with a custom UID/GID, for example 1010/1010, also add 1010 to the supplemental groups list while upgrading to the 1.3.x helm chart. Do not delete the supplemental group already present in the values.yaml file; just add 1010 to its list. Then, trigger the helm upgrade.
  • The Ordinary User Mode (OUM) feature is not available in older releases of IBM Sterling Connect:Direct. So, when upgrading from older releases, disable this feature by setting oum.enabled="n" in the values.yaml file to avoid unexpected behavior.

Rollback – Recovering on Failure

Procedure

  1. To rollback a chart with release name my-release to a previous revision invoke the following command:
    helm rollback my-release <previous revision number>
  2. To get the revision number execute the following command:
    helm history my-release
    Note: The rollback is only supported to a previous release. Subsequent rollbacks are not supported.

    Rollback from Connect Direct Unix v6.1 is only supported if it was upgraded from Connect Direct Unix 6.0 iFix026 and later releases.

Uninstall – Uninstalling a Release

To uninstall/delete a Chart with release name my-release invoke the following command:
Helm version 2
helm delete --purge my-release
Helm version 3
helm delete my-release
Note:
This command removes all the Kubernetes components associated with the chart and deletes the release. Certain Kubernetes resources created as an installation prerequisite for the chart, and a helm hook, i.e. a ConfigMap, are not deleted by the helm delete command. Delete these resources only if they are not required for further deployments of IBM Certified Container Software for Connect:Direct UNIX. If deletion is required, you have to manually delete the following resources:
  • The persistent volume
  • The secret
  • The Config Map

Known Limitations

  • Scalability is supported with the conventional Connect:Direct for Unix release with load balancer service.
  • High availability can be achieved in an orchestrated environment.
  • IBM Connect:Direct for Unix chart is supported with only 1 replica count.
  • IBM Connect:Direct for Unix chart supports only the x64 architecture.
  • Adding extra system users at runtime is not supported.
  • Interaction with IBM Control Center Director is not supported.
  • The container does not include X-Windows support, so SPAdmin cannot run inside the container. SPCli will run in the container and Connect Direct Web Services (CDWS) can be used outside the container in place of using SPAdmin.
  • Extension of Connect:Direct containers by customers is not supported. If you want any modifications or updates to the containers you need to raise an enhancement request.

Migrating to Connect:Direct for UNIX using Certified Container Software

Follow the steps given below to create a backup and restore IBM Connect:Direct for Unix using Certified Container Software:
  1. Create a backup
    To create a backup of configuration data and other information such as stats and TCQ, present in the persistent volume, follow the steps given below:
    1. Go to mount path of Persistent Volume.
    2. Make copy of the following directories and store them at a secured location:
      • WORK
      • SECURITY
      • CFG
      • SECPLUS
      • PROCESS
      • FACONFIG
      • FALOG
        Note:
        • Update the values in initparm.cfg, netmap.cfg and ndmapi.cfg for the C:D installation path, hostname/IP and port numbers. The installation path is `/opt/cdunix/`, the hostname would be <helm-release-name>-ibm-connect-direct-0, the client port is 1363 and the server port is 1364.
        • The nodename of the Connect:Direct for Unix instance running on the traditional system should remain the same while migrating it inside the container.

        • If Connect:Direct for UNIX is installed in a conventional mode, create a backup of the following directories:
          • <install_dir>/work
          • <install_dir>/ndm/security
          • <install_dir>/ndm/cfg
          • <install_dir>/ndm/secure+
          • <install_dir>/process
          • <install_dir>/file_agent/config
          • <install_dir>/file_agent/log
            A file must be created in a work directory before taking backup. The file can be created by invoking the following command:
            <path to cdunix install directory>/etc/cdver > <path to cdunix install directory>/work/saved_cdunix_version
  2. Restore the data in a new deployment
    To restore data in a new deployment, follow the steps given below:
    1. Create a Persistent Volume.
    2. Copy all the backed-up directories to the mount path of Persistent Volume.
  3. For other prerequisites such as secrets, see cdu_pre_installation_tasks.html.
  4. Upgrade to Certified Container Software
    Create a new instance of chart using the following helm CLI command:
    Helm version 2
    helm install --name <release-name> --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.3.x.tgz
    Helm version 3
    helm install <release-name> --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.3.x.tgz
Note: The nodename of Connect:Direct for Unix running on traditional system should be same for migration to IBM Certified Container Software. Configure the nodename using "cdArgs.nodeName" parameter.

Migration from Helm 2 to Helm 3

This migration only applies if you have IBM Certified Container Software instances/releases that are running on Helm 2 and you want to move to a Helm 3 release. It is the responsibility of the Cluster Administrator to decide on the migration strategy according to the deployment requirements and the specific use case scenario. Helm provides detailed and exhaustive documentation on migration; refer to the Helm documentation.