Installing

After reviewing the system requirements and other planning information, you can proceed to install IBM Certified Container Software for Connect:Direct for UNIX.

The following tasks represent the typical task flow for performing the installation:

Setting up your registry server

To install IBM Certified Container Software for Connect:Direct for UNIX, you must have a registry server where you can host the image required for installation.

Using the existing registry server

If you have an existing registry server, you can use it, provided that it is in close proximity to the cluster where you will deploy IBM Certified Container Software for Connect:Direct for UNIX. If your registry server is not in close proximity to your cluster, you might notice performance issues. Also, before the installation, ensure that pull secrets are created in the namespace/project and are linked to the service accounts. You must manage these pull secrets yourself. The pull secret can be specified in the values.yaml file using the `image.imageSecrets` parameter.
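For illustration only, a minimal values.yaml fragment that references an existing pull secret might look like the following sketch. The secret name my-registry-secret is an assumption, and the exact value format should be confirmed against the chart's values.yaml:

  image:
    imageSecrets: "my-registry-secret"   # name of the pull secret created in your namespace/project (assumed)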

Using Docker registry

Kubernetes does not provide a registry solution out of the box. However, you can create your own registry server and host your images. Refer to the documentation on deploying a registry server.
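As a minimal illustration only, a basic private registry can be started with the standard registry image; a production registry should additionally be secured with TLS and authentication:

  # Run a local Docker registry on port 5000 (sketch for testing; add TLS/auth for production)
  docker run -d -p 5000:5000 --restart=always --name registry registry:2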

Setting up Namespace or project

To install IBM Certified Container Software for Connect:Direct for UNIX, you must have an existing namespace/project or create a new one if required.

You can either use an existing namespace or create a new one in a Kubernetes cluster. Similarly, you can either use an existing project or create a new one in an OpenShift cluster. A namespace or project is a cluster resource, so it can only be created by a cluster administrator. Refer to the following links for more details:

For Kubernetes - Namespaces

For Red Hat OpenShift - Working with projects
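For example, assuming your cluster administrator creates a new namespace or project named connect-direct (the name is an assumption), the commands would look like this:

  # Kubernetes
  kubectl create namespace connect-direct
  # Red Hat OpenShift
  oc new-project connect-direct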

The IBM Certified Container Software for Connect:Direct for UNIX is integrated with the IBM Licensing and Metering service by using an Operator. You need to install this service. For more information, refer to License Service deployment without an IBM Cloud Pak.

Installing and configuring IBM Licensing and Metering service

License Service is required for monitoring and measuring license usage of IBM Certified Container Software for Connect:Direct for UNIX in accordance with the pricing rule for containerized environments. Manual license measurements are not allowed. Deploy License Service on all clusters where IBM Certified Container Software for Connect:Direct for UNIX is installed.

The IBM Certified Container Software for Connect:Direct for UNIX contains an integrated service for measuring the license usage at the cluster level for license evidence purposes.

Overview

The integrated licensing solution collects and stores the license usage information which can be used for audit purposes and for tracking license consumption in cloud environments. The solution works in the background and does not require any configuration. Only one instance of the License Service is deployed per cluster regardless of the number of containerized products that you have installed on the cluster.

Deploying License Service

Deploy License Service on each cluster where IBM Certified Container Software for Connect:Direct for UNIX is installed. License Service can be deployed on any Kubernetes-based orchestration cluster. For more information about License Service, including how to install and use it, see the License Service documentation.

Validating if License Service is deployed on the cluster

To ensure license reporting continuity for license compliance purposes, make sure that License Service is successfully deployed, and periodically verify that it is active. To validate whether License Service is deployed and running on the cluster, log in to the Kubernetes or Red Hat OpenShift cluster and run the following command:
For Kubernetes
kubectl get pods --all-namespaces | grep ibm-licensing | grep -v operator
For Red Hat OpenShift
oc get pods --all-namespaces | grep ibm-licensing | grep -v operator

The following response is a confirmation of successful deployment:

1/1 Running

Archiving license usage data

Remember to archive the license usage evidence before you decommission the cluster where IBM Certified Container Software for Connect:Direct for UNIX Server was deployed. Retrieve the audit snapshot for the period when IBM Certified Container Software for Connect:Direct for UNIX was on the cluster and store it in case of audit.

For more information about the licensing solution, see License Service documentation.

Downloading the Certified Container Software

Before you install IBM Certified Container Software for Connect:Direct for UNIX, ensure that the installation files are available on your client system.

Depending on the availability of internet access on the cluster, follow one of the following procedures. Choose the one that best applies to your environment.

Online Cluster

A cluster that has access to the internet is called an online cluster. If your Kubernetes or OpenShift cluster has internet access, the process to get the required installation files consists of two steps:
  1. Create the entitled registry secret: Complete the following steps to create a secret with the entitled registry key value:
    1. Ensure that you have obtained the entitlement key that is assigned to your ID.
      1. Log in to My IBM Container Software Library by using the IBM ID and password that are associated with the entitled software.
      2. In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
      3. Save the entitlement key to a safe location for later use.
        To confirm that your entitlement key is valid, click View library on the left of the page. You can view the list of products that you are entitled to. If IBM Connect:Direct for Unix is not listed, or if the View library link is disabled, the identity with which you are logged in to the container library does not have an entitlement for IBM Connect:Direct for Unix. In this case, the entitlement key is not valid for installing the software.
      Note: For assistance with the Container Software Library (e.g. product not available in the library; problem accessing your entitlement registry key), contact MyIBM Order Support.
    2. Set the entitled registry information by completing the following steps:
      1. Log on to a machine from which the cluster is accessible
      2. export ENTITLED_REGISTRY=cp.icr.io
      3. export ENTITLED_REGISTRY_USER=cp
      4. export ENTITLED_REGISTRY_KEY=<entitlement_key>
    3. Optional: Log in to the entitled registry with the following docker login command:
      docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
      
    4. Create a Docker-registry secret:
       kubectl create secret docker-registry <any_name_for_the_secret> --docker-username=$ENTITLED_REGISTRY_USER --docker-password=$ENTITLED_REGISTRY_KEY --docker-server=$ENTITLED_REGISTRY -n <your namespace/project name>
      
    5. Update the service account or the Helm chart image pull secret configuration by setting the `image.imageSecrets` parameter to the secret name created above.
  2. Download the Helm chart: You can follow the steps below to download the helm chart from the repository.
    1. Make sure that the Helm client (CLI) is present on your machine. Run the helm CLI on the machine; you should see the helm CLI usage output.
      helm
    2. Check the ibm-helm repository in your helm CLI.
      helm repo list
      If the ibm-helm repository already exists with the URL https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm, update the local repository; otherwise, add the repository.
    3. Update the local repository if the ibm-helm repository already exists in the helm CLI.
      helm repo update
    4. Add the helm chart repository to the local helm CLI if it does not exist.
      helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
    5. List the ibm-connect-direct helm charts available in the repository.
      helm search repo -l ibm-connect-direct
    6. Download the latest helm chart.
      helm pull ibm-helm/ibm-connect-direct 
      At this point, you have a local copy of the Helm chart and an entitled registry secret. Make sure you configure the Helm chart to use the entitled registry secret to download the required container image when deploying the IBM Connect:Direct for UNIX chart, as sketched in the example after this procedure.
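For illustration only, a deployment command that references the entitled registry secret might look like the following sketch; the release name, namespace, exact chart file name, and any additional mandatory chart parameters (for example, license acceptance or storage settings) are assumptions and must be adapted to your environment:

  helm install my-cdu-release ibm-connect-direct-1.2.x.tgz \
    --set image.imageSecrets=<entitled_registry_secret_name> \
    -n <your namespace/project name>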

Offline (Airgap) Cluster

You have a Kubernetes or OpenShift cluster, but it is a private cluster, which means it does not have internet access. Depending on the cluster type, follow the procedures below to get the installation files.

For Kubernetes Cluster

Because your Kubernetes cluster is private and does not have internet access, you cannot download the required installation files directly from the server. By following the steps below, you can get the required files.
  1. Get an RHEL machine that has:
    • internet access
    • Docker CLI (docker) or Podman CLI (podman)
    • kubectl
    • helm
  2. Download the Helm chart by following the steps mentioned in the Online installation section.
  3. Extract the downloaded helm chart.
    tar -zxf <ibm-connect-direct-helm chart-name>
  4. Get the container image detail:
    erRepo=$(grep -w "repository:" ibm-connect-direct/values.yaml |cut -d '"' -f 2)
    erTag=$(grep -w "tag:" ibm-connect-direct/values.yaml | cut -d '"' -f 2)
    erImgTag=$erRepo:$erTag
  5. This step is optional if you already have a Docker registry running on this machine. Otherwise, create a Docker registry on this machine by following Setting up your registry server.
  6. Get the Entitled registry entitlement key by following steps a and b explained in Online Cluster under Create the entitled registry section.
  7. Get the container image downloaded in docker registry:
    docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
    docker pull $erImgTag
    Note: Skip steps 8, 9, and 10 if the cluster where the deployment will be performed is accessible from this machine and the cluster can fetch container images from the registry running on this machine.
  8. Save the container image.
    docker save -o <container image file name.tar> $erImgTag
  9. Copy/Transfer the installation files to your cluster. At this point, you have both the downloaded container image and the Helm chart for IBM Connect:Direct for UNIX. Transfer these two files to a machine from which you can access your cluster and its registry.
  10. After transferring the files, load the container image on that machine and make it available in your registry (see the sketch after this procedure).
    docker load -i <container image file name.tar>
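The docker load command only restores the image into the local Docker daemon. A hedged sketch of making the image available in your private registry follows; the registry host, namespace, image name, and tag are assumptions and must match your environment:

  # Retag the loaded image for your private registry and push it
  docker tag $erImgTag <registry-host:port>/<namespace>/connectdirect:<tag>
  docker push <registry-host:port>/<namespace>/connectdirect:<tag>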

For Red Hat OpenShift Cluster

If your cluster is not connected to the internet, the deployment can be done in your cluster via connected or disconnected mirroring.

If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.

Before you begin

You must complete the steps in the following sections before you begin generating mirror manifests:
Important: If you intend to install by using a private container registry, your cluster must support ImageContentSourcePolicy (ICSP).

Prerequisites

Regardless of whether you plan to mirror the images with a bastion host or to the file system, you must satisfy the following prerequisites:
  • Red Hat® OpenShift® Container Platform requires you to have cluster admin access to run the deployment.
  • A Red Hat® OpenShift® Container Platform cluster must be installed.

Prepare a host

If you are in an air-gapped environment, you must be able to connect a host to the internet and to the mirror registry for connected mirroring, or mirror the images to a file system that can be brought into the restricted environment for disconnected mirroring. For information on the latest supported operating systems, see the ibm-pak plugin install documentation.

The following table lists the software requirements for mirroring the IBM Cloud Pak images:
Table 1. Software requirements for mirroring the IBM Cloud Pak images
Software                        Purpose
Docker                          Container management
Podman                          Container management
Red Hat OpenShift CLI (oc)      Red Hat OpenShift Container Platform administration
Complete the following steps on your host:
  1. Install Docker or Podman.
    To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    yum check-update
    yum install docker
    

    To install Podman, see Podman Installation Instructions.

  2. Install the oc Red Hat® OpenShift® Container Platform CLI tool.
  3. Download and install the most recent version of IBM Catalog Management Plug-in for IBM Cloud Paks from the IBM/ibm-pak repository. Extract the binary file by entering the following command:
    tar -xf oc-ibm_pak-linux-amd64.tar.gz
    Run the following command to move the file to the /usr/local/bin directory:
    Note: If you are installing as a non-root user you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
    mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
    Note: Download the plug-in based on the host operating system. You can confirm that the plug-in is installed by running the following command:
    oc ibm-pak --help

    The plug-in usage is displayed.

    For more information on plug-in commands, see command-help.

Your host is now configured and you are ready to mirror your images.

Creating registry namespaces

Top-level namespaces are the namespaces which appear at the root path of your private registry. For example, if your registry is hosted at myregistry.com:5000, then mynamespace in myregistry.com:5000/mynamespace is defined as a top-level namespace. There can be many top-level namespaces.

When the images are mirrored to your private registry, it is required that the top-level namespace where images are getting mirrored already exists or can be automatically created during the image push. If your registry does not allow automatic creation of top-level namespaces, you must create them manually.

When you generate mirror manifests, you can specify the top-level namespace where you want to mirror the images by setting TARGET_REGISTRY to myregistry.com:5000/mynamespace which has the benefit of needing to create only one namespace mynamespace in your registry if it does not allow automatic creation of namespaces. The top-level namespaces can also be provided in the final registry by using --final-registry.

If you do not specify your own top-level namespace, the mirroring process will use the ones which are specified by the CASEs. For example, it will try to mirror the images at myregistry.com:5000/cp etc.

So if your registry does not allow automatic creation of top-level namespaces and you are not going to use your own during generation of mirror manifests, then you must create the following namespace at the root of your registry.
  • cp

There can be more top-level namespaces that you might need to create. See section on Generate mirror manifests for information on how to use the oc ibm-pak describe command to list all the top-level namespaces.

Set environment variables and download CASE files

If your host must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server.

If you are mirroring via connected mirroring, set the following environment variables on the machine that accesses the internet via the proxy server:
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port

# Example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
Before mirroring your images, you can set the environment variables on your mirroring device, and connect to the internet so that you can download the corresponding CASE files. To finish preparing your host, complete the following steps:
Note: Save a copy of your environment variable values to a text editor. You can use that file as a reference to cut and paste from when you finish mirroring images to your registry.
  1. Create the following environment variables with the installer image name and the version:
    export CASE_NAME=ibm-connect-direct
    export CASE_VERSION=<case_version>

    To find the CASE name and version, see IBM: Product CASE to Application Version.

  2. Connect your host to the internet.
  3. The plug-in can detect the locale of your environment and provide textual helps and messages accordingly. You can optionally set the locale by running the following command:
    oc ibm-pak config locale -l LOCALE

    where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.

  4. Configure the plug-in to download CASEs as OCI artifacts from IBM Cloud Container Registry (ICCR).
    oc ibm-pak config repo 'IBM Cloud-Pak OCI registry' -r oci:cp.icr.io/cpopen --enable
  5. Enable color output (optional with v1.4.0 and later)
    oc ibm-pak config color --enable true
  6. Download the image inventory for your IBM Cloud Pak to your host.
    Tip: If you do not specify the CASE version, it will download the latest CASE.
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    

By default, the root directory used by plug-in is ~/.ibm-pak. This means that the preceding command will download the CASE under ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root directory by setting the IBMPAK_HOME environment variable. Assuming IBMPAK_HOME is set, the preceding command will download the CASE under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.

The log files will be available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.

Your host is now configured and you are ready to mirror your images.

Mirroring images to your private container registry

The process of mirroring images takes the image from the internet to your host, then effectively copies that image to your private container registry. After you mirror your images, you can configure your cluster and complete air-gapped installation.

Complete the following steps to mirror your images from your host to your private container registry:
  1. Generate mirror manifests
  2. Authenticating the registry
  3. Mirror images to final location
  4. Configure the cluster
  5. Install IBM Cloud® Paks by way of Red Hat OpenShift Container Platform
Generate mirror manifests
Note:
  • If you want to install subsequent updates to your air-gapped environment, you must do a CASE get to get the image list when performing those updates. A registry namespace suffix can optionally be specified on the target registry to group mirrored images.

  • Define the environment variable $TARGET_REGISTRY by running the following command:
    export TARGET_REGISTRY=<target-registry>
    

    The <target-registry> refers to the registry (hostname and port) where your images will be mirrored to and accessed by the oc cluster. For example setting TARGET_REGISTRY to myregistry.com:5000/mynamespace will create manifests such that images will be mirrored to the top-level namespace mynamespace.

  • Run the following commands to generate mirror manifests to be used when mirroring from a bastion host (connected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       $TARGET_REGISTRY \
       --version $CASE_VERSION
    
    Example ~/.ibm-pak directory structure for connected mirroring
    The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure for connected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── YOUR-CASE-NAME
    │   │       └── YOUR-CASE-VERSION
    │   │           ├── XXXXX
    │   │           ├── XXXXX
    │   └── mirror
    │       └── YOUR-CASE-NAME
    │           └── YOUR-CASE-VERSION
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               └── images-mapping.txt
    └── logs
       └── oc-ibm_pak.log
    

    Notes: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.

    Tip: If you are using a Red Hat® Quay.io registry and need to mirror images to a specific organization in the registry, you can target that organization by specifying:
       export ORGANIZATION=<your-organization>
       oc ibm-pak generate mirror-manifests \
          $CASE_NAME \
          $TARGET_REGISTRY/$ORGANIZATION \
          --version $CASE_VERSION
    
You can also generate manifests to mirror images to an intermediate registry server and then to a final registry server. This is done by passing the final registry server as an argument to --final-registry:
   oc ibm-pak generate mirror-manifests \
      $CASE_NAME \
      $INTERMEDIATE_REGISTRY \
      --version $CASE_VERSION \
      --final-registry $FINAL_REGISTRY

In this case, in place of a single mapping file (images-mapping.txt), two mapping files are created.

  1. images-mapping-to-registry.txt
  2. images-mapping-from-registry.txt
  1. Run the following commands to generate mirror manifests to be used when mirroring from a file system (disconnected mirroring):
    oc ibm-pak generate mirror-manifests \
       $CASE_NAME \
       file://local \
       --final-registry $TARGET_REGISTRY
    
    Example ~/.ibm-pak directory structure for disconnected mirroring
    The following tree shows an example of the ~/.ibm-pak directory structure for disconnected mirroring:
    tree ~/.ibm-pak
    /root/.ibm-pak
    ├── config
    │   └── config.yaml
    ├── data
    │   ├── cases
    │   │   └── ibm-cp-common-services
    │   │       └── 1.9.0
    │   │           ├── XXXX
    │   │           ├── XXXX
    │   └── mirror
    │       └── ibm-cp-common-services
    │           └── 1.9.0
    │               ├── catalog-sources.yaml
    │               ├── image-content-source-policy.yaml
    │               ├── images-mapping-to-filesystem.txt
    │               └── images-mapping-from-filesystem.txt
    └── logs
       └── oc-ibm_pak.log
    
    Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping-to-filesystem.txt, images-mapping-from-filesystem.txt, and catalog-sources.yaml files.
Tip: Some products support generating mirror manifests for only a subset of images by using the --filter argument and image grouping. The --filter argument provides the ability to customize which images are mirrored during an air-gapped installation. As an example of this functionality, the ibm-cloud-native-postgresql CASE contains groups that allow mirroring a specific variant of ibm-cloud-native-postgresql (Standard or Enterprise). Use the --filter argument to target a variant of ibm-cloud-native-postgresql to mirror rather than the entire library. The filtering can be applied for groups and architectures. Consider the following command:
   oc ibm-pak generate mirror-manifests \
      ibm-cloud-native-postgresql \
      file://local \
      --final-registry $TARGET_REGISTRY \
      --filter $GROUPS

The command was updated with a --filter argument. For example, for $GROUPS equal to ibmEdbStandard the mirror manifests will be generated only for the images associated with ibm-cloud-native-postgresql in its Standard variant. The resulting image group consists of images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any groups. This allows products to include common images as well as the ability to reduce the number of images that you need to mirror.

Note: You can use the following command to list all the images that will be mirrored and the publicly accessible registries from where those images will be pulled from:
   oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
Tip: The output of the preceding command will have two sections:
  1. Mirroring Details from Source to Target Registry
  2. Mirroring Details from Target to Final Registry. A connected mirroring path that does not involve an intermediate registry will only have the first section.

    Note down the Registries found subsections in the preceding command output. You will need to authenticate against those registries so that the images can be pulled and mirrored to your local registry. See the next steps on authentication. The Top level namespaces found section shows the list of namespaces under which the images will be mirrored. These namespaces should be created manually at the root path of your registry (which appears in the Destination column of the command output) if your registry does not allow automatic creation of namespaces.

Authenticating the registry

Complete the following steps to authenticate your registries:

  1. Store authentication credentials for all source Docker registries.

    Your product might require one or more authenticated registries. The following registries require authentication:

    • cp.icr.io
    • registry.redhat.io
    • registry.access.redhat.com

    You must run the following command to configure credentials for all target registries that require authentication. Run the command separately for each registry:

    Note: The export REGISTRY_AUTH_FILE command only needs to run once.
    export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
    podman login <TARGET_REGISTRY>
    
    Important: When you log in to cp.icr.io, you must specify the user as cp and the password which is your Entitlement key from the IBM Cloud Container Registry. For example:
    podman login cp.icr.io
    Username: cp
    Password:
    Login Succeeded!
    

For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then after performing podman login, you can see that the file is populated with registry credentials.

If you use docker login, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After docker login, export REGISTRY_AUTH_FILE to point to that location. For example, on Linux you can issue the following command:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
Table 2. Directory description
Directory Description
~/.ibm-pak/config Stores the default configuration of the plug-in and has information about the public GitHub URL from where the cases are downloaded.
~/.ibm-pak/data/cases This directory stores the CASE files when they are downloaded by issuing the oc ibm-pak get command.
~/.ibm-pak/data/mirror This directory stores the image-mapping files, ImageContentSourcePolicy manifest in image-content-source-policy.yaml and CatalogSource manifest in one or more catalog-sourcesXXX.yaml. The files images-mapping-to-filesystem.txt and images-mapping-from-filesystem.txt are input to the oc image mirror command, which copies the images to the file system and from the file system to the registry respectively.
~/.ibm-pak/data/logs This directory contains the oc-ibm_pak.log file, which captures all the logs generated by the plug-in.
Mirror images to final location

Complete the steps in this section on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster.

  1. Mirror images to the final location.

    • For mirroring from a bastion host (connected mirroring):

      Mirror images to the TARGET_REGISTRY:
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true
      

      If you generated manifests in the previous steps to mirror images to an intermediate registry server followed by a final registry server, run the following commands:

      1. Mirror images to the intermediate registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        
      2. Mirror images from the intermediate registry server to the final registry server:
        oc image mirror \
          -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-registry.txt \
          --filter-by-os '.*'  \
          -a $REGISTRY_AUTH_FILE \
          --insecure  \
          --skip-multiple-scopes \
          --max-per-registry=1 \
          --continue-on-error=true
        

        The oc image mirror --help command can be run to see all the options available on the mirror command. Note that we use continue-on-error to indicate that the command should try to mirror as much as possible and continue on errors.

        oc image mirror --help
        
        Note: Depending on the number and size of the images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that the mirroring continues even if the network connection to your remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt.
        nohup oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
        -a $REGISTRY_AUTH_FILE \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true > my-mirror-progress.txt  2>&1 &
        
        You can view the progress of the mirror by issuing the following command on the remote machine:
        tail -f my-mirror-progress.txt
        
    • For mirroring from a file system (disconnected mirroring):

      Mirror images to your file system:
       export IMAGE_PATH=<image-path>
       oc image mirror \
         -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
         --filter-by-os '.*'  \
         -a $REGISTRY_AUTH_FILE \
         --insecure  \
         --skip-multiple-scopes \
         --max-per-registry=1 \
         --continue-on-error=true \
         --dir "$IMAGE_PATH"
      

      The <image-path> refers to the local path to store the images. For example, if you provided file://local as input during generate mirror-manifests in the previous section, the preceding command creates a subdirectory v2/local inside the directory referred to by <image-path> and copies the images under it.

    The following command can be used to see all the options available on the mirror command. Note that continue-on-error is used to indicate that the command should try to mirror as much as possible and continue on errors.

    oc image mirror --help
    
    Note: Depending on the number and size of the images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that even if you lose the network connection to your remote machine or you close the terminal, the mirroring will continue. For example, the following command will start the mirroring process in the background and write the log to my-mirror-progress.txt.
     export IMAGE_PATH=<image-path>
     nohup oc image mirror \
       -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
       --filter-by-os '.*' \
       -a $REGISTRY_AUTH_FILE \
       --insecure \
       --skip-multiple-scopes \
       --max-per-registry=1 \
       --continue-on-error=true \
       --dir "$IMAGE_PATH" > my-mirror-progress.txt  2>&1 &
    

    You can view the progress of the mirror by issuing the following command on the remote machine:

    tail -f my-mirror-progress.txt
    
  2. For disconnected mirroring only: Continue to move the following items to your file system:

    • The <image-path> directory you specified in the previous step
    • The auth file referred by $REGISTRY_AUTH_FILE
    • ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt
  3. For disconnected mirroring only: Mirror images to the target registry from file system

    Complete the steps in this section on the machine that holds your file system copy of the images, to copy the images from the file system to the $TARGET_REGISTRY. That machine must be connected to the target Docker registry.

    Important: If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the images-mapping-from-filesystem.txt file with the actual registry where you want to mirror the images (see the sketch after this procedure). For example, if you want to mirror images to myregistry.com/mynamespace, replace TARGET_REGISTRY with myregistry.com/mynamespace.
    1. Run the following command to copy the images (referred in the images-mapping-from-filesystem.txt file) from the directory referred by <image-path> to the final target registry:
      export IMAGE_PATH=<image-path>
      oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
        -a $REGISTRY_AUTH_FILE \
        --from-dir "$IMAGE_PATH" \
        --filter-by-os '.*' \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true
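As an illustration of the find-and-replace mentioned in the Important note above, a hedged sketch using sed follows; the registry value myregistry.com/mynamespace is an assumption and must be replaced with your actual registry:

    # Replace the TARGET_REGISTRY placeholder in the mapping file with the real registry
    sed -i 's|TARGET_REGISTRY|myregistry.com/mynamespace|g' \
      ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt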
Configure the cluster
  1. Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps in Updating the global cluster pull secret.

    The documented steps in the link enable your cluster to have proper authentication credentials in place to pull images from your TARGET_REGISTRY as specified in the image-content-source-policy.yaml which you will apply to your cluster in the next step.

  2. Create ImageContentSourcePolicy

    Important:
    • Before you run the command in this step, you must be logged into your OpenShift cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.

      • If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in file, ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml with the actual registry where you want to mirror the images. For example, replace TARGET_REGISTRY with myregistry.com/mynamespace.

    Run the following command to create ImageContentSourcePolicy:

       oc apply -f  ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
    

    If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and restart sequentially to apply the configuration changes.

  3. Verify that the ImageContentSourcePolicy resource is created.

    oc get imageContentSourcePolicy
    
  4. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool
    
    $ oc get MachineConfigPool -w
    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    master   rendered-master-53bda7041038b8007b038c08014626dc   True      False      False      3              3                   3                     0                      10d
    worker   rendered-worker-b54afa4063414a9038958c766e8109f7   True      False      False      3              3                   3                     0                      10d
    

     After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are in the UPDATED=True status before proceeding.

  5. Go to the project where deployment has to be done:

    Note: You must be logged into a cluster before performing the following steps.
    export NAMESPACE=<YOUR_NAMESPACE>
    
    oc new-project $NAMESPACE
    
  6. Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list.

    oc patch image.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
    
  7. Verify your cluster node status and wait for all the nodes to be restarted before proceeding.

    oc get MachineConfigPool -w
    

    After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated.

    At this point, your cluster is ready for the IBM Connect:Direct for UNIX deployment. The Helm chart is present in the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz directory. Use it for deployment and copy it to the current directory.

    cp ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz .
    Note: Replace the version (1.2.x) with the actual chart version in the above command.
  8. Configuration required in the Helm chart: To use the image mirroring in the OpenShift cluster, the Helm chart should be configured to use the digest value when referring to the container image. Set image.digest.enabled to true in the values.yaml file or pass this parameter using the Helm CLI, as sketched below.
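For illustration, either of the following hedged approaches enables digest-based image references; confirm the exact parameter structure against the chart's values.yaml:

    # Option 1: values.yaml fragment
    image:
      digest:
        enabled: true

    # Option 2: Helm CLI flag passed at install time
    --set image.digest.enabled=true
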
Setting up a repeatable mirroring process

Once you complete a CASE save, you can mirror the CASE as many times as you want to. This approach allows you to mirror a specific version of the IBM Cloud Pak into development, test, and production stages using a private container registry.

Follow the steps in this section if you want to save the CASE to multiple registries (per environment) once and be able to run the CASE in the future without repeating the CASE save process.

  1. Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION which can be used as an input during the mirror manifest generation:
    oc ibm-pak get \
    $CASE_NAME \
    --version $CASE_VERSION
    
  2. Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt:
    oc ibm-pak generate mirror-manifests \
    $CASE_NAME \
    $TARGET_REGISTRY \
    --version $CASE_VERSION
    
    Then add the images-mapping.txt to the oc image mirror command:
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*'  \
      -a $REGISTRY_AUTH_FILE \
      --insecure  \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    

If you want to make this repeatable across environments, you can reuse the same saved CASE cache (~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION) instead of executing a CASE save again in other environments. You do not have to worry about updated versions of dependencies being brought into the saved cache.

Image Signature Verification

To verify that only IBM-signed images are pulled into the cluster’s registry, set up image signature verification. This step is optional.

Prerequisites

  1. Access to Entitled Registry (ER)
  2. The skopeo package is installed. Refer to https://github.com/containers/skopeo for installation guidance.
  3. Access to the destination image repository where images will be pulled.
  4. Download and Extract Key and Certificates:
    1. Download the ZIP file and extract its contents. To download the ZIP file, click here.
    2. Extracted Files:
      connectdirectkey.pub.gpg – Public key for verifying the image signature
      connectdirectkey.pem.cer and connectdirectkey.pem.chain – Certificate files to validate the chain of trust used for signing the container image

Verifying the Image

Follow the steps below to verify the image:
  1. Automatic Signature Verification:
    1. Policy Configuration: Create or update the policy file located at /etc/containers/policy.json and set it according to the following configuration:
      {
          "default": [
              {
                  "type": "reject"
              }
          ],
          "transports": {
              "docker": {
                  "cp.icr.io/cp/ibm-connectdirect": [
                    {
                         "type": "signedBy",
                         "keyType": "GPGKeys",
                         "keyPath": "/path/to/connectdirectkey.pub.gpg"
                    }
                  ]
              }
          }  
      }
      Note: For unsigned images, set "type":"insecureAcceptAnything" for IBM Production Entitled Registry in the /etc/containers/policy.json file.
    2. Pull the Image: Use the following command to pull images to your local registry:
      skopeo copy docker://cp.icr.io/cp/ibm-connectdirect/<imagename>:<tag> docker://<local_repository>:<tag> --src-creds iamapikey:key --dest-creds username:password
  2. Manual Signature Verification:
    1. Copy the Public Key to the local filesystem for verification – connectdirectkey.pub.gpg.
    2. Import the Public Key to the GPG keystore:
      sudo gpg --import path/to/connectdirectkey.pub.gpg
    3. Get the fingerprint for the imported key
      sudo gpg -k

      or

      export FINGERPRINT=$(gpg --fingerprint --with-colons | grep fpr | tr -d 'fpr:')
      
    4. Pull the Image Locally:
      skopeo copy docker://cp.icr.io/cp/ibm-connectdirect/<imagename>:<tag> dir:<imagedir> --src-creds="iamapikey:key"
    5. Verify the Signature:
      skopeo standalone-verify <imagedir>/manifest.json <local_image_reference/repo:tag> <gpgkeyfingerprint> <imagedir>/signature
      
    6. Expected Result:
      Signature verified, digest sha256:<xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  3. Certificate Verification
    • Compare the Certificate and Public Key:

      • Display the certificate details:
        openssl x509 -text -in connectdirectkey.pem.cer
        ##shows the certificate details, e.g. it is signed by IBM and Digicert
      • Display public key details:
        gpg -v --list-packets connectdirectkey.pub.gpg
        #shows the public key details

      Modulus and Exponent Verification:

      Certificate Modulus:
      00:cf:61:02:67:1b:90:09:34:0b:be:f8:8b:16:2f: 
      5a:73:57:ab:02:a2:42:a3:05:ee:9b:ec:40:8c:b7: 
      Exponent: 65537 (0x10001)
      Public key:
      ... 
      pkey[0]: CF6102671B9009340BBEF88B162F5A7357AB02A242A305EE9B 
      pkey[1]: 010001
      • Ensure that the public key modulus and exponent match the certificate’s details.
    • Certificate Validity Check:
      openssl ocsp -no_nonce -issuer CertKeyAlias.pem.chain -cert CertKeyAlias.pem.cer -VAfile CertKeyAlias.pem.chain -text -url http://ocsp.digicert.com -respout ocsptest #check if the cert is still valid
    Note: The certificate is refreshed every two years.

Applying Pod Security Standard on a Kubernetes Cluster

A Pod Security Standard should be applied to the namespace if a Kubernetes cluster at v1.25 or later is used. This Helm chart has been certified with the baseline Pod Security Standard at the enforce level. For more details, refer to Pod Security Standards.
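For illustration, the baseline standard can be enforced on a namespace by labeling it as in the following sketch; replace <namespace> with your deployment namespace:

  # Enforce the baseline Pod Security Standard on the deployment namespace
  kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=baseline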

Creating security context constraints for Red Hat OpenShift Cluster

  • The IBM Connect:Direct for Unix chart requires a SecurityContextConstraints (SCC) resource to be tied to the target namespace prior to deployment.
    Based on your organization's security policy, you may need to decide the security context constraints for your OpenShift cluster. This chart has been verified on the privileged SCC that ships with Red Hat OpenShift. For more information, please refer to this link.
    IBM CCS requires a custom SCC, which is the minimum set of permissions/capabilities needed to deploy this Helm chart and for the Connect Direct for Unix services to function properly. It is based on the predefined restricted SCC with extra required privileges. This is the recommended SCC for this chart, and it can be created by the cluster administrator. The cluster administrator can either use the snippets given below or the scripts provided in the Helm chart to create the SCC and cluster role and tie them to the project where the deployment will be performed. In both cases, the same SCC and cluster role are created. It is recommended to use the scripts in the Helm chart so that the required SCC and cluster role are created without any issue.
    Attention: If the Standard User Mode feature is enabled, the SCC is slightly different. For more information, see the SCC below.
  • Below is the custom SecurityContextConstraints snippet for Connect Direct for Unix operating in Standard User Mode. For more information, refer to Standard User Mode in IBM Connect:Direct for Unix Containers.
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: ibm-connect-direct-scc
      labels:  
        app: "ibm-connect-direct-scc"
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegedContainer: false
    allowPrivilegeEscalation: true
    allowedCapabilities:
    - SETUID
    - SETGID
    - DAC_OVERRIDE
    - AUDIT_WRITE
    defaultAddCapabilities: []
    defaultAllowPrivilegeEscalation: false
    forbiddenSysctls:
    - "*"
    fsGroup:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - ALL
    runAsUser:
      type: MustRunAsNonRoot
    seLinuxContext:
      type: MustRunAs
    supplementalGroups:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - nfs
    - persistentVolumeClaim
    - projected
    - secret
    priority: 0
  • Below is the Custom SecurityContextConstraints snippet for Connect Direct for Unix operating in Super User Mode.
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: ibm-connect-direct-scc
      labels:
        app: "ibm-connect-direct-scc"
    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegedContainer: false
    allowPrivilegeEscalation: true
    allowedCapabilities:
    - FOWNER
    - SETUID
    - SETGID
    - DAC_OVERRIDE
    - CHOWN
    - SYS_CHROOT
    - AUDIT_WRITE
    defaultAddCapabilities: []
    defaultAllowPrivilegeEscalation: false
    forbiddenSysctls:
    - "*"
    fsGroup:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
    - ALL
    runAsUser:
      type: MustRunAsNonRoot
    seLinuxContext:
      type: MustRunAs
    supplementalGroups:
      type: MustRunAs
      ranges:
      - min: 1
        max: 4294967294
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - nfs
    - persistentVolumeClaim
    - projected
    - secret
    priority: 0
  • Custom ClusterRole for the custom SecurityContextConstraints
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: "ibm-connect-direct-scc"
      labels:
        app: "ibm-connect-direct-scc"
    rules:
    - apiGroups:
      - security.openshift.io
      resourceNames:
      - ibm-connect-direct-scc
      resources:
      - securitycontextconstraints
      verbs:
      - use
  • From the command line, you can run the setup scripts included in the Helm chart (untar the downloaded Helm chart archive).
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh <pass 0 or 1 to disable/enable OUM feature>
    ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh <project name where deployment will be performed>
    Note: If the above scripts are not executable, make them executable by running the following commands:
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
    chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh

Creating storage for Data Persistence

Containers are ephemeral entities; all data inside a container is lost when the container is destroyed or removed, so data must be saved to a storage volume by using a Persistent Volume. A Persistent Volume is recommended for Connect:Direct® for UNIX to store application data files. A Persistent Volume (PV) is a piece of storage in the cluster that is provisioned by an administrator or a dynamic provisioner using storage classes. For more information, see the Kubernetes documentation on Persistent Volumes.
IBM Certified Container Software for CDU supports:
  • Dynamic Provisioning using storage classes
  • Pre-created Persistent Volume
  • Pre-created Persistent Volume Claim
  • The only supported access mode is `ReadWriteOnce`

Dynamic Provisioning

Dynamic provisioning is supported using storage classes. To enable dynamic provisioning, use the following configuration for the Helm chart (see the sketch after this list):
  • persistence.useDynamicProvisioning - Must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
  • pvClaim.storageClassName - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the available storage classes required by this chart.
  • secret.certSecretName - Specify the certificate secret required for Secure Plus configuration or LDAP support. Update this parameter with a valid certificate secret. Refer to Creating secret for more information.
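For illustration only, these parameters could be passed on the Helm CLI as in the following hedged sketch; the release name, chart file name, storage class, secret name, and namespace are assumptions, and additional mandatory chart parameters may apply:

  helm install <release-name> ibm-connect-direct-1.2.x.tgz \
    --set persistence.useDynamicProvisioning=true \
    --set pvClaim.storageClassName=<storage-class-name> \
    --set secret.certSecretName=<certificate-secret-name> \
    -n <namespace>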

Invoke the following command to create the Storage Class:

kubectl apply -f <StorageClass yaml file>

Storage Class for Azure Kubernetes Cluster

In Azure, we support Disk Persistent Volumes for dynamic provisioning. The default storage class used for deployment is managed-premium.

Disk Storage Class in Azure typically refers to block storage, such as Azure Managed Disks. This storage type is persistent and commonly attached to a single node, meaning it is not generally shared across multiple nodes or instances.

Node Affinity for Single-Node Scheduling: To enable disk sharing by scheduling all pods on a single node, configure Node Affinity in the deployment.

Setting Node Affinity: Update the affinity section in the values.yaml file as shown below:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - aks-agentpool-15605297-vmss00000r

Explanation of matchExpressions: The matchExpressions section defines the node label requirements for pod scheduling. The key corresponds to the label's key, while values includes the allowed values for that key. In this example, the pod is scheduled to the node with the label kubernetes.io/hostname and the value aks-agentpool-15605297-vmss00000r.

You can create a Storage Class to support dynamic provisioning. Refer to the YAML template below for creating a Storage Class in an Azure Kubernetes Cluster and customize it as per your requirements.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-sc-fips
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  skuName: Premium_LRS
  protocol: nfs

This Storage Class uses the provisioner file.csi.azure.com, with skuName set to Premium_LRS and protocol set to nfs.

Note: The SKU is set to Premium_LRS in the YAML file because Premium SKU is required for NFS. For more information, see Storage class parameters for dynamic PersistentVolumes.

Non-Dynamic Provisioning

Non-dynamic provisioning is supported using a pre-created Persistent Volume and a pre-created Persistent Volume Claim. The storage volume should contain the Connect:Direct for UNIX Secure Plus certificate files to be used for installation. Create a directory named "CDFILES" inside the mount path and place the certificate files in that directory. Similarly, the LDAP certificates should be placed in the same directory.

Using a pre-created Persistent Volume - When creating the Persistent Volume, make a note of the storage class and metadata labels, which are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claims are bound to the Persistent Volume based on a label match. These labels can be passed to the Helm chart either by the --set flag or a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value respectively, as sketched below.
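For illustration, a hedged sketch of the selector flags that could be added to your helm install command, assuming the Persistent Volume carries the label purpose: cdconfig as in the template below:

  --set pvClaim.selector.label=purpose \
  --set pvClaim.selector.value=cdconfig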

Refer to the YAML template below for Persistent Volume creation and customize it as per your requirements. Example: Create a Persistent Volume using an NFS server.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <persistent volume name> 
  labels:
    app.kubernetes.io/name: <persistent volume name>
    app.kubernetes.io/instance: <release name>
    app.kubernetes.io/managed-by: <service name>
    helm.sh/chart: <chart name>
    release: <release name>
    purpose: cdconfig
spec:
  storageClassName: <storage classname>
  capacity:
    storage: <storage size>
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <NFS server IP address>
    path: <mount path>
Invoke the following command to create a Persistent Volume:
Kubernetes:
kubectl create -f <persistentVolume yaml file>
OpenShift:
oc create -f <persistentVolume yaml file>

Using a pre-created Persistent Volume Claim (PVC) - An existing PVC can also be used for deployment. The PV backing the PVC should have the certificate files required for the Connect:Direct for UNIX Secure Plus or LDAP TLS configuration. The parameter for a pre-created PVC is pvClaim.existingClaimName. You must pass a valid PVC name to this parameter; otherwise, the deployment will fail.

Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. These parameters are extraVolume and extraVolumeMounts. The extra volume can be a host path or an NFS type.

The deployment mounts the following configuration/resource directories on the Persistent Volume:
  • <install_dir>/work
  • <install_dir>/ndm/security
  • <install_dir>/ndm/cfg
  • <install_dir>/ndm/secure+
  • <install_dir>/process
  • <install_dir>/file_agent/config
  • <install_dir>/file_agent/log
When the deployment is upgraded or the pod is recreated in a Kubernetes-based cluster, only the data in the above directories is saved/persisted on the Persistent Volume.

Setting permission on storage

When shared storage is mounted on a container, it is mounted with the same POSIX ownership and permissions present on the exported NFS directory. The mounted directories in the container may not have the correct owner and permissions needed to execute scripts/binaries or write to them. This situation can be handled as follows:
  • Option A: The easiest, but not recommended, solution is to set open permissions on the NFS exported directories.
     chmod -R 777 <path-to-directory>
  • Option B: Alternatively, the permissions can be controlled at the group level by leveraging the supplementalGroups and fsGroup settings. For example, if you want to add a GID to supplementalGroups or fsGroup, it can be done using storageSecurity.supplementalGroups or storageSecurity.fsGroup (see the sketch after this list).
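For Option B, a minimal values.yaml sketch follows; the exact structure should be confirmed against the chart's values.yaml, and the GID 2000 is an assumption standing in for the group that owns the NFS export:

  storageSecurity:
    fsGroup: 2000              # GID applied to mounted volumes (assumed example)
    supplementalGroups:
      - 2000                   # GID of the group owning the NFS export (assumed example)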
Apart from the above recommendation, during deployment a default Connect:Direct admin user cdadmin with group cdadmin is created. The default UID and GID of cdadmin is 45678. A non-admin Connect:Direct user, appuser, is also created with the default UID and GID set to 45679.
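For example, the group-level approach described in Option B can be expressed in a custom values.yaml file as in the following sketch; the GID 2500 is a hypothetical group ID of the exported NFS directory, and the exact key layout (scalar versus list) should be verified against the values.yaml file shipped with the chart.

storageSecurity:
  # default file system group used by the cdadmin user
  fsGroup: 45678
  # hypothetical GID of the exported NFS directory
  supplementalGroups: 2500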

Root Squash NFS support

Root squash NFS is a secure NFS share in which root privileges are reduced to those of an unprivileged user. This user is mapped to the nfsnobody or nobody user on the system, so you cannot perform operations such as changing the ownership of files/directories.

The Connect:Direct for UNIX helm chart can be deployed on root squash NFS. Because the files/directories mounted in the container are owned by nfsnobody or nobody, the POSIX group ID of the root squash NFS share should be added to the Supplemental Group list of the statefulset using storageSecurity.supplementalGroups in the values.yaml file. Similarly, if an extra NFS share is mounted, proper read/write permission can be provided to the container user using supplemental groups only.

Creating secret

Passwords are used for the KeyStore, by the Administrator to connect to the Connect:Direct server, and to decrypt certificate files.

To separate application secrets from the Helm release, a Kubernetes secret must be created based on the examples given below and referenced in the Helm chart through the secret.secretName value.

To create Secrets using the command line, follow the steps below:
  1. Create a template file with Secret defined as described in the example below:
    apiVersion: v1
    kind: Secret
    metadata:
      name: <secret name>
    type: Opaque
    data:
      admPwd: <base64 encoded password>
      crtPwd: <base64 encoded password>
      keyPwd: <base64 encoded password>
      appUserPwd: <base64 encoded password>
    Here:
    • admPwd refers to the password that will be set for the Admin user 'cdadmin' after a successful deployment.
    • crtPwd refers to the passphrase of the identity certificate file passed in cdArgs.crtName for secure plus configuration.
    • keyPwd refers to the Key Store password.
    • appUserPwd refers to the password for a non-admin Connect:Direct user. The password for this user is mandatory for IBM® Connect:Direct for UNIX operating in Ordinary User Mode.
    • After the secret is created, delete the yaml file for security reasons.
    Note: Base64 encoded passwords need to be generated manually by invoking the following command:
    echo -n "<your desired password>" | base64
    Use the output of this command in the <secret yaml file>.
  2. Run the following command to create the Secret:
    Kubernetes:
    kubectl create -f <secret yaml file>
    OpenShift:
    oc create -f <secret yaml file>
    To check the secret created invoke the following command:
    kubectl get secrets

    For more details see, Secrets.

    Default Kubernetes secrets management has certain security risks as documented here, Kubernetes Security.

    Users should evaluate Kubernetes secrets management based on their enterprise policy requirements and should take steps to harden security.

  3. For dynamic provisioning, one more secret resource needs to be created for all certificates (secure plus certificates and LDAP certificates). It can be created using the example below as required:
    Kubernetes:
    kubectl create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
    OpenShift:
    oc create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
    Note:
    • The secret resource created above should be referenced by the Helm chart for dynamic provisioning using the parameter secret.certSecretName (see the values.yaml fragment below).
    • For the K8s secret object creation, ensure that the certificate files being used contain the identity certificate. Configure the parameter cdArgs.crtName with the certificate file having the appropriate file extension that corresponds to the identity certificate.
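The following is a minimal values.yaml fragment showing how both secrets can be referenced at deployment time, assuming the nested key layout implied by the secret.secretName and secret.certSecretName parameter names in Table 3; the names shown are the examples used above and should be replaced with your own.

secret:
  # Secret holding admPwd, crtPwd, keyPwd and appUserPwd
  secretName: <secret name>
  # Secret holding the certificate files (dynamic provisioning only)
  certSecretName: cd-cert-secret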

Configuring - Understanding values.yaml

The following table describes configuration parameters listed in the values.yaml file in Helm charts used to complete the installation.
Table 3. Configuration Parameters listed in values.yaml file
Parameter Description Default Value
licenseType Specify prod or non-prod for production or non-production license type respectively prod
license License agreement. Set true to accept the license. false
env.extraEnvs Specify extra environment variable if needed  
env.timezone Timezone UTC
arch Node Architecture amd64
replicaCount Number of deployment replicas 1
image.repository Image full name including repository  
image.tag Image tag  
digest.enabled Enable/Disable digest of image to be used false
digest.value The digest value for the image  
image.imageSecrets Image pull secrets  
image.pullPolicy Image pull policy Always
upgradeCompCheck This parameter is intended to acknowledge a change in the system username within the container. Acknowledging this change is crucial before proceeding with the upgrade. false
cdArgs.nodeName Node name cdnode
cdArgs.crtName Certificate file name  
cdArgs.localCertLabel Specify certificate import label in keystore Client-API
cdArgs.cport Client Port 1363
cdArgs.sport Server Port 1364
saclConfig Configuration for SACL n
cdArgs.configDir Directory for storing Connect:Direct configuration files CDFILES
cdArgs.trustedAddr   []
cdArgs.keys.server   * MRLN SIMP Cd4Unix/Cd4Unix
cdArgs.keys.client   * MRLN SIMP Cd4Unix/Cd4Unix
oum.enabled Enable/Disable Ordinary User Mode feature y

storageSecurity.fsGroup Group ID for File System Group 45678
storageSecurity.supplementalGroups Group ID for Supplemental group 65534
persistence.enabled To use persistent volume true
pvClaim.existingClaimName Provide name of existing PV claim to be used  
persistence.useDynamicProvisioning To use storage classes to dynamically create PV false
pvClaim.accessMode Access mode for PV Claim ReadWriteOnce
pvClaim.storageClassName Storage class of the PVC  
pvClaim.selector.label PV label key to bind this PVC  
pvClaim.selector.value PV label value to bind this PVC  
pvClaim.size Size of PVC volume 100Mi
service.type Kubernetes service type exposing ports LoadBalancer
service.apiport.name API port name api
service.apiport.port API port number 1363
service.apiport.protocol Protocol for service TCP
service.ftport.name Server (File Transfer) Port name ft
service.ftport.port Server (File Transfer) Port number 1364
service.ftport.protocol Protocol for service TCP
service.loadBalancerIP Provide the LoadBalancer IP  
service.loadBalancerSourceRanges Provide Load Balancer Source IP ranges []
service.annotations Provide the annotations for service {}
service.externalTrafficPolicy Specify if external Traffic policy is needed  
service.sessionAffinity Specify session affinity type ClientIP
service.externalIP External IP for service discovery []
networkPolicyIngress.enabled Enable/Disable the ingress policy true
networkPolicyIngress.from Provide from specification for network policy for ingress traffic []
networkPolicyEgress.enabled Enable/Disable egress policy true
networkPolicyEgress.acceptNetPolChange This parameter is to acknowledge the Egress network policy introduction false
secret.certSecretName Name of secret resource of certificate files for dynamic provisioning  
secret.secretName Secret name for Connect:Direct password store  
resources.limits.cpu Container CPU limit 500m
resources.limits.memory Container memory limit 2000Mi
resources.limits.ephemeral-storage Specify ephemeral storage limit size for pod's container "5Gi"
resources.requests.cpu Container CPU requested 500m
resources.requests.memory Container Memory requested 2000Mi
resources.requests.ephemeral-storage Specify ephemeral storage request size for pod's container "3Gi"
serviceAccount.create Enable/disable service account creation true
serviceAccount.name Name of Service Account to use for container  
extraVolumeMounts Extra Volume mounts  
extraVolume Extra volumes  
affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution
affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution
affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution
affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution k8sPodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution
startupProbe.initialDelaySeconds Initial delay for startup probe 0
startupProbe.timeoutSeconds Timeout for startup probe 5
startupProbe.periodSeconds Time period between startup probes 5
livenessProbe.initialDelaySeconds Initial delay for liveness 10
livenessProbe.timeoutSeconds Timeout for liveness 5
livenessProbe.periodSeconds Time period for liveness 10
readinessProbe.initialDelaySeconds Initial delays for readiness 5
readinessProbe.timeoutSeconds Timeout for readiness 5
readinessProbe.periodSeconds Time period for readiness 10
route.enabled Route for OpenShift Enabled/Disabled false
ldap.enabled Enable/Disable LDAP configuration false
ldap.host LDAP server host  
ldap.port LDAP port  
ldap.domain LDAP Domain  
ldap.tls Enable/Disable LDAP TLS false
ldap.startTls Specify true/false for ldap_id_use_start_tls true
ldap.caCert LDAP CA Certificate name  
ldap.tlsReqcert Specify valid value - never, allow, try, demand, hard never
ldap.defaultBindDn Specify bind DN  
ldap.defaultAuthtokType Specify type of the authentication token of the default bind DN  
ldap.defaultAuthtok Specify authentication token of the default bind DN. Only clear text passwords are currently supported  
ldap.clientValidation Enable/Disable LDAP Client Validation false
ldap.clientCert LDAP Client Certificate name  
ldap.clientKey LDAP Client Certificate key name  
extraLabels Provide extra labels for all resources of this chart {}
cdfa.fileAgentEnable Specify y/n to Enable/Disable File Agent n
hpa.enabled Enables or disables Horizontal Pod Autoscaling (HPA) true
hpa.minReplicas Defines the minimum number of replicas that must be available at any time for the deployment. 1
hpa.maxReplicas Specifies the maximum number of replicas to which the deployment can scale up. 5
hpa.averageCpuUtilization Defines the target threshold for average CPU utilization (in percentage) that triggers scaling actions by the Horizontal Pod Autoscaler (HPA). 60
hpa.averageMemoryUtilization Defines the target threshold for average memory utilization (in percentage) that triggers scaling actions by the Horizontal Pod Autoscaler (HPA). 60
hpa.stabilizationWindowSeconds Specifies the wait period (in seconds) after a scaling action before the system evaluates and initiates another scaling event. 180
hpa.periodSeconds Defines the interval (in seconds) at which the Horizontal Pod Autoscaler (HPA) gathers metrics to assess if scaling actions are required. 15
pdb.enabled Enables or disables the Pod Disruption Budget (PDB). true
pdb.minAvailable Defines the minimum number of pods required to stay operational during voluntary disruptions to ensure availability. 1
Use the following steps to complete the installation:
To override configuration parameters during Helm installation, you can choose one of the following methods:

Method 1: Override Parameters Directly with CLI Using --set

This approach uses the --set argument to specify each parameter that needs to be overridden at the time of installation.

Example for Helm Version 2:

helm install --name <release-name> \
--set cdArgs.cport=9898 \
... 
ibm-connect-direct-1.4.x.tgz

Example for Helm Version 3:

helm install <release-name> \
--set cdArgs.cport=9898 \
... 
ibm-connect-direct-1.4.x.tgz

Method 2: Use a YAML File with Configured Parameters

Alternatively, specify configurable parameters in a values.yaml file and use it during installation. This approach can be helpful for managing multiple configurations in one place.

  • To obtain the values.yaml template from the Helm chart:

    • For Online Cluster:

      helm inspect values ibm-helm/ibm-connect-direct > my-values.yaml
      
    • For Offline Cluster:

      helm inspect values <path-to-ibm-connect-direct-helm-chart> > my-values.yaml
      
  • Edit the my-values.yaml file to include your desired configuration values and use it with the Helm installation command:

Example for Helm Version 2:

helm install --name <release-name> -f my-values.yaml ...  ibm-connect-direct-1.4.x.tgz

Example for Helm Version 3:

helm install <release-name> -f my-values.yaml ...  ibm-connect-direct-1.4.x.tgz
To mount extra volumes, use one of the following methods:

Method 1: YAML Configuration

For HostPath Configuration

extraVolumeMounts:
  - name: <name>
    mountPath: <path inside container>
extraVolume:
  - name: <same name as in extraVolumeMounts>
    hostPath:
      path: <path on host machine>
      type: DirectoryOrCreate
For NFS Server Configuration
extraVolumeMounts:
  - name: <name>
    mountPath: <path inside container>
extraVolume:
  - name: <same name as in extraVolumeMounts>
    nfs:
      path: <NFS data path>
      server: <server IP>

Method 2: Using --set Flag in CLI

For HostPath

helm install --name <release-name> \
  --set extraVolume[0].name=<name>,extraVolume[0].hostPath.path=<path on host machine>,extraVolume[0].hostPath.type="DirectoryOrCreate",extraVolumeMounts[0].name=<same name as in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
  ...  ibm-connect-direct-1.4.x.tgz

For NFS Server

helm install --name <release-name> \
  --set extraVolume[0].name=<name>,extraVolume[0].nfs.path=<NFS data path>,extraVolume[0].nfs.server=<NFS server IP>,extraVolumeMounts[0].name=<same name as in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
  ...  ibm-connect-direct-1.4.x.tgz

If extra volumes are mounted, ensure the container user (cdadmin/appuser) has appropriate read/write permissions. For instance, if an extra NFS share has a POSIX group ID of 3535, add this group ID as a supplemental group during deployment to ensure the container user is a member of this group.

Affinity

The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.

Note: For the exact parameters, their values, and their descriptions, refer to the values.yaml file present in the helm chart itself. Untar the helm chart package to see this file inside the chart directory.
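As an illustration only, the following sketch shows how a node affinity rule could be supplied in a custom values.yaml file, assuming the chart passes the affinity.* values through to the corresponding Kubernetes PodSpec fields listed in Table 3; the label key and values are placeholders.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64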

Understanding LDAP deployment parameters

This section demonstrates the implementation of the PAM and SSSD configuration with Connect:Direct for UNIX to authenticate external user accounts through OpenLDAP.
  • When the LDAP authentication is enabled, the container startup script automatically updates the initparam configuration to support the PAM module. The following line is added to initparam.cfg:
     ndm.pam:service=login:
  • The following default configuration file (/etc/sssd/sssd.conf) is added to the image.
    [domain/default]
    id_provider = ldap
    autofs_provider = ldap
    auth_provider = ldap
    chpass_provider = ldap
    ldap_uri = LDAP_PROTOCOL://LDAP_HOST:LDAP_PORT
    ldap_search_base = LDAP_DOMAIN
    ldap_id_use_start_tls = True
    ldap_tls_cacertdir = /etc/openldap/certs
    ldap_tls_cert = /etc/openldap/certs/LDAP_TLS_CERT_FILE
    ldap_tls_key = /etc/openldap/certs/LDAP_TLS_KEY_FILE
    cache_credentials = True
    ldap_tls_reqcert = allow
  • Description of the Certificates required for the configuration:
    • Mount certificates inside CDU Container:
      • Copy the certificates needed for the LDAP configuration to the mapped directory that is used to share the Connect:Direct for UNIX secure plus certificates (the CDFILES/cdcert directory by default).
    • DNS resolution: If TLS is enabled and the hostname of the LDAP server is passed as "ldap.host", then it must be ensured that the hostname can be resolved inside the container. It is the responsibility of the Cluster Administrator to ensure DNS resolution inside the pod's container.
    • Certificate creation and configuration: The following certificates are used in the configuration:
      • LDAP_CACERT - The root and all intermediate CA certificates need to be copied into one file.
      • LDAP_CLIENT_CERT – The client certificate which the server must be able to validate.
      • LDAP_CLIENT_KEY – The client certificate key.
    • Use the following parameters for the LDAP configuration (a sample values.yaml fragment follows this section):
      • ldap.enabled
      • ldap.host
      • ldap.port
      • ldap.domain
      • ldap.tls
      • ldap.startTls
      • ldap.caCert
      • ldap.tlsReqcert
      • ldap.defaultBindDn
      • ldap.defaultAuthtokType
      • ldap.defaultAuthtok
      • ldap.clientValidation
      • ldap.clientCert
      • ldap.clientKey
      Note:

      The IBM Connect:Direct for UNIX container uses the sssd utility for communication with LDAP, and the connection between sssd and the LDAP server must be encrypted.

      TLS configuration is mandatory for user authentication, which is required for file transfer using IBM Connect:Direct for UNIX.
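For reference, a hedged values.yaml fragment that enables LDAP authentication with TLS might look like the following sketch; the host, port, search base, and certificate file names are illustrative placeholders, and the exact value types should be verified against the values.yaml file shipped with the chart.

ldap:
  enabled: true
  host: <LDAP server host>
  port: <LDAP port>
  domain: <LDAP search base, for example dc=example,dc=com>
  tls: true
  startTls: true
  caCert: <LDAP CA certificate file name>
  tlsReqcert: demand
  clientValidation: true
  clientCert: <LDAP client certificate file name>
  clientKey: <LDAP client certificate key file name>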

Network Policy Change

Out of the box Network Policies

IBM Certified Container Software for Connect:Direct for UNIX comes with predefined network policies based on mandatory security guidelines. By default, all outbound communication is restricted, permitting only intra-cluster communication.

Out-of-the-box Egress Policies:
  1. Deny all Egress Traffic

  2. Allow Egress Traffic within the Cluster

Defining Custom Network Policy

During the deployment of the Helm chart, you have the flexibility to enable or disable network policies. If policies are enabled, a custom egress network policy is essential for communication outside the cluster. Since pods are confined to intra-cluster communication by default, the snippet below illustrates how to define an Egress Network Policy. This can serve as a reference during the Helm chart deployment:
networkPolicyEgress:
  enabled: true
  acceptNetPolChange: false
  # write your custom egress policy here for the "to" spec
  to: []
  #- namespaceSelector:
  #    matchLabels:
  #      name: my-label-to-match
  #  podSelector:
  #    matchLabels:
  #      app.kubernetes.io/name: "connectdirect"
  #- podSelector:
  #    matchLabels:
  #      role: server
  #- ipBlock:
  #    cidr: <IP Address>/<block size>
  #    except:
  #    - <IP Address>/<block size>
  #ports:
  #- protocol: TCP
  #  port: 1364
  #  endPort: 11364
Note:

In the latest release, a new Helm parameter, networkPolicyEgress.acceptNetPolChange, has been introduced. To proceed with the Helm chart upgrade, this parameter must be set to true. By default, it is set to false, and the upgrade won't proceed without this change.

Before this release, there was no Egress Network Policy. The new implementation might impact outbound traffic to external destinations. To mitigate this, a custom policy allowing external traffic needs to be created. Once this policy is in place, you can set the acceptNetPolChange parameter to true and proceed with the upgrade.
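As an illustration, a custom egress rule that allows outbound Connect:Direct file transfer traffic to an external network could be expressed in a custom values.yaml file as in the following sketch; the CIDR 192.0.2.0/24 and the port range are hypothetical and must be replaced with the addresses and ports of your remote nodes.

networkPolicyEgress:
  enabled: true
  acceptNetPolChange: true
  to:
    - ipBlock:
        cidr: 192.0.2.0/24
  ports:
    - protocol: TCP
      port: 1364
      endPort: 11364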

If you want to disable the network policy altogether, you can set networkPolicyEgress.enabled to false. Adjust these parameters based on your network and security requirements.

Refer to Table 3 for the supported configurable parameters in the Helm chart.

Installing IBM Connect:Direct for Unix using Helm chart

After completing all pre-installation tasks, you can deploy the IBM Certified Container Software for Connect:Direct for UNIX by invoking the following command:
Helm version 2

helm install --name my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.4.x.tgz
or
helm install --name my-release ibm-connect-direct-1.4.x.tgz -f my-values.yaml
Helm version 3

helm install my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.4.x.tgz
or
helm install my-release ibm-connect-direct-1.4.x.tgz -f my-values.yaml

This command deploys ibm-connect-direct-1.4.x.tgz chart on the Kubernetes cluster using the default configuration. Creating storage for Data Persistence lists parameters that can be configured at deployment.

Mandatory parameters required at the helm install command:
Parameter Description Default Value
license License agreement for IBM Certified Container Software false
image.repository Image full name including repository  
image.tag Image tag  
cdArgs.crtName Key Certificate file name  
image.imageSecrets Image pull secrets  
secret.secretName Secret name for Connect:Direct password store  

Validating the Installation

After the deployment procedure is complete, you should validate the deployment to ensure that everything is working according to your needs. The deployment may take approximately 4-5 minutes to complete.

To validate that the Certified Container Software deployment using Helm charts is successful, invoke the following commands for a Helm chart with release my-release and namespace my-namespace.
  • Check the Helm chart release status by invoking the following command and verify that the STATUS is DEPLOYED:
    helm status my-release
  • Wait for the pod to be ready. To verify the pod status (READY), use the dashboard or the command line interface by invoking the following command:
    kubectl get pods -l release=my-release -n my-namespace -o wide
  • To view the services and ports exposed to enable communication with a pod, invoke the following command:
    kubectl get svc -l release=my-release -n my-namespace -o wide

    The screen output displays the external IP and exposed ports under the EXTERNAL-IP and PORT(S) columns respectively. If an external LoadBalancer is not present, use the Master node IP as the external IP.

Exposed Services

If required, this chart can create a ClusterIP service for communication within the cluster. The service type can be changed while installing the chart using the service.type key defined in values.yaml. IBM Connect:Direct processes run on two ports: the API port (1363) and the FT port (1364). Their values can be updated during chart installation using service.apiport.port and service.ftport.port.
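For example, a minimal values.yaml fragment showing these service keys with the default values from Table 3 might look like the following sketch; adjust the service type and port numbers to your environment.

service:
  type: LoadBalancer
  apiport:
    name: api
    port: 1363
    protocol: TCP
  ftport:
    name: ft
    port: 1364
    protocol: TCP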

IBM Connect:Direct for UNIX services for API and file transfer can be accessed using the LoadBalancer or external IP and the mapped API and FT ports. If an external LoadBalancer is not present, refer to the Master node IP for communication.
Note: The NodePort service type is not recommended. It introduces additional security concerns and is hard to manage from both an application and networking infrastructure perspective.

DIME and DARE Security Considerations

This topic provides security recommendations for setting up Data In Motion Encryption (DIME) and Data At Rest Encryption (DARE). It is intended to help you create a secure implementation of the application.

  1. All sensitive application data at rest is stored in binary format, so users cannot read it directly. This chart does not support encryption of user data at rest by default. The administrator can configure storage encryption to encrypt all data at rest.
  2. Data in motion is encrypted using Transport Layer Security (TLS 1.3). For more information, see Secure Plus.