Installing
After reviewing the system requirements and other planning information, you can proceed to install IBM® Sterling Connect:Direct for UNIX Container.
The following tasks represent the typical task flow for performing the installation:
Setting up your registry server
To install IBM Sterling Connect:Direct for UNIX Container, you must have a registry server where you can host the image required for installation.
Using the existing registry server
If you have an existing registry server, you can use it, provided that it is in close proximity to the cluster where you will deploy Connect:Direct for UNIX Container. If your registry server is not in close proximity to your cluster, you might notice performance issues.
Also, before the installation, ensure that pull secrets are created in the namespace/project and linked to the service accounts. You must manage these pull secrets appropriately. The pull secret can be specified in the values.yaml file by using the image.imageSecrets parameter.
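As an illustration, a pull secret can be created and linked to a service account as in the following sketch; the secret name my-registry-secret, the registry host, the credentials, and the default service account are placeholder assumptions to adapt to your environment.
# Create the pull secret in the target namespace/project (placeholder values)
kubectl create secret docker-registry my-registry-secret \
  --docker-server=myregistry.example.com:5000 \
  --docker-username=<user> \
  --docker-password=<password> \
  -n <your namespace/project name>
# Link the pull secret to the service account used by the deployment
kubectl patch serviceaccount default -n <your namespace/project name> \
  -p '{"imagePullSecrets":[{"name":"my-registry-secret"}]}'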
Using Docker registry
Kubernetes does not provide a registry solution out of the box. However, you can create your own registry server and host your images. Refer to the Docker documentation on deploying a registry server.
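For example, a minimal self-hosted registry can be started with the community registry image; the port and storage path below are assumptions for illustration only.
# Run a basic private registry on port 5000, persisting data under /opt/registry
docker run -d --restart=always --name local-registry \
  -p 5000:5000 \
  -v /opt/registry:/var/lib/registry \
  registry:2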
Setting up a namespace or project
To install IBM Sterling Connect:Direct for UNIX Container, you must have an existing namespace or project, or create a new one if needed.
You can either use an existing namespace or create a new one in a Kubernetes cluster. Similarly, you can either use an existing project or create a new one in an OpenShift cluster.
A namespace or project is a cluster resource, so it can only be created by a cluster administrator. Refer to the following links for more details:
For Kubernetes - Namespaces
For Red Hat OpenShift - Working with projects
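For reference, a new namespace or project can be created as follows; the name connect-direct is an example only.
# Kubernetes
kubectl create namespace connect-direct
# Red Hat OpenShift
oc new-project connect-direct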
Connect:Direct for UNIX Container is integrated with the IBM Licensing and Metering service by using an Operator. You must install this service. For more information, refer to License Service deployment without an IBM Cloud Pak.
Installing and configuring IBM Licensing and Metering service
License Service is required for monitoring and measuring license usage of IBM Sterling Connect:Direct for UNIX Container in accordance with the pricing rule for containerized environments. Manual license measurements are not allowed. Deploy License Service on all clusters where IBM Sterling Connect:Direct for UNIX Container is installed.
IBM Sterling Connect:Direct for UNIX Container contains an integrated service for measuring the license usage at the cluster level for license evidence purposes.
Overview
The integrated licensing solution collects and stores the license usage information which can be used for audit purposes and for tracking license consumption in cloud environments. The solution works in the background and does not require any configuration. Only one instance of the License Service is deployed per cluster regardless of the number of containerized products that you have installed on the cluster.
Deploying License Service
Deploy License Service on each cluster where IBM Sterling Connect:Direct for UNIX Container is installed. License Service can be deployed on any Kubernetes-based orchestration cluster. For more information about License Service, including how to install and use it, see the License Service documentation.
Validating if License Service is deployed on the cluster
To ensure continuity in license reporting for compliance purposes, verify that the License Service is successfully deployed. It’s a good practice to periodically check that the service is active.
To confirm that the License Service is deployed and running on your cluster, log in and run the following command:
Kubernetes:
kubectl get pods --all-namespaces | grep ibm-licensing | grep -v operator
Red Hat OpenShift:
oc get pods --all-namespaces | grep ibm-licensing | grep -v operator
The following response is a confirmation of successful deployment:
1/1 Running
Archiving license usage data
Remember to archive the license usage evidence before you decommission the cluster where IBM Sterling Connect:Direct for UNIX Container was deployed. Retrieve the audit snapshot for the period when IBM Sterling Connect:Direct for UNIX Container was on the cluster and store it in case of an audit.
For more information about the licensing solution, see License Service documentation.
Downloading IBM Sterling Connect:Direct for UNIX Container
Before you install IBM Sterling Connect:Direct for UNIX Container, ensure that the installation files are available on your client system.
Depending on whether the cluster has internet access, follow one of the procedures below. Choose the one that best applies to your environment.
Online Cluster
- Create the entitled registry secret: Complete the following steps to create a secret with the entitled registry key value:
  - Ensure that you have obtained the entitlement key that is assigned to your ID.
  - Log in to My IBM Container Software Library by using the IBM ID and password that are associated with the entitled software.
  - In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
  - Save the entitlement key to a safe location for later use.
    To confirm that your entitlement key is valid, click View library on the left side of the page. You can view the list of products that you are entitled to. If IBM Connect:Direct for Unix is not listed, or if the View library link is disabled, the identity with which you are logged in to the container library does not have an entitlement for IBM Connect:Direct for Unix. In this case, the entitlement key is not valid for installing the software.
    Note: For assistance with the Container Software Library (for example, a product not available in the library or a problem accessing your entitlement registry key), contact MyIBM Order Support.
- Set the entitled registry information by completing the following steps:
- Log on to the machine from which the cluster is accessible.
- export ENTITLED_REGISTRY=cp.icr.io
- export ENTITLED_REGISTRY_USER=cp
- export ENTITLED_REGISTRY_KEY=<entitlement_key>
- Optional: Log on to the entitled registry with the following docker login command:
  docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
- Create a Docker-registry secret:
  kubectl create secret docker-registry ibm-entitlement-key \
    --docker-username=$ENTITLED_REGISTRY_USER \
    --docker-password=$ENTITLED_REGISTRY_KEY \
    --docker-server=$ENTITLED_REGISTRY -n <your namespace/project name>
  Note: Create the secret with the name ibm-entitlement-key. This name is required because it is referenced in the service account.
- Update the service account or Helm chart image pull secret configuration by setting the image.imageSecrets parameter to the above secret name.
- Download the Helm chart: Follow the steps below to download the Helm chart from the repository.
  - Make sure that the helm client (CLI) is present on your machine. Run the helm CLI on the machine; you should see the helm CLI usage output.
    helm
  - Check for the ibm-helm repository in your helm CLI:
    helm repo list
    If the ibm-helm repository already exists with the URL https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm, update the local repository; otherwise, add the repository.
  - Update the local repository if the ibm-helm repository already exists in the helm CLI:
    helm repo update
  - Add the Helm chart repository to the local helm CLI if it does not exist:
    helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
  - List the ibm-connect-direct Helm charts available in the repository:
    helm search repo -l ibm-connect-direct
  - Download the latest Helm chart:
    helm pull ibm-helm/ibm-connect-direct
    At this stage, the Helm chart and the entitled registry secret are available locally. Ensure that the Helm chart is configured to use the entitled registry secret so that it can pull the container image required to deploy IBM Connect:Direct for UNIX.
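As a sketch of that configuration, the secret can be passed through the image.imageSecrets parameter at deployment time; the release name, namespace, downloaded chart file name, and any other mandatory chart values are placeholders omitted here.
helm install <release-name> ./ibm-connect-direct-<version>.tgz \
  -n <your namespace/project name> \
  --set image.imageSecrets=ibm-entitlement-key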
Offline (Airgap) Cluster
If your Kubernetes or OpenShift cluster is private, that is, it does not have internet access, follow the procedures below to obtain the installation files, depending on your cluster type.
For Kubernetes Cluster
- Get an RHEL machine that has internet access.
- Download the Helm chart by following the steps mentioned in the Online installation section.
- Extract the downloaded Helm chart:
  tar -zxf <ibm-connect-direct-helm chart-name>
- Get the container image details:
  erRepo=$(grep -w "repository:" ibm-connect-direct/values.yaml | cut -d '"' -f 2)
  erTag=$(grep -w "tag:" ibm-connect-direct/values.yaml | cut -d '"' -f 2)
  erImgTag=$erRepo:$erTag
- This step is optional if you already have a Docker registry running on this machine. Otherwise, create a Docker registry on this machine by following Setting up your registry server.
- Get the Entitled registry entitlement key by following steps a and b explained in Online Cluster under Create the entitled registry section.
- Download the container image into the Docker registry:
  docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
  docker pull $erImgTag
  Note: Skip steps 8, 9, and 10 if the cluster where the deployment will be performed is accessible from this machine and the cluster can fetch container images from the registry running on this machine.
- Save the container image:
  docker save -o <container image file name.tar> $erImgTag
- Copy or transfer the installation files to your cluster. At this point you have both the downloaded container image and the Helm chart for IBM Connect:Direct for UNIX. Transfer these two files to a machine from which you can access your cluster and its registry.
- After transferring the files, load the container image into your registry:
  docker load -i <container image file name.tar>
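After the image is loaded, it typically has to be retagged for your local registry and pushed so that the cluster can pull it; the registry host below is a placeholder assumption.
# Retag the loaded image for the local registry and push it (placeholder registry host)
docker tag $erImgTag myregistry.example.com:5000/${erImgTag##*/}
docker push myregistry.example.com:5000/${erImgTag##*/}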
For Red Hat OpenShift Cluster
If your cluster is not connected to the internet, the deployment can be done in your cluster via connected or disconnected mirroring.
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
Before you begin
Prerequisites
- Red Hat® OpenShift® Container Platform requires you to have cluster admin access to run the deployment.
- A Red Hat® OpenShift® Container Platform cluster must be installed.
Prepare a host
If you are in an air-gapped environment, you must be able to connect a host to the internet and your mirror registry for connected mirroring, or mirror images to a file system that can be brought into the restricted environment for disconnected mirroring. For information on the latest supported operating systems, see the ibm-pak plug-in install documentation.
| Software | Purpose |
|---|---|
| Docker | Container management |
| Podman | Container management |
| Red Hat OpenShift CLI (oc) | Red Hat OpenShift Container Platform administration |
- Install Docker or Podman.
  To install Docker (for example, on Red Hat® Enterprise Linux®), run the following commands:
  Note: If you are installing as a non-root user, you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
  yum check-update
  yum install docker
  To install Podman, see Podman Installation Instructions.
- Install the Red Hat® OpenShift® Container Platform CLI tool (oc).
- Download and install the most recent version of the IBM Catalog Management Plug-in for IBM Cloud Paks from IBM/ibm-pak. Extract the binary file by entering the following command:
  tar -xf oc-ibm_pak-linux-amd64.tar.gz
  Run the following command to move the file to the /usr/local/bin directory:
  Note: If you are installing as a non-root user, you must use sudo. For more information, refer to the Podman or Docker documentation for installing as a non-root user.
  mv oc-ibm_pak-linux-amd64 /usr/local/bin/oc-ibm_pak
  Note: Download the plug-in based on the host operating system. You can confirm that oc ibm-pak is installed by running the following command:
  oc ibm-pak --help
  The plug-in usage is displayed.
For more information on plug-in commands, see command-help.
Your host is now configured and you are ready to mirror your images.
Creating registry namespaces
Top-level namespaces are the namespaces which appear at the root path of your private
registry. For example, if your registry is hosted at myregistry.com:5000,
then mynamespace in myregistry.com:5000/mynamespace is
defined as a top-level namespace. There can be many top-level namespaces.
When the images are mirrored to your private registry, it is required that the top-level namespace where images are getting mirrored already exists or can be automatically created during the image push. If your registry does not allow automatic creation of top-level namespaces, you must create them manually.
When you generate mirror manifests, you can specify the top-level namespace where you want
to mirror the images by setting TARGET_REGISTRY to
myregistry.com:5000/mynamespace which has the benefit of needing to
create only one namespace mynamespace in your registry if it does not allow automatic
creation of namespaces. The top-level namespaces can also be provided in the final registry
by using --final-registry.
If you do not specify your own top-level namespace, the mirroring process will use the ones
which are specified by the CASEs. For example, it will try to mirror the images at
myregistry.com:5000/cp etc.
- cp
There can be more top-level namespaces that you might need to create. See the Generate mirror manifests section for information on how to use the oc ibm-pak describe command to list all the top-level namespaces.
Set environment variables and download CASE files
If your host must connect to the internet via a proxy, you must set environment variables on the machine that accesses the internet via the proxy server.
export https_proxy=http://proxy-server-hostname:port
export http_proxy=http://proxy-server-hostname:port
# Example:
export https_proxy=http://server.proxy.xyz.com:5018
export http_proxy=http://server.proxy.xyz.com:5018
- Create the following environment variables with the installer image name and the version:
  export CASE_NAME=ibm-connect-direct
  export CASE_VERSION=<case-version>
  To find the CASE name and version, see IBM: Product CASE to Application Version.
- Connect your host to the internet.
- The plug-in can detect the locale of your environment and provide help text and messages accordingly. You can optionally set the locale by running the following command:
  oc ibm-pak config locale -l LOCALE
  where LOCALE can be one of de_DE, en_US, es_ES, fr_FR, it_IT, ja_JP, ko_KR, pt_BR, zh_Hans, zh_Hant.
- Configure the plug-in to download CASEs as OCI artifacts from IBM Cloud Container Registry (ICCR):
  oc ibm-pak config repo 'IBM Cloud-Pak OCI registry' -r oci:cp.icr.io/cpopen --enable
- Enable color output (optional with v1.4.0 and later):
  oc ibm-pak config color --enable true
- Download the image inventory for your IBM Cloud Pak to your host.
  Tip: If you do not specify the CASE version, the latest CASE is downloaded.
  oc ibm-pak get $CASE_NAME --version $CASE_VERSION
By default, the root directory used by plug-in is ~/.ibm-pak. This means
that the preceding command will download the CASE under
~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION. You can configure this root
directory by setting the IBMPAK_HOME environment variable. Assuming
IBMPAK_HOME is set, the preceding command will download the CASE under
$IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION.
The log files are available at $IBMPAK_HOME/.ibm-pak/logs/oc-ibm_pak.log.
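For instance, to keep the plug-in data on a larger file system, you might set the root directory before downloading the CASE; the path below is only an example.
export IBMPAK_HOME=/data/ibm-pak        # example path
oc ibm-pak get $CASE_NAME --version $CASE_VERSION
# The CASE is then saved under $IBMPAK_HOME/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION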
Your host is now configured and you are ready to mirror your images.
Mirroring images to your private container registry
The process of mirroring images takes the image from the internet to your host, then effectively copies that image to your private container registry. After you mirror your images, you can configure your cluster and complete air-gapped installation.
- Generate mirror manifests
- Authenticating the registry
- Mirror images to final location
- Configure the cluster
- Install IBM Cloud® Paks by way of Red Hat OpenShift Container Platform
Generate mirror manifests
- If you want to install subsequent updates to your air-gapped environment, you must run a CASE get to get the image list when performing those updates. A registry namespace suffix can optionally be specified on the target registry to group mirrored images.
- Define the environment variable $TARGET_REGISTRY by running the following command:
  export TARGET_REGISTRY=<target-registry>
  The <target-registry> refers to the registry (hostname and port) where your images will be mirrored to and accessed by the oc cluster. For example, setting TARGET_REGISTRY to myregistry.com:5000/mynamespace will create manifests such that images are mirrored to the top-level namespace mynamespace.
- Run the following commands to generate mirror manifests to be used when mirroring from a bastion host (connected mirroring). Example:
  oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION
  ~/.ibm-pak directory structure for connected mirroring: The ~/.ibm-pak directory structure is built over time as you save CASEs and mirror. The following tree shows an example of the ~/.ibm-pak directory structure for connected mirroring:
  tree ~/.ibm-pak
  /root/.ibm-pak
  ├── config
  │   └── config.yaml
  ├── data
  │   ├── cases
  │   │   └── YOUR-CASE-NAME
  │   │       └── YOUR-CASE-VERSION
  │   │           ├── XXXXX
  │   │           ├── XXXXX
  │   └── mirror
  │       └── YOUR-CASE-NAME
  │           └── YOUR-CASE-VERSION
  │               ├── catalog-sources.yaml
  │               ├── image-content-source-policy.yaml
  │               └── images-mapping.txt
  └── logs
      └── oc-ibm_pak.log
  Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping.txt, and catalog-sources.yaml files.
  Tip: If you are using a Red Hat® Quay.io registry and need to mirror images to a specific organization in the registry, you can target that organization by specifying:
  export ORGANIZATION=<your-organization>
  oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY/$ORGANIZATION --version $CASE_VERSION
  If you want to mirror the images to an intermediate registry server before mirroring them to the final registry server, generate the manifests with --final-registry:
  oc ibm-pak generate mirror-manifests \
    $CASE_NAME \
    $INTERMEDIATE_REGISTRY \
    --version $CASE_VERSION \
    --final-registry $FINAL_REGISTRY
In this case, in place of a single mapping file (images-mapping.txt), two mapping files are created.
- images-mapping-to-registry.txt
- images-mapping-from-registry.txt
- Run the following commands to generate mirror manifests to be used when mirroring from a file system (disconnected mirroring). Example:
  oc ibm-pak generate mirror-manifests $CASE_NAME file://local --final-registry $TARGET_REGISTRY
  ~/.ibm-pak directory structure for disconnected mirroring: The following tree shows an example of the ~/.ibm-pak directory structure for disconnected mirroring:
  tree ~/.ibm-pak
  /root/.ibm-pak
  ├── config
  │   └── config.yaml
  ├── data
  │   ├── cases
  │   │   └── ibm-cp-common-services
  │   │       └── 1.9.0
  │   │           ├── XXXX
  │   │           ├── XXXX
  │   └── mirror
  │       └── ibm-cp-common-services
  │           └── 1.9.0
  │               ├── catalog-sources.yaml
  │               ├── image-content-source-policy.yaml
  │               ├── images-mapping-to-filesystem.txt
  │               └── images-mapping-from-filesystem.txt
  └── logs
      └── oc-ibm_pak.log
  Note: A new directory ~/.ibm-pak/mirror is created when you issue the oc ibm-pak generate mirror-manifests command. This directory holds the image-content-source-policy.yaml, images-mapping-to-filesystem.txt, images-mapping-from-filesystem.txt, and catalog-sources.yaml files.
Note on the --filter argument and image grouping: The --filter argument provides the ability to customize which images are mirrored during an air-gapped installation. As an example of this functionality, the ibm-cloud-native-postgresql CASE can be used, which contains groups that allow mirroring a specific variant of ibm-cloud-native-postgresql (Standard or Enterprise). Use the --filter argument to target a variant of ibm-cloud-native-postgresql to mirror rather than the entire library. The filtering can be applied to groups and architectures. Consider the following command:
oc ibm-pak generate mirror-manifests \
ibm-cloud-native-postgresql \
file://local \
--final-registry $TARGET_REGISTRY \
--filter $GROUPS
The command above includes a --filter argument. For example, with $GROUPS equal to ibmEdbStandard, the mirror manifests are generated only for the images associated with ibm-cloud-native-postgresql in its Standard variant. The resulting image group consists of the images in the ibm-cloud-native-postgresql image group as well as any images that are not associated with any group. This allows products to include common images while reducing the number of images that you need to mirror.
To list all the images that will be mirrored and the registries involved, run the following command:
oc ibm-pak describe $CASE_NAME --version $CASE_VERSION --list-mirror-images
The output contains two sections:
- Mirroring Details from Source to Target Registry
- Mirroring Details from Target to Final Registry. A connected mirroring path that does not involve an intermediate registry will only have the first section.
Note down the Registries found subsections in the preceding command output. You will need to authenticate against those registries so that the images can be pulled and mirrored to your local registry. See the next steps on authentication. The Top level namespaces found section shows the list of namespaces under which the images will be mirrored. These namespaces should be created manually under the root path of your registry (shown in the Destination column of the command output) if your registry does not allow automatic creation of namespaces.
Authenticating the registry
Complete the following steps to authenticate your registries:
- Store authentication credentials for all source Docker registries.
  Your product might require one or more authenticated registries. The following registries require authentication:
  - cp.icr.io
  - registry.redhat.io
  - registry.access.redhat.com
  You must run the following command to configure credentials for all target registries that require authentication. Run the command separately for each registry:
  Note: The export REGISTRY_AUTH_FILE command only needs to be run once.
  export REGISTRY_AUTH_FILE=<path to the file which will store the auth credentials generated on podman login>
  podman login <TARGET_REGISTRY>
  Important: When you log in to cp.icr.io, you must specify the user as cp and the password, which is your entitlement key from the IBM Cloud Container Registry. For example:
  podman login cp.icr.io
  Username: cp
  Password:
  Login Succeeded!
For example, if you export REGISTRY_AUTH_FILE=~/.ibm-pak/auth.json, then
after performing podman login, you can see that the file is populated with
registry credentials.
Note: If you use docker login instead, the authentication file is typically located at $HOME/.docker/config.json on Linux or %USERPROFILE%/.docker/config.json on Windows. After docker login, export REGISTRY_AUTH_FILE to point to that location. For example, on Linux you can issue the following command:
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
| Directory | Description |
|---|---|
| ~/.ibm-pak/config | Stores the default configuration of the plug-in and has information about the public GitHub URL from where the CASEs are downloaded. |
| ~/.ibm-pak/data/cases | This directory stores the CASE files when they are downloaded by issuing the oc ibm-pak get command. |
| ~/.ibm-pak/data/mirror | This directory stores the image-mapping files, the ImageContentSourcePolicy manifest in image-content-source-policy.yaml, and the CatalogSource manifest in one or more catalog-sourcesXXX.yaml files. The files images-mapping-to-filesystem.txt and images-mapping-from-filesystem.txt are input to the oc image mirror command, which copies the images to the file system and from the file system to the registry, respectively. |
| ~/.ibm-pak/data/logs | This directory contains the oc-ibm_pak.log file, which captures all the logs generated by the plug-in. |
Mirror images to final location
Complete the steps in this section on your host that is connected to both the local Docker registry and the Red Hat® OpenShift® Container Platform cluster.
- Mirror images to the final location.
  - For mirroring from a bastion host (connected mirroring), mirror images to the TARGET_REGISTRY:
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      --filter-by-os '.*' \
      -a $REGISTRY_AUTH_FILE \
      --insecure \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
    If you generated manifests in the previous steps to mirror images to an intermediate registry server followed by a final registry server, run the following commands:
    - Mirror images to the intermediate registry server:
      oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-registry.txt \
        --filter-by-os '.*' \
        -a $REGISTRY_AUTH_FILE \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true
    - Mirror images from the intermediate registry server to the final registry server:
      oc image mirror \
        -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-registry.txt \
        --filter-by-os '.*' \
        -a $REGISTRY_AUTH_FILE \
        --insecure \
        --skip-multiple-scopes \
        --max-per-registry=1 \
        --continue-on-error=true
    You can run the oc image mirror --help command to see all the options available on the mirror command. Note that --continue-on-error indicates that the command should try to mirror as much as possible and continue on errors.
    Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that the mirroring continues even if the network connection to your remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt:
    nohup oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
      -a $REGISTRY_AUTH_FILE \
      --filter-by-os '.*' \
      --insecure \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true > my-mirror-progress.txt 2>&1 &
    You can view the progress of the mirror by issuing the following command on the remote machine:
    tail -f my-mirror-progress.txt
  - For mirroring from a file system (disconnected mirroring), mirror images to your file system:
    export IMAGE_PATH=<image-path>
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
      --filter-by-os '.*' \
      -a $REGISTRY_AUTH_FILE \
      --insecure \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true \
      --dir "$IMAGE_PATH"
    The <image-path> refers to the local path where the images are stored. For example, if you provided file://local as input during generate mirror-manifests in the previous section, the preceding command creates a subdirectory v2/local inside the directory referred to by <image-path> and copies the images under it.
    You can run oc image mirror --help to see all the options available on the mirror command. Note that --continue-on-error indicates that the command should try to mirror as much as possible and continue on errors.
    Note: Depending on the number and size of images to be mirrored, the oc image mirror command might take a long time. If you are issuing the command on a remote machine, it is recommended that you run the command in the background with nohup so that the mirroring continues even if the network connection to your remote machine is lost or you close the terminal. For example, the following command starts the mirroring process in the background and writes the log to my-mirror-progress.txt:
    export IMAGE_PATH=<image-path>
    nohup oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-to-filesystem.txt \
      --filter-by-os '.*' \
      -a $REGISTRY_AUTH_FILE \
      --insecure \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true \
      --dir "$IMAGE_PATH" > my-mirror-progress.txt 2>&1 &
    You can view the progress of the mirror by issuing the following command on the remote machine:
    tail -f my-mirror-progress.txt
- For disconnected mirroring only: Continue to move the following items to your
file system:
  - The <image-path> directory you specified in the previous step
  - The auth file referred to by $REGISTRY_AUTH_FILE
  - ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt
- For disconnected mirroring only: Mirror images to the target registry from the file system.
  Complete the steps in this section on the host where your file system resides; this host must be connected to the target Docker registry so that the images can be copied from the file system to the $TARGET_REGISTRY.
  Important: If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the images-mapping-from-filesystem.txt file with the actual registry where you want to mirror the images. For example, if you want to mirror images to myregistry.com/mynamespace, replace TARGET_REGISTRY with myregistry.com/mynamespace.
  - Run the following command to copy the images (referred to in the images-mapping-from-filesystem.txt file) from the directory referred to by <image-path> to the final target registry:
    export IMAGE_PATH=<image-path>
    oc image mirror \
      -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping-from-filesystem.txt \
      -a $REGISTRY_AUTH_FILE \
      --from-dir "$IMAGE_PATH" \
      --filter-by-os '.*' \
      --insecure \
      --skip-multiple-scopes \
      --max-per-registry=1 \
      --continue-on-error=true
Configure the cluster
- Update the global image pull secret for your Red Hat OpenShift cluster. Follow the steps
in Updating the global cluster pull secret.
The documented steps in the link enable your cluster to have proper authentication credentials in place to pull images from your
TARGET_REGISTRY, as specified in the image-content-source-policy.yaml that you will apply to your cluster in the next step.
- Create the ImageContentSourcePolicy. Important:
  - Before you run the command in this step, you must be logged in to your OpenShift cluster. Using the oc login command, log in to the Red Hat OpenShift Container Platform cluster where your final location resides. You can identify your specific oc login command by clicking the user drop-down menu in the Red Hat OpenShift Container Platform console, then clicking Copy Login Command.
  - If you used the placeholder value of TARGET_REGISTRY as a parameter to --final-registry at the time of generating mirror manifests, then before running the following command, find and replace the placeholder value of TARGET_REGISTRY in the file ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml with the actual registry where you want to mirror the images. For example, replace TARGET_REGISTRY with myregistry.com/mynamespace.
  Run the following command to create the ImageContentSourcePolicy:
  oc apply -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/image-content-source-policy.yaml
  If you are using Red Hat OpenShift Container Platform version 4.7 or earlier, this step might cause your cluster nodes to drain and restart sequentially to apply the configuration changes.
- Verify that the ImageContentSourcePolicy resource is created:
  oc get imageContentSourcePolicy
- Verify your cluster node status and wait for all the nodes to be restarted before proceeding:
  oc get MachineConfigPool -w
  NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
  master   rendered-master-53bda7041038b8007b038c08014626dc   True      False      False      3              3                   3                     0                      10d
  worker   rendered-worker-b54afa4063414a9038958c766e8109f7   True      False      False      3              3                   3                     0                      10d
  After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are in the UPDATED=True status before proceeding.
- Go to the project where the deployment has to be done. Note: You must be logged in to a cluster before performing the following steps.
  export NAMESPACE=<YOUR_NAMESPACE>
  oc new-project $NAMESPACE
- Optional: If you use an insecure registry, you must add the target registry to the cluster insecureRegistries list:
  oc patch image.config.openshift.io/cluster --type=merge -p '{"spec":{"registrySources":{"insecureRegistries":["'${TARGET_REGISTRY}'"]}}}'
- Verify your cluster node status and wait for all the nodes to be restarted before proceeding:
  oc get MachineConfigPool -w
  After the ImageContentSourcePolicy and global image pull secret are applied, the configuration of your nodes is updated sequentially. Wait until all MachineConfigPools are updated.
  At this point your cluster is ready for the IBM Connect:Direct for UNIX deployment. The Helm chart is present in the ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz directory. Use it for the deployment, and copy it to the current directory:
  cp ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION/charts/ibm-connect-direct-1.2.x.tgz .
  Note: Replace the version information in the above command with the actual chart version.
- Configuration required in the Helm chart: To use image mirroring in an OpenShift cluster, the Helm chart must be configured to use the digest value to refer to the container image. Set image.digest.enabled to true in the values.yaml file or pass this parameter using the Helm CLI.
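For example, the digest setting might be passed on the Helm command line at install time; the release name and the other parameters required by the chart are placeholders omitted here.
helm install <release-name> ./ibm-connect-direct-1.2.x.tgz \
  -n $NAMESPACE \
  --set image.digest.enabled=true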
Setting up a repeatable mirroring process
Once you complete a CASE save, you can mirror the CASE as
many times as you want to. This approach allows you to mirror a specific version of the IBM
Cloud Pak into development, test, and production stages using a private container
registry.
Follow the steps in this section if you want to save the CASE to multiple
registries (per environment) once and be able to run the CASE in the future
without repeating the CASE save process.
- Run the following command to save the CASE to ~/.ibm-pak/data/cases/$CASE_NAME/$CASE_VERSION, which can be used as an input during the mirror manifest generation:
  oc ibm-pak get $CASE_NAME --version $CASE_VERSION
- Run the oc ibm-pak generate mirror-manifests command to generate the images-mapping.txt file:
  oc ibm-pak generate mirror-manifests $CASE_NAME $TARGET_REGISTRY --version $CASE_VERSION
  Then pass the images-mapping.txt file to the oc image mirror command:
  oc image mirror \
    -f ~/.ibm-pak/data/mirror/$CASE_NAME/$CASE_VERSION/images-mapping.txt \
    --filter-by-os '.*' \
    -a $REGISTRY_AUTH_FILE \
    --insecure \
    --skip-multiple-scopes \
    --max-per-registry=1 \
    --continue-on-error=true
If you want to make this repeatable across environments, you can reuse the same saved
CASE cache (~/.ibm-pak/$CASE_NAME/$CASE_VERSION) instead of executing a
CASE save again in other environments. You do not have to worry about
updated versions of dependencies being brought into the saved cache.
Image Signature Verification
To verify that only IBM-signed images are pulled into the cluster’s registry, set up image signature verification. This step is optional.
Prerequisites
- Access to Entitled Registry (ER)
- The
skopeopackage is installed. Refer to this https://github.com/containers/skopeo for installation guidance. - Access to the destination image repository where images will be pulled.
- Download and Extract Key and Certificates:
- Download the ZIP file and extract its contents. To download the ZIP file, click here.
-
- Extracted Files:
connectdirectkey.pub.gpg– Public key for verifying the image signature
Verifying the Image
- Automatic Signature Verification:
  - Policy Configuration: Create or update the policy file located at /etc/containers/policy.json and set it according to the following configuration:
    { "default": [ { "type": "reject" } ], "transports": { "docker": { "cp.icr.io/cp/ibm-connectdirect": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/path/to/connectdirectkey.pub.gpg" } ] } } }
    Note: For unsigned images, set "type":"insecureAcceptAnything" for the IBM Production Entitled Registry in the /etc/containers/policy.json file.
  - Pull the Image: Use the following command to pull images to your local registry:
    skopeo copy docker://cp.icr.io/cp/ibm-connectdirect/<imagename>:<tag> docker://<local_repository>:<tag> --src-creds iamapikey:key --dest-creds username:password
- Manual Signature Verification:
  - Copy the Public Key to the local filesystem for verification: connectdirectkey.pub.gpg.
  - Import the Public Key into the GPG keystore:
    sudo gpg --import path/to/connectdirectkey.pub.gpg
  - Get the fingerprint for the imported key:
    sudo gpg -k
    or
    export FINGERPRINT=$(gpg --fingerprint --with-colons | grep fpr | tr -d 'fpr:')
  - Pull the Image Locally:
    skopeo copy docker://cp.icr.io/cp/ibm-connectdirect/<imagename>:<tag> dir:<imagedir> --src-creds="iamapikey:key"
  - Verify the Signature:
    skopeo standalone-verify <imagedir>/manifest.json <local_image_reference/repo:tag> <gpgkeyfingerprint> <imagedir>/signature
  - Expected Result:
    Signature verified, digest sha256:<xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
- Certificate Verification
- Compare the Certificate and Public Key:
  - Display the certificate details:
    openssl x509 -text -in connectdirectkey.pem.cer   # shows the certificate details, for example that it is signed by IBM and DigiCert
  - Display the public key details:
    gpg -v --list-packets connectdirectkey.pub.gpg   # shows the public key details
  - Modulus and Exponent Verification:
    Certificate:
    Modulus: 00:cf:61:02:67:1b:90:09:34:0b:be:f8:8b:16:2f:5a:73:57:ab:02:a2:42:a3:05:ee:9b:ec:40:8c:b7:
    Exponent: 65537 (0x10001)
    Public key:
    pkey[0]: CF6102671B9009340BBEF88B162F5A7357AB02A242A305EE9B
    pkey[1]: 010001
    Ensure that the public key modulus and exponent match the certificate's details.
  - Certificate Validity Check:
    openssl ocsp -no_nonce -issuer CertKeyAlias.pem.chain -cert CertKeyAlias.pem.cer -VAfile CertKeyAlias.pem.chain -text -url http://ocsp.digicert.com -respout ocsptest   # check whether the certificate is still valid
    Note: The certificate is refreshed every two years.
Applying Pod Security Standard on a Kubernetes Cluster
A Pod Security Standard should be applied to the namespace if a Kubernetes cluster at v1.25 or later is used. This Helm chart has been certified with the baseline security standard at the enforce level. For more details, refer to Pod Security Standards.
In Standard Privilege Mode (SPM), this Helm chart complies with the restricted
security standard and enforces security settings. This ensures that all workloads deploy
using the restricted-v2 Pod Security Standard, providing a higher
level of security and isolation.
kubectl label namespace <namespace_name> \
  pod-security.kubernetes.io/enforce=restricted
Creating security context constraints for Red Hat OpenShift Cluster
- The IBM Connect:Direct for Unix chart requires a SecurityContextConstraints (SCC) resource to be bound to the target namespace prior to deployment.
  Based on your organization's security policy, you may need to decide which security context constraints to use for your OpenShift cluster. This chart has been verified with the privileged SCC that comes with Red Hat OpenShift. For more information, please refer to this link.
  IBM Sterling Connect:Direct for UNIX Container requires a custom SCC that defines the minimum set of permissions and capabilities needed to deploy this Helm chart and for the Connect:Direct for Unix services to function properly. It is based on the predefined restricted SCC with extra required privileges. This is the recommended SCC for this chart, and it can be created by the cluster administrator. The cluster administrator can either use the snippets given below or the scripts provided in the Helm chart to create the SCC and cluster role and bind them to the project where the deployment will be performed. In both cases, the same SCC and cluster role are created. It is recommended to use the scripts in the Helm chart so that the required SCC and cluster role are created without any issue.
  Attention: If the Standard User Mode feature is enabled, the SCC is slightly different. For more information, refer to the SCC below.
- In Standard Privilege Mode (SPM), setting
MinimallySufficientPodSecurityStandard=restrictedensures that workloads deploy using therestricted-v2Pod Security Standard, which provides a higher level of security.Apply the restricted policy to a namespace by running the following command:oc annotate namespace <namespace_name> \ security.openshift.io/MinimallySufficientPodSecurityStandard=restricted \ --overwrite - When deploying this Helm chart on OpenShift, configure the namespace with UID and GID
ranges that match the security values defined in the chart’s
values.yaml. This ensures that the workload runs successfully under OpenShift Security Context Constraints (SCCs).Values from thevalues.yamlfile:
The UID and GID ranges for the namespace must include UID/GID 45678, which is the cdadmin user’s UID/GID.storageSecurity: fsGroup: 45678 supplementalGroups: [65534] runAsUser: 45678 runAsGroup: 45678Ensure that the namespace includes the required UID and GID ranges. For example, run:oc annotate namespace <namespace_name> \ openshift.io/sa.scc.uid-range="40000/10000" \ openshift.io/sa.scc.supplemental-groups="40000/10000" \ --overwriteThis range (
40000–49999) covers the user and group IDs used in the chart.Verify the namespace configuration by running:oc describe ns <namespace_name>Check that the UID and GID ranges include the
runAsUser,runAsGroup, andfsGroupvalues from thevalues.yamlfile. - Below is the Custom
SecurityContextConstraints(SCC) snippet for Connect Direct for Unix operating in Standard User Mode. Fore more information, refer to Standard User Mode in IBM Sterling Connect:Direct for UNIX Container.apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: ibm-connect-direct-scc labels: app: "ibm-connect-direct-scc" allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegedContainer: false allowPrivilegeEscalation: true allowedCapabilities: - SETUID - SETGID - DAC_OVERRIDE - AUDIT_WRITE defaultAddCapabilities: [] defaultAllowPrivilegeEscalation: false forbiddenSysctls: - "*" fsGroup: type: MustRunAs ranges: - min: 1 max: 4294967294 readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsNonRoot seLinuxContext: type: MustRunAs supplementalGroups: type: MustRunAs ranges: - min: 1 max: 4294967294 volumes: - configMap - downwardAPI - emptyDir - nfs - persistentVolumeClaim - projected - secret priority: 0 - Below is the Custom
SecurityContextConstraintssnippet for Connect Direct for Unix operating in Super User Mode.apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: ibm-connect-direct-scc labels: app: "ibm-connect-direct-scc" allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegedContainer: false allowPrivilegeEscalation: true allowedCapabilities: - FOWNER - SETUID - SETGID - DAC_OVERRIDE - CHOWN - SYS_CHROOT - AUDIT_WRITE defaultAddCapabilities: [] defaultAllowPrivilegeEscalation: false forbiddenSysctls: - "*" fsGroup: type: MustRunAs ranges: - min: 1 max: 4294967294 readOnlyRootFilesystem: false requiredDropCapabilities: - ALL runAsUser: type: MustRunAsNonRoot seLinuxContext: type: MustRunAs supplementalGroups: type: MustRunAs ranges: - min: 1 max: 4294967294 volumes: - configMap - downwardAPI - emptyDir - nfs - persistentVolumeClaim - projected - secret priority: 0
- Custom ClusterRole for the custom
SecurityContextConstraintsapiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: "ibm-connect-direct-scc" labels: app: "ibm-connect-direct-scc" rules: - apiGroups: - security.openshift.io resourceNames: - ibm-connect-direct-scc resources: - securitycontextconstraints verbs: - use
- From the command line, you can run the setup scripts included in the Helm chart (untar the downloaded Helm chart archive):
  ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh <pass 0 or 1 to disable/enable OUM feature>
  ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh <Project name where deployment will be performed>
  Note: If the above scripts are not executable, make them executable by running the following commands:
  chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/namespaceAdministration/createSecurityNamespacePrereqs.sh
  chmod u+x ibm-connect-direct/ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/createSecurityClusterPrereqs.sh
Creating storage for Data Persistence
- Kubernetes: Persistent Volumes
- Red Hat OpenShift: Persistent Volume Overview
- Azure: File Persistent Volume and Disk Persistent Volume
Note: On AWS, Elastic File System (EFS) volumes are provisioned through the EFS CSI driver (efs.csi.aws.com). The EFS CSI driver assigns a new GID for each Persistent Volume (PV), which may cause permission issues (for example, a GID that does not match 45678). To prevent this, use static provisioning to ensure consistent GID allocation.
Connect:Direct for UNIX Container supports:
- Dynamic Provisioning using storage classes
- Pre-created Persistent Volume
- Pre-created Persistent Volume Claim
- The only supported access mode is `ReadWriteOnce`
Dynamic Provisioning
- persistence.useDynamicProvisioning - It must be set to true. By default, it is set to false, which means dynamic provisioning is disabled.
- pvClaim.storageClassName - The storage class is blank by default. Update this parameter with a valid storage class. Consult your cluster administrator for the storage classes available for this chart.
- secret.certSecretName - Specify the certificate secret required for Secure Plus configuration or LDAP support. Update this parameter with a valid certificate secret. Refer to Creating secret for more information.
Invoke the following command to create the Storage Class:
kubectl apply -f <StorageClass yaml file>
Storage Class Azure Kubernetes Cluster
In Azure, we support Disk Persistent Volumes for dynamic
provisioning. The default storage class used for deployment is
managed-premium.
Disk Storage Class in Azure typically refers to block storage, such as Azure Managed Disks. This storage type is persistent and commonly attached to a single node, meaning it is not generally shared across multiple nodes or instances.
Node Affinity for Single-Node Scheduling: To enable disk sharing by scheduling all pods on a single node, configure Node Affinity in the deployment.
Update the affinity section in the values.yaml file as shown below:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- aks-agentpool-15605297-vmss00000r
Explanation of matchExpressions: The
matchExpressions section defines the node label requirements for pod
scheduling. The key corresponds to the label's key, while
values includes the allowed values for that key. In this example, the pod
is scheduled to the node with the label kubernetes.io/hostname and the
value aks-agentpool-15605297-vmss00000r.
You can create a Storage Class to support dynamic provisioning. For different cloud storage
classes, refer to the sample YAML files available in the Helm chart path:
./ibm_cloud_pak/pak_extensions/pre-install/volume
This Storage Class uses the provisioner file.csi.azure.com, with
skuName set to Premium_LRS and protocol
set to nfs.
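A minimal sketch of such a StorageClass, assuming the file.csi.azure.com provisioner and Premium_LRS SKU described above (the class name azurefile-csi-nfs is an example; the sample YAML files shipped in the Helm chart remain the reference):
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs        # example name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS           # Premium SKU is required for NFS
  protocol: nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF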
Note: Keep skuName set to Premium_LRS in the YAML file because the Premium SKU is required to support NFS. For more information, see Storage class parameters for dynamic PersistentVolumes.
Non-Dynamic Provisioning
Non-Dynamic Provisioning is supported using a pre-created Persistent Volume and a pre-created Persistent Volume Claim. The storage volume should contain the Connect:Direct for UNIX Secure Plus certificate files to be used for installation. Create a directory named "CDFILES" inside the mount path and place the certificate files in that directory. Similarly, the LDAP certificates should be placed in the same directory.
Using a pre-created Persistent Volume: When creating the Persistent Volume, make a note of the storage class and metadata labels, which are required to configure the Persistent Volume Claim's storage class and label selector during deployment. This ensures that the claim is bound to the Persistent Volume based on a label match. These labels can be passed to the Helm chart either by the --set flag or a custom values.yaml file. The parameters defined in values.yaml for the label name and its value are pvClaim.selector.label and pvClaim.selector.value, respectively.
kind: PersistentVolume
apiVersion: v1
metadata:
name: <persistent volume name>
labels:
app.kubernetes.io/name: <persistent volume name>
app.kubernetes.io/instance: <release name>
app.kubernetes.io/managed-by: <service name>
helm.sh/chart: <chart name>
release: <release name>
purpose: cdconfig
spec:
storageClassName: <storage classname>
capacity:
storage: <storage size>
accessModes:
- ReadWriteOnce
nfs:
server: <NFS server IP address>
  path: <mount path>
Kubernetes:
kubectl create -f <persistentVolume yaml file>
OpenShift:
oc create -f <persistentVolume yaml file>
Using a pre-created Persistent Volume Claim (PVC): An existing PVC can also be used for deployment. The PV backing the PVC should contain the certificate files required for Connect:Direct for UNIX Secure Plus or LDAP TLS configuration. The parameter for a pre-created PVC is pvClaim.existingClaimName. You must pass a valid PVC name to this parameter; otherwise, the deployment will fail.
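As an illustration, a pre-created claim might look like the following sketch; the claim name, storage class, size, and selector label are placeholders that must match your pre-created Persistent Volume and the pvClaim.* values passed to the chart.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cd-config-pvc              # pass this name in pvClaim.existingClaimName
spec:
  accessModes:
    - ReadWriteOnce                # the only supported access mode
  storageClassName: <storage classname>
  resources:
    requests:
      storage: <storage size>
  selector:
    matchLabels:
      purpose: cdconfig            # must match the label on the pre-created PV
EOF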
Apart from the required Persistent Volume, you can bind extra storage mounts using the parameters provided in values.yaml. These parameters are extraVolume and extraVolumeMounts. The extra volume can be a host path or an NFS type.
- <install_dir>/work
- <install_dir>/ndm/security
- <install_dir>/ndm/cfg
- <install_dir>/ndm/secure+
- <install_dir>/process
- <install_dir>/file_agent/config
- <install_dir>/file_agent/log
Setting permission on storage
- Option A: The easiest, though not recommended, solution is to have open permissions on the NFS
exported directories.
chmod -R 777 <path-to-directory> - Option B: Alternatively, the permissions can be controlled at group level leveraging the supplementalGroups and fsGroup setting. For example - if we want to add GID to supplementalGroups or fsGroup, it can be done using storageSecurity.supplementalGroups or storageSecurity.fsGroup.
Root Squash NFS support
When root squash is enabled on the NFS server, the root user in the container is mapped to the nfsnobody or nobody user on the system, so you cannot perform operations such as changing the ownership of files or directories. The Connect:Direct for UNIX Helm chart can be deployed on root-squash NFS; the files and directories mounted in the container are owned by nfsnobody or nobody. The POSIX group ID of the root-squash NFS share should be added to the StatefulSet's supplemental group list using storageSecurity.supplementalGroups in the values.yaml file. Similarly, if an extra NFS share is mounted, proper read/write permission can be provided to the container user only through supplemental groups.
Creating secret
Creating secrets using kubernetes resources
Passwords are used for the KeyStore, by the Administrator to connect to the Connect:Direct server, and to decrypt certificate files.
To separate application secrets from the Helm Release, a Kubernetes secret must be created
based on the examples given below and be referenced in the Helm chart as
secret.secretName value.
- Create a template file with the Secret defined as described in the following example:
  apiVersion: v1
  kind: Secret
  metadata:
    name: <secret name>
  type: Opaque
  data:
    admPwd: <base64 encoded password>
    crtPwd: <base64 encoded password>
    keyPwd: <base64 encoded password>
    appUserPwd: <base64 encoded password>
  Here:
  - admPwd refers to the password that will be set for the Admin user 'cdadmin' after a successful deployment.
  - crtPwd refers to the passphrase of the identity certificate file passed in cdArgs.crtName for the Secure Plus configuration.
  - keyPwd refers to the Key Store password.
  - appUserPwd refers to the password for a non-admin Connect:Direct user. The password for this user is mandatory for IBM Connect:Direct for UNIX operating in Standard User Mode (SUM).
  After the secret is created, delete the yaml file for security reasons.
  Note: Base64 encoded passwords need to be generated manually by invoking the following command. Use the output of this command in the <secret yaml file>:
  echo -n "<your desired password>" | base64
Secret:Kubernetes:
kubectl create -f <secret yaml file>OpenShift:oc create -f <secret yaml file>To check the secret created invoke the following command:kubectl get secretsFor more details see, Secrets.
Default Kubernetes secrets management has certain security risks as documented here, Kubernetes Security.
Users should evaluate Kubernetes secrets management based on their enterprise policy requirements and should take steps to harden security.
- For dynamic provisioning, one more secret resource needs to be created for all certificates (Secure Plus certificates and LDAP certificates). It can be created as in the following example:
  Kubernetes:
  kubectl create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
  OpenShift:
  oc create secret generic cd-cert-secret --from-file=certificate_file1=/path/to/certificate_file1 --from-file=certificate_file2=/path/to/certificate_file2
  Note:
  - The secret resource name created above should be referenced in the Helm chart for dynamic provisioning using the secret.certSecretName parameter.
  - For the K8s secret object creation, ensure that the certificate files being used contain the identity certificate. Configure the cdArgs.crtName parameter with the certificate file that has the appropriate file extension corresponding to the identity certificate.
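To illustrate how these values tie together, the secret names and identity certificate file might be passed to the chart as follows; the release name, namespace, and the file name cdcert.pem are placeholders only.
helm install <release-name> ibm-helm/ibm-connect-direct \
  -n <namespace> \
  --set secret.secretName=<secret name> \
  --set secret.certSecretName=cd-cert-secret \
  --set cdArgs.crtName=cdcert.pem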
Creating secrets using External Secret Operator
IBM Sterling Connect:Direct for UNIX Container deployments support secure secret management. This feature enables the Connect:Direct for UNIX Container to securely retrieve and use secrets, such as API keys, credentials, and tokens, at runtime from supported cloud-based secret management solutions.
Supported Secret Providers:
- HashiCorp Vault
- AWS Secrets Manager
- Azure Key Vault
This integration allows for centralized and secure management of sensitive configuration data, helping to eliminate the need to hardcode secrets or store them in configuration files.
In the following section, you’ll learn how to configure and integrate these secret managers with your CDU container deployment.
HashiCorp Vault Integration with External Secrets:
- Create OpenShift projects and add Helm
repositories:
oc new-project external-secrets helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/oc new-project vault helm repo add hashicorp https://helm.releases.hashicorp.com - Switch to the Vault project and deploy Vault using
Helm:
oc project vaulthelm upgrade -i -n vault vault hashicorp/vault \ --set global.openshift=true \ --set server.dev.enabled=true \ --set injector.enabled=false \ --set server.image.repository=docker.io/hashicorp/vault - Configure Vault:
- Execute a remote shell into the Vault pod:
oc rsh vault-0
- Enable Kubernetes authentication inside the pod:
vault auth enable kubernetes
- Configure Vault to use the Kubernetes service account token and cluster CA:
vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  issuer="https://kubernetes.default.svc"
- Store secrets in Vault as key-value pairs:
vault kv put secret/cd-secret admPwd="newpass_val" \
  appUserPwd="password_val" \
  crtPwd="password_val" \
  keyPwd="password_val"
- Create a policy to allow read access to the secret:
vault policy write pmodemo - << EOF
path "secret/data/cd-secret" {
  capabilities = ["read"]
}
EOF
- Create Kubernetes roles binding service accounts to the policy:
vault write auth/kubernetes/role/pmodemo1 \
  bound_service_account_names=vault \
  bound_service_account_namespaces=vault \
  policies=pmodemo ttl=60m
vault write auth/kubernetes/role/pmodemo \
  bound_service_account_names=external-secrets-kubernetes-external-secrets \
  bound_service_account_namespaces=external-secrets \
  policies=pmodemo ttl=60m
- Verify Vault authentication:
- From the Vault-accessing pod, export the service account token and authenticate:
OCP_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
wget --no-check-certificate -q -O- --post-data '{"jwt": "'"$OCP_TOKEN"'", "role": "pmodemo1"}' http://vault:8200/v1/auth/kubernetes/login
- A valid token response confirms that Vault is configured and authentication works.
- Install and configure the External Secrets operator:
oc project external-secrets
helm upgrade -i -n external-secrets external-secrets external-secrets/kubernetes-external-secrets \
  --set env.VAULT_ADDR=http://vault.vault.svc:8200
- Create an ExternalSecret manifest (extsecret1.yml):
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: cd-secret
  namespace: vault
spec:
  backendType: vault
  data:
    - key: secret/data/cd-secret
      name: admPwd
      property: admPwd
    - key: secret/data/cd-secret
      name: appUserPwd
      property: appUserPwd
    - key: secret/data/cd-secret
      name: crtPwd
      property: crtPwd
    - key: secret/data/cd-secret
      name: keyPwd
      property: keyPwd
  vaultMountPoint: kubernetes
  vaultRole: pmodemo
Apply it with:
oc create -f extsecret1.yml
- Verify the created Kubernetes secret:
kubectl -n vault get secret cd-secret -o yaml
- Usage: Once created, this secret can be referenced in your Helm charts or deployments as needed.
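For example (a sketch; the release name and chart file are illustrative, and the secret must exist in the namespace where the chart is deployed), the generated secret can be supplied to the Connect:Direct Helm chart through the secret.secretName parameter:
helm install my-release \
  --set license=true,secret.secretName=cd-secret \
  ... ibm-connect-direct-1.4.x.tgz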
For further reference, see https://www.redhat.com/en/blog/external-secrets-with-hashicorp-vault.
AWS Secrets Manager Integration with Amazon EKS using External Secrets Operator (ESO)
- Install External Secrets Operator (ESO):
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace \
  --set installCRDs=true
kubectl get pods -n external-secrets
- Create IAM Policy for Secrets Access: Create a policy with the following permissions and attach it to the role used by ESO:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:ListKeys", "kms:ListAliases", "secretsmanager:ListSecrets" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:DescribeKey" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret", "secretsmanager:ListSecretVersionIds" ], "Resource": "*", "Condition": { "StringEquals": { "secretsmanager:ResourceTag/ekssecret": "${aws:PrincipalTag/ekssecret}" } } } ] } - Create Trust Relationship for EC2 Role:
{ "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" } - Create IAM ServiceAccount (IRSA) for
ESO:
eksctl create iamserviceaccount \ --name <ESO-service-account> \ --namespace <ESO-namespace> \ --cluster <cluster-name> \ --role-name <IRSA-name> \ --attach-policy-arn <policy-arn> \ --override-existing-serviceaccounts \ --approve - Configure Trust Policy for OIDC:
aws eks describe-cluster \ --name <cluster-name> \ --query "cluster.identity.oidc.issuer" \ --output textUpdate your trust policy (trust-policy.json) with the OIDC provider and ESO service account:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<account-id>:oidc-provider/<OIDC-provider-url>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<OIDC-provider-url>:sub": "system:serviceaccount:external-secrets:external-secrets-sa" } } } ] }Apply the trust policy:aws iam update-assume-role-policy \ --role-name <role-name> \ --policy-document file://trust-policy.json - Annotate the ServiceAccount with IAM Role
ARN:
oc annotate serviceaccount external-secrets-sa \ eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/<role-name> \ -n external-secrets --overwrite - Create and Reference Secrets in AWS Secrets Manager:
Create your required secrets in AWS Secrets Manager. Ensure tagging aligns with IAM policy conditions.
- Create and Apply SecretStore: Sample SecretStore.yaml:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager
  namespace: external-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      sessionTags:
        - key: key
          value: value
Apply the SecretStore.yaml using:
oc apply -f SecretStore.yaml
- Create and Apply ExternalSecret: Sample ExternalSecret.yaml:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cd-secret
  namespace: external-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: cd-secret
    creationPolicy: Owner
  data:
    - secretKey: admPwd
      remoteRef:
        key: password
        property: admPwd
    - secretKey: appUserPwd
      remoteRef:
        key: password
        property: appUserPwd
    - secretKey: crtPwd
      remoteRef:
        key: password
        property: crtPwd
    - secretKey: keyPwd
      remoteRef:
        key: password
        property: keyPwd
Apply the ExternalSecret.yaml using:
oc apply -f ExternalSecret.yaml
Note:
- Ensure that all resources are in the same namespace (external-secrets).
- Verify the correct AWS region and IAM permissions.
- Your Helm charts can now reference these ExternalSecrets.
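The following AWS CLI command is one way the secret referenced above (remoteRef key password with the admPwd, appUserPwd, crtPwd, and keyPwd properties) might be created and tagged so that it satisfies the ekssecret tag condition in the IAM policy; the tag value and password values are placeholders:
aws secretsmanager create-secret \
  --name password \
  --secret-string '{"admPwd":"<password>","appUserPwd":"<password>","crtPwd":"<passphrase>","keyPwd":"<password>"}' \
  --tags Key=ekssecret,Value=<tag-value> \
  --region us-east-1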
For further reference, see https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-and-abac-for-enhanced-secrets-management-in-amazon-eks/.
Azure Key Vault with Secrets Store CSI Driver on AKS
This section describes how to securely retrieve secrets from Azure Key Vault using the Secrets Store CSI Driver on Azure Kubernetes Service (AKS).
The overall setup involves the following:
- AKS cluster created
- Azure Key Vault created
- AKS–Key Vault connection established using Service Connector
- SecretProviderClass and pod created
- Create Azure Key Vault: Use the Azure CLI or Azure Portal:
az keyvault create --resource-group MyResourceGroup --name MyKeyVault --location EastUS
- Add Secrets to Key Vault: Use the Azure CLI or Portal:
az keyvault secret set --vault-name MyKeyVault --name ExampleSecret --value MyAKSExampleSecret
- Connect AKS to Key Vault via Service Connector: Using the Azure Portal:
- Navigate to your AKS resource.
- Select Service Connector.
- Click Create and configure.
- After creation, view the connection details.
Note: Ensure that you have the necessary permissions to create this connection.
- Create a SecretProviderClass: Save the following sample YAML as secret_provider_class.yaml:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: sc-demo-keyvault-csi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: <AZURE_KEYVAULT_CLIENTID>
    keyvaultName: <AZURE_KEYVAULT_NAME>
    objects: |
      array:
        - |
          objectName: <KEYVAULT_SECRET_NAME>
          objectType: secret
    tenantId: <AZURE_KEYVAULT_TENANTID>
Replace the placeholders with actual values:
- <AZURE_KEYVAULT_NAME> → Your Key Vault name
- <AZURE_KEYVAULT_TENANTID> → Tenant ID (from Service Connector)
- <AZURE_KEYVAULT_CLIENTID> → Managed Identity Client ID (from Service Connector)
- <KEYVAULT_SECRET_NAME> → Name of the secret in Key Vault
- Deploy the SecretProviderClass:
kubectl apply -f secret_provider_class.yaml
- Reference the Secret in a Pod: Add the following to your pod specification:
volumes:
  - name: cd-secret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "{{ .Values.secret.secretName }}"
- Verify the Configuration:
kubectl get secretproviderclass
kubectl get secretproviderclass <name> -o yaml
For more details, refer to the following URL:
Configuring
Understanding values.yaml
The values.yaml file in the Helm chart is used to complete the installation. The following table describes the configurable parameters and their default values.
| Parameter | Description | Default Value |
|---|---|---|
| licenseType | Specify prod or non-prod for production or non-production license type respectively | prod |
| license | License agreement. Set true to accept the license. | false |
| env.extraEnvs | Specify extra environment variable if needed | |
| env.timezone | Timezone | UTC |
| arch | Node Architecture | amd64 |
| replicaCount | Number of deployment replicas | 1 |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| digest.enabled | Enable/Disable digest of image to be used | false |
| digest.value | The digest value for the image | |
| image.imageSecrets | Image pull secrets | |
| image.pullPolicy | Image pull policy | Always |
| upgradeCompCheck | This parameter is intended to acknowledge a change in the system username within the container. Acknowledging this change is crucial before proceeding with the upgrade. | false |
| cdArgs.nodeName | Node name | cdnode |
| cdArgs.crtName | Certificate file name used for the Secure Plus configuration. For non-production container deployments, a self-signed certificate is used if a signed certificate is not specified in the values.yaml file under cdArgs.crtName during deployment. | |
| cdArgs.localCertLabel | Specify certificate import label in keystore | Client-API |
| cdArgs.cport | Client Port | 1363 |
| cdArgs.sport | Server Port | 1364 |
| saclConfig | Configuration for SACL | n |
| userNetmapConfigmap | Set this field to reference a ConfigMap containing the user and netmap configuration. You can create the ConfigMap with the kubectl create configmap command. | "" (empty) |
| cdArgs.configDir | Directory for storing Connect:Direct configuration files | CDFILES |
| cdArgs.trustedAddr | Trusted addresses | [] |
| cdArgs.keys.server | Server keys | * MRLN SIMP Cd4Unix/Cd4Unix |
| cdArgs.keys.client | Client keys | * MRLN SIMP Cd4Unix/Cd4Unix |
| spm.enabled | Enable/Disable the Standard Privilege Mode feature in the container | n |
| sum.enabled | Enable/Disable the Ordinary User Mode feature | y |
| statLog.stdout | Enable/Disable logging of statistics to stdout in the container; configure 'y' or 'n' as required. | n |
| storageSecurity.fsGroup | Group ID for the file system group | 45678 |
| storageSecurity.supplementalGroups | Group ID for the supplemental group | 65534 |
| persistence.enabled | To use persistent volume | true |
| pvClaim.existingClaimName | Provide name of existing PV claim to be used | |
| persistence.useDynamicProvisioning | To use storage classes to dynamically create PV | false |
| pvClaim.accessMode | Access mode for PV Claim | ReadWriteOnce |
| pvClaim.storageClassName | Storage class of the PVC | |
| pvClaim.selector.label | PV label key to bind this PVC | |
| pvClaim.selector.value | PV label value to bind this PVC | |
| pvClaim.size | Size of PVC volume | 100Mi |
| service.type | Kubernetes service type exposing ports | LoadBalancer |
| service.apiport.name | API port name | api |
| service.apiport.port | API port number | 1363 |
| service.apiport.protocol | Protocol for service | TCP |
| service.ftport.name | Server (File Transfer) Port name | ft |
| service.ftport.port | Server (File Transfer) Port number | 1364 |
| service.ftport.protocol | Protocol for service | TCP |
| service.loadBalancerIP | Provide the LoadBalancer IP | |
| service.loadBalancerSourceRanges | Provide Load Balancer Source IP ranges | [] |
| service.annotations | Provide the annotations for service | {} |
| service.externalTrafficPolicy | Specify if external Traffic policy is needed | |
| service.sessionAffinity | Specify session affinity type | ClientIP |
| service.externalIP | External IP for service discovery | [] |
| networkPolicyIngress.enabled | Enable/Disable the ingress policy | true |
| networkPolicyIngress.from | Provide from specification for network policy for ingress traffic | [] |
| networkPolicyEgress.enabled | Enable/Disable egress policy | true |
| networkPolicyEgress.acceptNetPolChange | This parameter is to acknowledge the Egress network policy introduction | false |
| secret.certSecretName | Name of secret resource of certificate files for dynamic provisioning | |
| secret.secretName | Secret name for Connect:Direct password store | |
| resources.limits.cpu | Container CPU limit | 500m |
| resources.limits.memory | Container memory limit | 2000Mi |
| resources.limits.ephemeral-storage | Specify ephemeral storage limit size for pod's container | "5Gi" |
| resources.requests.cpu | Container CPU requested | 500m |
| resources.requests.memory | Container Memory requested | 2000Mi |
| resources.requests.ephemeral-storage | Specify ephemeral storage request size for pod's container | "3Gi" |
| serviceAccount.create | Enable/disable service account creation | true |
| serviceAccount.name | Name of Service Account to use for container | |
| extraVolumeMounts | Extra Volume mounts | |
| extraVolume | Extra volumes | |
| affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | |
| affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | k8s PodSpec.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | |
| tolerations | Tolerations let pods run on nodes that have matching taints. | |
| topologySpreadConstraints | Topology spread constraints help distribute pods evenly across nodes or zones. | |
| startupProbe.initialDelaySeconds | Initial delay for startup probe | 0 |
| startupProbe.timeoutSeconds | Timeout for startup probe | 5 |
| startupProbe.periodSeconds | Time period between startup probes | 5 |
| livenessProbe.initialDelaySeconds | Initial delay for liveness | 10 |
| livenessProbe.timeoutSeconds | Timeout for liveness | 5 |
| livenessProbe.periodSeconds | Time period for liveness | 10 |
| readinessProbe.initialDelaySeconds | Initial delays for readiness | 5 |
| readinessProbe.timeoutSeconds | Timeout for readiness | 5 |
| readinessProbe.periodSeconds | Time period for readiness | 10 |
| route.enabled | Route for OpenShift Enabled/Disabled | false |
| ldap.enabled | Enable/Disable LDAP configuration | false |
| ldap.host | LDAP server host | |
| ldap.port | LDAP port | |
| ldap.domain | LDAP Domain | |
| ldap.tls | Enable/Disable LDAP TLS | false |
| ldap.startTls | Specify true/false for ldap_id_use_start_tls | true |
| ldap.caCert | LDAP CA Certificate name | |
| ldap.tlsReqcert | Specify valid value - never, allow, try, demand, hard | never |
| ldap.defaultBindDn | Specify bind DN | |
| ldap.defaultAuthtokType | Specify type of the authentication token of the default bind DN | |
| ldap.defaultAuthtok | Specify authentication token of the default bind DN. Only clear text passwords are currently supported | |
| ldap.clientValidation | Enable/Disable LDAP Client Validation | false |
| ldap.clientCert | LDAP Client Certificate name | |
| ldap.clientKey | LDAP Client Certificate key name | |
| ldap.override_shell_enabled | Set to true to enable overriding the login shell to Bash; set to false to disable it. | false |
| extraLabels | Provide extra labels for all resources of this chart | {} |
| cdfa.fileAgentEnable | Specify y/n to Enable/Disable File Agent | n |
| hpa.enabled | Enables or disables Horizontal Pod Autoscaling (HPA) | true |
| hpa.minReplicas | Defines the minimum number of replicas that must be available at any time for the deployment. | 1 |
| hpa.maxReplicas | Specifies the maximum number of replicas to which the deployment can scale up. | 5 |
| hpa.averageCpuUtilization | Defines the target threshold for average CPU utilization (in percentage) that triggers scaling actions by the Horizontal Pod Autoscaler (HPA). | 60 |
| hpa.averageMemoryUtilization | Defines the target threshold for average memory utilization (in percentage) that triggers scaling actions by the Horizontal Pod Autoscaler (HPA). | 60 |
| hpa.stabilizationWindowSeconds | Specifies the wait period (in seconds) after a scaling action before the system evaluates and initiates another scaling event. | 180 |
| hpa.periodSeconds | Defines the interval (in seconds) at which the Horizontal Pod Autoscaler (HPA) gathers metrics to assess if scaling actions are required. | 15 |
| pdb.enabled | Enables or disables the Pod Disruption Budget (PDB). | true |
| pdb.minAvailable | Defines the minimum number of pods required to stay operational during voluntary disruptions to ensure availability. | 1 |
| terminationGracePeriodSeconds | This flag specifies the time (in seconds) allowed for the pod to shut down gracefully before it is forcibly terminated. | 30 |
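For illustration, a minimal override file built from the parameters above might look like the following sketch; the image location, tag, pull secret, certificate, and secret names are placeholders to replace with your own values:
license: true
image:
  repository: "<registry>/<repository>/<image name>"
  tag: "<image tag>"
  imageSecrets: "<image pull secret>"
cdArgs:
  crtName: "<certificate name>"
secret:
  secretName: "<C:D secret name>"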
Method 1: Override
Parameters Directly with CLI Using --set
This approach uses the
--set argument to specify each parameter that needs to be overridden at the
time of installation.
Example for Helm Version 2:
helm install --name <release-name> \
--set cdArgs.cport=9898 \
...
ibm-connect-direct-1.4.x.tgz
Example for Helm Version 3:
helm install <release-name> \
--set cdArgs.cport=9898 \
...
ibm-connect-direct-1.4.x.tgz
Method 2: Use a YAML File with Configured Parameters
Alternatively,
specify configurable parameters in a values.yaml file and use it during
installation. This approach can be helpful for managing multiple configurations in one
place.
- To obtain the values.yaml template from the Helm chart:
  For Online Cluster:
  helm inspect values ibm-helm/ibm-connect-direct > my-values.yaml
  For Offline Cluster:
  helm inspect values <path-to-ibm-connect-direct-helm-chart> > my-values.yaml
- Edit the my-values.yaml file to include your desired configuration values and use it with the Helm installation command:
  Example for Helm Version 2:
  helm install --name <release-name> -f my-values.yaml ... ibm-connect-direct-1.4.x.tgz
  Example for Helm Version 3:
  helm install <release-name> -f my-values.yaml ... ibm-connect-direct-1.4.x.tgz
To configure extra volumes and volume mounts (the extraVolume and extraVolumeMounts parameters), use one of the following methods.
Method 1: YAML Configuration
For HostPath Configuration
extraVolumeMounts:
  - name: <name>
    mountPath: <path inside container>
extraVolume:
  - name: <same name as in extraVolumeMounts>
    hostPath:
      path: <path on host machine>
      type: DirectoryOrCreate
For NFS Configuration
extraVolumeMounts:
  - name: <name>
    mountPath: <path inside container>
extraVolume:
  - name: <same name as in extraVolumeMounts>
    nfs:
      path: <NFS data path>
      server: <server IP>
Method 2: Using --set Flag in CLI
For HostPath
helm install --name <release-name> \
--set extraVolume[0].name=<name>,extraVolume[0].hostPath.path=<path on host machine>,extraVolume[0].hostPath.type="DirectoryOrCreate",extraVolumeMounts[0].name=<same name as in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
... ibm-connect-direct-1.4.x.tgz
For NFS Server
helm install --name <release-name> \
--set extraVolume[0].name=<name>,extraVolume[0].nfs.path=<NFS data path>,extraVolume[0].nfs.server=<NFS server IP>,extraVolumeMounts[0].name=<same name as in extraVolume>,extraVolumeMounts[0].mountPath=<path inside container> \
... ibm-connect-direct-1.4.x.tgz
If extra volumes are mounted, ensure the container user
(cdadmin/appuser) has appropriate read/write permissions. For
instance, if an extra NFS share has a POSIX group ID of 3535, add this group ID
as a supplemental group during deployment to ensure the container user is a member of this
group.
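For instance, if an extra NFS share is owned by POSIX group 3535, the group could be added through the storageSecurity.supplementalGroups parameter; this is a sketch, so check the expected format (single group ID or list) in your chart's values.yaml before using it:
storageSecurity:
  fsGroup: 45678
  supplementalGroups: 3535   # group ID of the extra NFS share (placeholder)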
Affinity
The chart provides node affinity, pod affinity, and pod anti-affinity options to configure advanced pod scheduling in Kubernetes. See the Kubernetes documentation for details.
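As an illustration, the following values.yaml fragment (a sketch using the affinity.nodeAffinity parameter from the table above and the standard kubernetes.io/arch node label) restricts scheduling to amd64 nodes:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64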
Kubernetes scheduling strategy
This section describes the Kubernetes pod scheduling configuration for the IBM Sterling Connect:Direct for UNIX Container deployment. The configuration ensures:
- Controlled scheduling on tainted nodes using tolerations
- High availability and resilience using topology spread constraints to distribute pods evenly across nodes
Tolerations let pods run on nodes that have taints. Taints prevent pods from running on certain nodes unless the pods explicitly tolerate them.
tolerations:
- key: "cd-taints"
operator: "Equal"
value: "node"
effect: "NoSchedule"
Apply the corresponding taint to a node with:
kubectl taint nodes <node-name> cd-taints=node:NoSchedule
Tainting nodes in this way is useful for:
- Isolating specific workloads
- Allocating dedicated resources
- Controlling node usage and behavior
Topology spread constraints help distribute pods evenly across nodes or topology domains (such as zones or racks). This improves:
- High availability
- Fault tolerance
- Resource balancing
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/name: ibm-connect-direct
| Field | Description |
|---|---|
| maxSkew | Maximum difference in the number of pods between topology domains |
| topologyKey | Key defining the topology domain (for example, kubernetes.io/hostname) |
| whenUnsatisfiable | Behavior when constraints cannot be satisfied (ScheduleAnyway or DoNotSchedule) |
| labelSelector | Selects the pods to which the constraint applies |
Understanding LDAP deployment parameters
- When LDAP authentication is enabled, the container startup script automatically updates the initparam configuration to support the PAM module. The following line is added to initparam.cfg:
ndm.pam:service=login:
- The following default configuration file (/etc/sssd/sssd.conf) is added to the image:
[domain/default]
id_provider = ldap
autofs_provider = ldap
auth_provider = ldap
chpass_provider = ldap
ldap_uri = LDAP_PROTOCOL://LDAP_HOST:LDAP_PORT
ldap_search_base = LDAP_DOMAIN
ldap_id_use_start_tls = True
ldap_tls_cacertdir = /etc/openldap/certs
ldap_tls_cert = /etc/openldap/certs/LDAP_TLS_CERT_FILE
ldap_tls_key = /etc/openldap/certs/LDAP_TLS_KEY_FILE
cache_credentials = True
ldap_tls_reqcert = allow
- Description of the certificates required for the configuration:
- Mount certificates inside CDU Container:
- Copy the certificates needed for LDAP configuration in the mapped directory which is used to share the Connect:Direct Unix secure plus certificates (CDFILES/cdcert directory by default).
- DNS resolution: If TLS is enabled and hostname of LDAP server is passed as “ldap.host”, then it must be ensured that the hostname is resolved inside the container. It is the responsibility of Cluster Administrator to ensure DNS resolution inside pod's container.
- Certificate creation and configuration: The following certificates are required for the configuration; a sample set of openssl commands for generating them is provided after this list:
- LDAP_CACERT - The root and all the intermediate CA certificates needs to be copied in one file.
- LDAP_CLIENT_CERT – The client certificate which the server must be able to validate.
- LDAP_CLIENT_KEY – The client certificate key.
- Use the below new parameters for LDAP configuration:
- ldap.enabled
- ldap.host
- ldap.port
- ldap.domain
- ldap.tls
- ldap.startTls
- ldap.caCert
- ldap.tlsReqcert
- ldap.defaultBindDn
- ldap.defaultAuthtokType
- ldap.defaultAuthtok
- ldap.clientValidation
- ldap.clientCert
- ldap.clientKey
- ldap.override_shell_enabled
Note: IBM Sterling Connect:Direct for UNIX Container uses the sssd utility for communication with LDAP, and the connection between sssd and the LDAP server must be encrypted.
TLS configuration is mandatory for user authentication which is required for file transfer using IBM Connect:Direct for UNIX.
By default, LDAP integration with sssd in the IBM Sterling Connect:Direct for UNIX Container uses /bin/bash as the login shell. If your system or LDAP users are configured to use a different shell, switch to /bin/bash before accessing the container; otherwise, login may fail. To override the default shell, set the override_shell_enabled parameter to true.
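The following openssl commands are one possible way to generate these files (a sketch with placeholder subject names and validity periods; in production, certificates are typically issued by your organization's certificate authority):
# Generate a CA key and self-signed CA certificate (LDAP_CACERT)
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout ldap-ca.key -out ldap-ca.pem -subj "/CN=example-ldap-ca"
# Generate the client key (LDAP_CLIENT_KEY) and a certificate signing request
openssl req -newkey rsa:4096 -nodes \
  -keyout ldap-client.key -out ldap-client.csr -subj "/CN=cd-ldap-client"
# Sign the client certificate (LDAP_CLIENT_CERT) with the CA
openssl x509 -req -days 365 -in ldap-client.csr \
  -CA ldap-ca.pem -CAkey ldap-ca.key -CAcreateserial -out ldap-client.pem
# Copy the resulting files into the mapped CDFILES/cdcert directory so the container can read them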
Network Policy Change
Out of the box Network Policies
IBM Sterling Connect:Direct for UNIX Container comes with predefined network policies based on mandatory security guidelines. By default, all outbound communication is restricted, permitting only intra-cluster communication.
- Deny all Egress Traffic
- Allow Egress Traffic within the Cluster
Defining Custom Network Policy
networkPolicyEgress:
enabled: true
acceptNetPolChange: false
  # write your custom egress policy for the 'to' spec here
to: []
#- namespaceSelector:
# matchLabels:
# name: my-label-to-match
# podSelector:
# matchLabels:
# app.kubernetes.io/name: "connectdirect"
#- podSelector:
# matchLabels:
# role: server
#- ipBlock:
# cidr: <IP Address>/<block size>
# except:
# - <IP Address>/<block size>
#ports:
#- protocol: TCP
# port: 1364
# endPort: 11364In the latest release, a new Helm parameter,
networkPolicyEgress.acceptNetPolChange, has been introduced. To proceed with
the Helm chart upgrade, this parameter must be set to true. By default, it is set to false,
and the upgrade won't proceed without this change.
Before this release, there was no Egress Network Policy. The new implementation might impact
outbound traffic to external destinations. To mitigate this, a custom policy allowing external
traffic needs to be created. Once this policy is in place, you can set the
acceptNetPolChange parameter to true and proceed with the upgrade.
If you want to disable the network policy altogether, you can set
networkPolicyEgress.enabled to false. Adjust these parameters based on your
network and security requirements.
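For example, a custom egress rule that allows outbound Connect:Direct traffic to an external partner network could look like the following in your override file (a sketch; the CIDR and port are placeholders for your environment):
networkPolicyEgress:
  enabled: true
  acceptNetPolChange: true
  to:
    - ipBlock:
        cidr: 203.0.113.0/24        # external partner network (placeholder)
  ports:
    - protocol: TCP
      port: 1364                    # Connect:Direct server (file transfer) port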
Refer to Table 3 for the supported configurable parameters in the Helm chart.
Installing IBM Connect:Direct for Unix using Helm chart
Example for Helm Version 2:
helm install --name my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.4.x.tgz
or
helm install --name my-release ibm-connect-direct-1.4.x.tgz -f my-values.yaml
Example for Helm Version 3:
helm install my-release --set license=true,image.repository=<reponame>,image.tag=<image tag>,cdArgs.crtName=<certificate name>,image.imageSecrets=<image pull secret>,secret.secretName=<C:D secret name> ibm-connect-direct-1.4.x.tgz
or
helm install my-release ibm-connect-direct-1.4.x.tgz -f my-values.yaml
This command deploys the ibm-connect-direct-1.4.x.tgz chart on the Kubernetes cluster using the default configuration. See Creating storage for Data Persistence for the parameters that can be configured at deployment. The following table describes the parameters used in the command above.
| Parameter | Description | Default Value |
|---|---|---|
| license | License agreement for IBM Sterling Connect:Direct for UNIX Container | false |
| image.repository | Image full name including repository | |
| image.tag | Image tag | |
| cdArgs.crtName | For non-production deployments, this field is not mandatory. If a certificate is not provided during deployment, a self-signed certificate will be used. | |
| image.imageSecrets | Image pull secrets | |
| secret.secretName | Secret name for Connect:Direct password store | |
Validating the Installation
After the deployment procedure is complete, you should validate the deployment to ensure that everything is working according to your needs. The deployment may take approximately 4-5 minutes to complete.
- Check the Helm chart release status by invoking the following command and verify that the STATUS is DEPLOYED:
helm status my-release
- Wait for the pod to be ready. To verify the pod status (READY), use the dashboard or the command line interface by invoking the following command:
kubectl get pods -l release=my-release -n my-namespace -o wide
- To view the service and ports exposed to enable communication in a pod, invoke the following command:
kubectl get svc -l release=my-release -n my-namespace -o wide
The output displays the external IP and exposed ports under the EXTERNAL-IP and PORT(S) columns respectively. If an external LoadBalancer is not present, use the master node IP as the external IP.
Exposed Services
If required, this chart can create a service of type ClusterIP for communication within the cluster. The service type can be changed at installation time using the service.type key defined in values.yaml. IBM Connect:Direct processes listen on two ports: the API port (1363) and the FT port (1364). Their values can be updated during chart installation using service.apiport.port and service.ftport.port.
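For example (a sketch; choose the values that match your environment), the service type and ports can be overridden at install time:
helm install <release-name> \
  --set service.type=ClusterIP \
  --set service.apiport.port=1363 \
  --set service.ftport.port=1364 \
  ... ibm-connect-direct-1.4.x.tgz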
DIME and DARE Security Considerations
This topic provides security recommendations for setting up Data In Motion Encryption (DIME) and Data At Rest Encryption (DARE). It is intended to help you create a secure implementation of the application.
- All sensitive application data at rest is stored in binary format, so users cannot read it directly. This chart does not support encryption of user data at rest by default. An administrator can configure storage encryption to encrypt all data at rest; a provider-specific example is shown after this list.
- Data in motion is encrypted using transport layer security (TLS 1.3). For more information, see Secure Plus.
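As one provider-specific illustration (a sketch assuming the AWS EBS CSI driver; other storage providers offer equivalent options), an encrypted StorageClass could be defined and then referenced through pvClaim.storageClassName for dynamic provisioning:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com        # assumption: AWS EBS CSI driver; use your cluster's provisioner
parameters:
  type: gp3
  encrypted: "true"                 # encrypt provisioned volumes at rest
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer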