Setting up installation environment variables

The commands for installing and upgrading IBM Cloud Pak® for Data use variables with the format ${VARIABLE_NAME}. You can create a script to automatically export the appropriate values as environment variables before you run the installation commands. After you source the script, you will be able to copy most install and upgrade commands from the documentation and run them without making any changes.

Installation phase
  • Setting up a client workstation
  • Setting up a cluster
  • Collecting required information (you are here)
  • Preparing to run installs in a restricted network
  • Preparing to run installs from a private container registry
  • Preparing the cluster for Cloud Pak for Data
  • Preparing to install an instance of Cloud Pak for Data
  • Installing an instance of Cloud Pak for Data
Who needs to complete this task?

Cloud Pak for Data operations team
The IBM Cloud Pak for Data operations team should work with the cluster administrator to compile information about the cluster where Cloud Pak for Data will be installed.

This information should be shared with any users who will run installation or upgrade commands that require information about your environment, such as information about your Red Hat® OpenShift® Container Platform cluster or your private container registry.

When do you need to complete this task?

Repeat as needed
Create at least one environment variables script. You might need to create multiple scripts depending on your use case.

Before you begin

Before you create an environment variables script, consider the use case that you need to support:

Repeatable deployments across clusters
If you want to create repeatable deployments across clusters, you can:
Option Recommended if you want to... Additional considerations
Re-use the same script Improve the consistency of deployments across your environments. You must ensure that you modify the Cluster variables before you run an installation.
Create multiple scripts Avoid modifying the environment variables script before you run an installation. You must ensure that a change in one script is propagated to the related scripts, if appropriate. For example, if you update the value of the PRIVATE_REGISTRY_PULL_PASSWORD variable in one script, you must update the variable in any related scripts.

In addition, clearly name each script to ensure that you source the correct variables before you run installation or upgrade commands.

Multiple deployments on the same cluster
If you want to create multiple deployments on the same cluster, you can:
Option Recommended if you want to... Additional considerations
Re-use the same script Create standardized deployments. You must ensure that you update the Projects variables before you run the installation.
Create multiple scripts
  • Deploy different services in each instance of Cloud Pak for Data.
  • Avoid modifying the environment variables script before you run an installation.
Clearly name each script to ensure that you source the correct variables before you run installation or upgrade commands.
Tip: If multiple people are working together to complete the installation, you should share a copy of the appropriate files with each user. Each user can edit the scripts to supply their own credentials and source the script on their own workstation.

Creating an environment variables file

  1. Copy the following example to a text editor on your local file system:
    #===============================================================================
    # Cloud Pak for Data installation variables
    #===============================================================================
    
    # ------------------------------------------------------------------------------
    # Client workstation 
    # ------------------------------------------------------------------------------
    # Set the following variables if you want to override the default behavior of the Cloud Pak for Data CLI.
    #
    # To export these variables, you must uncomment each command in this section.
    
    # export CPD_CLI_MANAGE_WORKSPACE=<enter a fully qualified directory>
    # export OLM_UTILS_LAUNCH_ARGS=<enter launch arguments>
    
    
    # ------------------------------------------------------------------------------
    # Cluster
    # ------------------------------------------------------------------------------
    
    export OCP_URL=<enter your Red Hat OpenShift Container Platform URL>
    export OPENSHIFT_TYPE=<enter your deployment type>
    export IMAGE_ARCH=<enter your cluster architecture>
    # export OCP_USERNAME=<enter your username>
    # export OCP_PASSWORD=<enter your password>
    # export OCP_TOKEN=<enter your token>
    export SERVER_ARGUMENTS="--server=${OCP_URL}"
    # export LOGIN_ARGUMENTS="--username=${OCP_USERNAME} --password=${OCP_PASSWORD}"
    # export LOGIN_ARGUMENTS="--token=${OCP_TOKEN}"
    export CPDM_OC_LOGIN="cpd-cli manage login-to-ocp ${SERVER_ARGUMENTS} ${LOGIN_ARGUMENTS}"
    export OC_LOGIN="oc login ${OCP_URL} ${LOGIN_ARGUMENTS}"
    
    
    # ------------------------------------------------------------------------------
    # Projects
    # ------------------------------------------------------------------------------
    
    export PROJECT_CERT_MANAGER=<enter your certificate manager project>
    export PROJECT_LICENSE_SERVICE=<enter your License Service project>
    export PROJECT_SCHEDULING_SERVICE=<enter your scheduling service project>
    # export PROJECT_IBM_EVENTS=<enter your IBM Events Operator project>
    # export PROJECT_PRIVILEGED_MONITORING_SERVICE=<enter your privileged monitoring service project>
    export PROJECT_CPD_INST_OPERATORS=<enter your Cloud Pak for Data operator project>
    export PROJECT_CPD_INST_OPERANDS=<enter your Cloud Pak for Data operand project>
    # export PROJECT_CPD_INSTANCE_TETHERED=<enter your tethered project>
    # export PROJECT_CPD_INSTANCE_TETHERED_LIST=<a comma-separated list of tethered projects>
    
    
    
    # ------------------------------------------------------------------------------
    # Storage
    # ------------------------------------------------------------------------------
    
    export STG_CLASS_BLOCK=<RWO-storage-class-name>
    export STG_CLASS_FILE=<RWX-storage-class-name>
    
    # ------------------------------------------------------------------------------
    # IBM Entitled Registry
    # ------------------------------------------------------------------------------
    
    export IBM_ENTITLEMENT_KEY=<enter your IBM entitlement API key>
    
    
    # ------------------------------------------------------------------------------
    # Private container registry
    # ------------------------------------------------------------------------------
    # Set the following variables if you mirror images to a private container registry.
    #
    # To export these variables, you must uncomment each command in this section.
    
    # export PRIVATE_REGISTRY_LOCATION=<enter the location of your private container registry>
    # export PRIVATE_REGISTRY_PUSH_USER=<enter the username of a user that can push to the registry>
    # export PRIVATE_REGISTRY_PUSH_PASSWORD=<enter the password of the user that can push to the registry>
    # export PRIVATE_REGISTRY_PULL_USER=<enter the username of a user that can pull from the registry>
    # export PRIVATE_REGISTRY_PULL_PASSWORD=<enter the password of the user that can pull from the registry>
    
    
    # ------------------------------------------------------------------------------
    # Cloud Pak for Data version
    # ------------------------------------------------------------------------------
    
    export VERSION=4.8.7
    
    
    # ------------------------------------------------------------------------------
    # Components
    # ------------------------------------------------------------------------------
    
    export COMPONENTS=ibm-cert-manager,ibm-licensing,scheduler,cpfs,cpd_platform
    # export COMPONENTS_TO_SKIP=<component-ID-1>,<component-ID-2>
    
    
    # ------------------------------------------------------------------------------
    # watsonx Orchestrate
    # ------------------------------------------------------------------------------
    # export PROJECT_IBM_APP_CONNECT=<enter your IBM App Connect in containers project>
    # export AC_CASE_VERSION=<version>
    # export AC_CHANNEL_VERSION=<version>
  2. Update each section in the script for your environment. See the sections later in this topic to learn about the variables and valid values in each section of the script.
  3. Save the file as a shell script. For example, save the file as cpd_vars.sh.
  4. Confirm that the script does not contain any errors. For example, if you named the script cpd_vars.sh, run:
    bash ./cpd_vars.sh
  5. If you stored passwords in the file, prevent others from reading the file. For example, if you named the script cpd_vars.sh, run:
    chmod 700 cpd_vars.sh

Sourcing the environment variables

Save a copy of the script to your workstation and run it from a bash prompt before you run the installation and upgrade commands. The script exports the environment variables to your command-line session.

Important: You must re-run the script each time you open a new bash prompt.
  1. Change to the directory where you saved the script.
  2. Source the environment variables. For example, if you named the script cpd_vars.sh, run:
    source ./cpd_vars.sh
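
After you source the script, you can optionally confirm that the variables are set in your session. For example, the following commands (a minimal spot check, assuming these variables are exported in your script) print a few of the values:

    echo "OCP_URL=${OCP_URL}"
    echo "VERSION=${VERSION}"
    echo "PROJECT_CPD_INST_OPERANDS=${PROJECT_CPD_INST_OPERANDS}"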

Client workstation

The variables in the Client workstation section of the script specify information about how the cpd-cli manage plug-in runs on the client workstation.

Variable Description
CPD_CLI_MANAGE_WORKSPACE The directory where you want to store files, such as CASE files, that are used by cpd-cli manage commands.

The cpd-cli creates a directory named work inside of this directory.

By default, the first time you run a cpd-cli manage command, the cpd-cli automatically creates the cpd-cli-workspace/olm-utils-workspace/work directory.

The location of the directory depends on several factors:

  • If you made the cpd-cli executable from any directory, the directory is created in the directory where you run the cpd-cli commands.
  • If you did not make the cpd-cli executable from any directory, the directory is created in the directory where the cpd-cli is installed.

You can set the CPD_CLI_MANAGE_WORKSPACE environment variable to override the default location.

The CPD_CLI_MANAGE_WORKSPACE environment variable is especially useful if you made the cpd-cli executable from any directory. When you set the environment variable, it ensures that the files are located in one directory.

To use the CPD_CLI_MANAGE_WORKSPACE variable, you must uncomment the export command in the environment variables file.

Default value
No default value. The directory is created based on the factors described in the preceding text.
Valid values
The fully qualified path where you want the cpd-cli to create the work directory. For example, if you specify /root/cpd-cli/, the cpd-cli manage plug-in stores files in the /root/cpd-cli/work directory.
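For example, the entry in the environment variables file might look like the following (the /root/cpd-cli path is only an illustration; specify a directory that exists on your client workstation):

    export CPD_CLI_MANAGE_WORKSPACE=/root/cpd-cli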
OLM_UTILS_LAUNCH_ARGS A set of arguments that you pass to the olm-utils runtime container.

You can use the OLM_UTILS_LAUNCH_ARGS environment variable to mount certificates that the cpd-cli must use in the cpd-cli container.

Mount CA certificates

You can mount CA certificates if you need to reach an external HTTPS endpoint that uses a self-signed certificate.

Tip: Typically the CA certificates are in the /etc/pki/ca-trust directory on the workstation. If you need additional information on adding certificates to a workstation, run:
man update-ca-trust
Determine the correct argument for your environment:
  • If the certificates on the client workstation are in the /etc/pki/ca-trust directory, the argument is:

    " -v /etc/pki/ca-trust:/etc/pki/ca-trust"

  • If the certificates on the client workstation are in a different directory, replace <ca-loc> with the appropriate location on the client workstation:

    " -v <ca-loc>:/etc/pki/ca-trust"

Mount Kubernetes certificates
You can mount Kubernetes certificates if you need to use a certificate to connect to the Kubernetes API server.

The argument depends on the location of the certificates on the client workstation. Replace <k8-loc> with the appropriate location on the client workstation:

" -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"

To use the OLM_UTILS_LAUNCH_ARGS variable, you must uncomment the export command in the environment variables file.

Default value
No default value.
Valid values
The valid values depend on the arguments that you need to pass to the OLM_UTILS_LAUNCH_ARGS environment variable.
  • To pass CA certificates, specify:

    " -v <ca-loc>:/etc/pki/ca-trust"

  • To pass Kubernetes certificates, specify:

    " -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"

  • To pass both CA certificates and Kubernetes certificates, specify:

    " -v <ca-loc>:/etc/pki/ca-trust -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"

Cluster

The variables in the Cluster section of the script specify information about your Red Hat OpenShift Container Platform cluster.

Variable Description
OCP_URL The URL of the Red Hat OpenShift Container Platform server. For example, https://openshift1.example.com:8443.
Default value
There is no default value.
Valid values
Specify the URL of your Red Hat OpenShift Container Platform server.
OPENSHIFT_TYPE The type of Red Hat OpenShift Container Platform cluster that you are running.
Default value
self-managed
Valid values
aro
Specify aro if you are running Azure Red Hat OpenShift (ARO), the managed OpenShift offering on Microsoft Azure.
roks
Specify roks if you are running Red Hat OpenShift on IBM Cloud®, the managed OpenShift offering on IBM® Cloud.
rosa
Specify rosa if you are running Red Hat OpenShift Service on AWS (ROSA), the managed OpenShift offering on Amazon Web Services.
self-managed
Specify self-managed if you are running self-managed OpenShift on:
  • On-premises infrastructure
  • AWS infrastructure
  • IBM Cloud infrastructure
  • Microsoft Azure infrastructure
IMAGE_ARCH The architecture of your Red Hat OpenShift Container Platform Cluster hardware.
Default value
There is no default value.
Valid values
amd64
Specify amd64 if your Red Hat OpenShift Container Platform cluster runs on x86-64 hardware.
ppc64le
Specify ppc64le if your Red Hat OpenShift Container Platform cluster runs on Power® hardware.
s390x
Specify s390x if your Red Hat OpenShift Container Platform cluster runs on Z hardware.
OCP_USERNAME The username that you use to authenticate to your cluster. You must have sufficient privileges to complete each installation or upgrade task.

To use the OCP_USERNAME variable, you must uncomment the export command in the environment variables file.

Tip: It is recommended that you prevent other users from reading the contents of the environment variable script by running chmod 700. However, if you still have concerns about storing your OpenShift credentials in this file, you can:
  • Enter the credentials directly instead of using the environment variable in the commands.
  • Manually export the credentials before you run the commands.
OCP_PASSWORD The password that you use to authenticate to your cluster.

To use the OCP_PASSWORD variable, you must uncomment the export command in the environment variables file.

OCP_TOKEN

You can use a token instead of your user name and password to log in to your Red Hat OpenShift Container Platform cluster.

You can get your token from the Red Hat OpenShift Container Platform web console. From the username drop-down menu, select Copy login command. When prompted, click Display Token.

To use the OCP_TOKEN variable, you must uncomment the export command in the environment variables file.

SERVER_ARGUMENTS The server argument to pass to log in to the cluster.

Do not modify this export command.

The SERVER_ARGUMENTS environment variable depends on the OCP_URL environment variable.

LOGIN_ARGUMENTS The credential arguments to pass to log in to the cluster.
The LOGIN_ARGUMENTS environment variable depends on the credentials that you use to log in.
A username and password
If you specify a username and password, the LOGIN_ARGUMENTS environment variable depends on the following environment variables:
  • OCP_USERNAME
  • OCP_PASSWORD

If you are working from the sample environment variables script, uncomment the export LOGIN_ARGUMENTS entry that includes the username and password entries.

Do not modify this export command.

A token
If you specify a token, the LOGIN_ARGUMENTS environment variable depends on the OCP_TOKEN environment variable.

If you are working from the sample environment variables script, uncomment the LOGIN_ARGUMENTS entry that includes the token entry.

Do not modify this export command.

CPDM_OC_LOGIN Shortcut for the cpd-cli manage login-to-ocp command.

Do not modify this export command.

The CPDM_OC_LOGIN environment variable depends on the following environment variables:
  • SERVER_ARGUMENTS
  • LOGIN_ARGUMENTS
OC_LOGIN Shortcut for the oc login command.

Do not modify this export command.

The OC_LOGIN environment variable depends on the following environment variables:
  • OCP_URL
  • LOGIN_ARGUMENTS
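
After you source the environment variables script, you can use these shortcuts to log in to the cluster. For example (a minimal sketch, assuming the Cluster variables are exported in your session):

    # Log in with the cpd-cli manage plug-in
    ${CPDM_OC_LOGIN}

    # Log in with the oc CLI
    ${OC_LOGIN}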

Projects

The variables in the Projects section of the script specify where the components that comprise Cloud Pak for Data are installed.

Need more information about projects? See:
Variable Description
PROJECT_CERT_MANAGER The project for the Certificate Manager operator.
Default value
ibm-cert-manager
Valid values
You can use any Red Hat OpenShift project, but do not co-locate IBM Certificate Manager with other software.
PROJECT_LICENSE_SERVICE The project for the License Service operator.
Default value
ibm-licensing
Valid values
You can use any Red Hat OpenShift project, but do not co-locate the License Service with other software.
PROJECT_SCHEDULING_SERVICE The project for the scheduling service.
Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project; however, it is strongly recommended that you use ibm-cpd-scheduler. Do not co-locate the scheduling service with other software.
PROJECT_IBM_EVENTS
watsonx Assistant users only. The project where you want to install the IBM Events Operator or the project where a cluster-wide instance of the IBM Events Operator is already installed.

To use the PROJECT_IBM_EVENTS variable, you must uncomment the export command in the environment variables file.

Default value
ibm-knative-events
Valid values
Use either:
  • ibm-knative-events

    Use ibm-knative-events if you do not have an existing cluster-scoped instance of the IBM Events Operator.

  • The project where a cluster-scoped instance of the IBM Events Operator is installed.
PROJECT_PRIVILEGED_MONITORING_SERVICE
The OpenShift project for the IBM Cloud Pak for Data privileged monitoring service.

The privileged monitoring service is optional, but provides additional monitoring and logging information for Cloud Pak for Data.

To use the PROJECT_PRIVILEGED_MONITORING_SERVICE variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project; however, it is strongly recommended that you use ibm-cpd-privileged. Do not co-locate the privileged monitoring service with other software.
PROJECT_CPD_INST_OPERATORS The project where you want to install the operators for this instance of Cloud Pak for Data.
Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project, but do not co-locate the Cloud Pak for Data operators with other software.
PROJECT_CPD_INST_OPERANDS The project for the IBM Cloud Pak for Data control plane and services.
Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project, but do not co-locate the control plane or services with other software.
PROJECT_CPD_INSTANCE_TETHERED A project that is tethered to the project where the Cloud Pak for Data control plane is installed.

This variable is required only if you plan to install a service that supports deploying service instances into a tethered project.

To use the PROJECT_CPD_INSTANCE_TETHERED variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project, but do not co-locate other software in the project.
PROJECT_CPD_INSTANCE_TETHERED_LIST
A comma-separated list of projects that are tethered to the project where the Cloud Pak for Data control plane is installed.

This variable is required only if you plan to install a service that supports deploying service instances into a tethered project.

To use the PROJECT_CPD_INSTANCE_TETHERED_LIST variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
A comma-separated list of project names. For example:

cpd-instance-1-t1,cpd-instance-1-t2,cpd-instance-1-t3
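
For example, using the sample project names from the preceding list, the tethered project entries in the environment variables file might look like the following (the project names are only illustrations):

    export PROJECT_CPD_INSTANCE_TETHERED=cpd-instance-1-t1
    export PROJECT_CPD_INSTANCE_TETHERED_LIST=cpd-instance-1-t1,cpd-instance-1-t2,cpd-instance-1-t3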

Storage

The variables in the Storage section of the script specify the storage classes that the installation should use.

Variable Description
STG_CLASS_BLOCK The name of a block storage class on a supported storage option.
Default value
There is no default value.
Valid values
Specify the name of a storage class that points to block storage (storage that supports ReadWriteOnce, also called RWO, access).
The following list provides the recommended storage classes for the supported storage options. If you use different storage classes, identify an equivalent storage class on the cluster.
  • OpenShift Data Foundation: ocs-storagecluster-ceph-rbd
  • IBM Storage Fusion Data Foundation: ocs-storagecluster-ceph-rbd
  • IBM Storage Fusion Global Data Platform: Either of the following storage classes, depending on your environment:
    • ibm-spectrum-scale-sc
    • ibm-storage-fusion-cp-sc
  • IBM Storage Scale Container Native: ibm-spectrum-scale-sc
  • Portworx: portworx-metastoredb-sc
  • NFS: managed-nfs-storage
  • Amazon Elastic Block Store: Either of the following storage classes, depending on your environment:
    • gp2-csi
    • gp3-csi
STG_CLASS_FILE The name of a file storage class on a supported storage option.
Default value
There is no default value.
Valid values
Specify the name of a storage class that points to file storage (storage that supports ReadWriteMany, also called RWX, access).
The following list provides the recommended storage classes for the supported storage options. If you use different storage classes, identify an equivalent storage class on the cluster.
  • OpenShift Data Foundation: ocs-storagecluster-cephfs
  • IBM Storage Fusion Data Foundation: ocs-storagecluster-cephfs
  • IBM Storage Fusion Global Data Platform: Either of the following storage classes, depending on your environment:
    • ibm-spectrum-scale-sc
    • ibm-storage-fusion-cp-sc
  • IBM Storage Scale Container Native: ibm-spectrum-scale-sc
  • Portworx: portworx-rwx-gp3-sc
  • NFS: managed-nfs-storage
  • Amazon Elastic File System: efs-nfs-client
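
If you are not sure which storage classes exist on your cluster, you can list them with the OpenShift CLI after you log in. For example, the following commands (a sketch that assumes OpenShift Data Foundation and uses the recommended class names from the preceding lists) list the available storage classes and then set the variables:

    oc get storageclass

    export STG_CLASS_BLOCK=ocs-storagecluster-ceph-rbd
    export STG_CLASS_FILE=ocs-storagecluster-cephfs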

IBM Entitled Registry

The variables in the IBM Entitled Registry section of the script enable you to connect to the IBM Entitled Registry and access the Cloud Pak for Data software images that you are entitled to.

Depending on whether you pull images from the IBM Entitled Registry or from a private container registry, the variables might also be used to configure the global image pull secret.

Need more information about the IBM Entitled Registry? See Obtaining your IBM entitlement API key for IBM Cloud Pak for Data.

Variable Description
IBM_ENTITLEMENT_KEY The entitlement API key that is associated with your My IBM account.
Default value
There is no default value.
Valid values
Specify your IBM entitlement API key.

Private container registry

It is strongly recommended that you use a private container registry. The variables in the Private container registry section are required only if you mirror images to a private container registry.

The variables in the Private container registry section of the script enable you to mirror images from the IBM Entitled Registry to the private container registry.

Need more information about private container registries? See Private container registry requirements.

Variable What to specify
PRIVATE_REGISTRY_LOCATION The location of the private container registry.

To use the PRIVATE_REGISTRY_LOCATION variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Specify the hostname or IP address of the private container registry. Keep the following guidance in mind:
  • Do not specify http:// or https://.
  • If the registry is running on port 80 or 443, you can omit the port. However, if the registry is running on a different port, you must specify the port.
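
For example, an entry for a registry that listens on a non-standard port might look like the following (registry.example.com:5000 is only an illustration):

    export PRIVATE_REGISTRY_LOCATION=registry.example.com:5000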
PRIVATE_REGISTRY_PUSH_USER The username of a user who has the required privileges to push images to the private container registry.

To use the PRIVATE_REGISTRY_PUSH_USER variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Specify a username.
PRIVATE_REGISTRY_PUSH_PASSWORD The password of the user who has the required privileges to push images to the private container registry.

To use the PRIVATE_REGISTRY_PUSH_PASSWORD variable, you must uncomment the export command in the environment variables file.

Tip: It is recommended that you prevent other users from reading the contents of the environment variable script by running chmod 700. However, if you still have concerns about storing passwords in this file, you can:
  • Enter the password directly instead of using the environment variable in the commands.
  • Manually export the password before you run the commands.
Default value
There is no default value.
Valid values
Specify the password associated with the username.
PRIVATE_REGISTRY_PULL_USER The username of a user who has the required privileges to pull images from the private container registry.

To use the PRIVATE_REGISTRY_PULL_USER variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Specify a username.
PRIVATE_REGISTRY_PULL_PASSWORD The password of the user who has the required privileges to pull images from the private container registry.

To use the PRIVATE_REGISTRY_PULL_PASSWORD variable, you must uncomment the export command in the environment variables file.

Tip: It is recommended that you prevent other users from reading the contents of the environment variable script by running chmod 700. However, if you still have concerns about storing passwords in this file, you can:
  • Enter the password directly instead of using the environment variable in the commands.
  • Manually export the password before you run the commands.
Default value
There is no default value.
Valid values
Specify the password associated with the username.

Cloud Pak for Data version

The variable in the Cloud Pak for Data version section specifies which version of Cloud Pak for Data to install or upgrade to.

Remember: All of the components in an instance must be installed at the same version.
Variable Description
VERSION The version of the Cloud Pak for Data software to install.
Default value
4.8.7
Valid values
  • 4.8.0
  • 4.8.1
  • 4.8.2
  • 4.8.3
  • 4.8.4
  • 4.8.5
  • 4.8.6
  • 4.8.7

Components

The variables in the Components section help you manage the software that is associated with an instance of Cloud Pak for Data.

For example, you can use the COMPONENTS environment variable to ensure that you specify the same components when you:
  • Mirror images to a private container registry
  • Create the operators for an instance of Cloud Pak for Data
  • Install the software for an instance of Cloud Pak for Data
Variable Description
COMPONENTS A comma-separated list of the components that you want to install or upgrade.
Default value
By default, the list includes the required and recommended components:
  • ibm-cert-manager
  • ibm-licensing
  • cpfs
  • scheduler
  • cpd_platform
The components are separated by commas without spaces. For example:
ibm-cert-manager,ibm-licensing,...
You can remove the scheduler component if you don't plan to use the following features and services:
  • The quota enforcement feature in Cloud Pak for Data
  • The node scoring feature for pod placement
  • The Watson™ Machine Learning Accelerator service
  • Priority scheduling and co-scheduling in the Analytics Engine powered by Apache Spark service
Important: Don't remove any other components from the COMPONENTS environment variable. These components are required for all installations.
Valid values
Review the guidance in Determining which IBM Cloud Pak for Data components to install to determine which service component IDs to specify.
Remember: IBM watsonx.ai and IBM watsonx.data cannot be co-located with Cloud Pak for Data Enterprise Edition, Cloud Pak for Data Standard Edition, or services included in other cartridge licenses.
However, you can co-locate these solutions with services that are included in the watsonx license that you purchased:
Services included in the IBM watsonx.ai license:
  • Analytics Engine powered by Apache Spark
  • Decision Optimization
  • Execution Engine for Apache Hadoop
  • SPSS® Modeler
  • Synthetic Data Generator
  • RStudio® Server Runtimes
  • Watson Machine Learning Accelerator
  • Watson Pipelines
  • Watson Studio Runtimes

Services included in the IBM watsonx.data license:
  • Analytics Engine powered by Apache Spark (starting in Version 4.8.5, Analytics Engine powered by Apache Spark is automatically installed when you install watsonx.data)

You do not need to specify services that are automatically installed by IBM watsonx.ai or IBM watsonx.data.
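
For example, to install additional services on top of the required components, append the service component IDs to the default list (the <component-ID> placeholders are stand-ins for the IDs that you select based on the preceding guidance):

    export COMPONENTS=ibm-cert-manager,ibm-licensing,scheduler,cpfs,cpd_platform,<component-ID-1>,<component-ID-2>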

COMPONENTS_TO_SKIP A comma-separated list of components that are already installed on the cluster with cluster-scoped operators.

If a component has a cluster-scoped operator, you can use this environment variable to prevent the cpd-cli from creating a namespace-scoped instance of the operator.

For example, if you have a cluster-scoped instance of the Cloud Native PostgreSQL operator, you can prevent the cpd-cli from creating a namespace-scoped operator if another component has a dependency on Cloud Native PostgreSQL.

To use the COMPONENTS_TO_SKIP variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Set this environment variable only if you have cluster-scoped operators for one or more of the components in Shared components (automatically installed dependencies).

watsonx Orchestrate

Version 4.8.4 or later: The variables in the watsonx Orchestrate® section specify information about the IBM App Connect in containers software that you must install if you plan to install watsonx Orchestrate.

Variable Description
PROJECT_IBM_APP_CONNECT
watsonx Orchestrate users only. The project where you want to install IBM App Connect in containers.
Important:
Each instance of watsonx Orchestrate requires a dedicated instance of App Connect. You must install an instance of App Connect for each instance of watsonx Orchestrate that you plan to install.

To use the PROJECT_IBM_APP_CONNECT variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
You can use any Red Hat OpenShift project. However, if you plan to install multiple instances of watsonx Orchestrate, it is strongly recommended that you use a naming convention that helps you identify which instance of watsonx Orchestrate the instance of App Connect is associated with.
AC_CASE_VERSION Set the AC_CASE_VERSION environment variable based on the version of Cloud Pak for Data that you plan to install.

To use the AC_CASE_VERSION variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Use the following list to determine the appropriate value:
  • For Version 4.8.7, specify 11.4.0.
  • For Version 4.8.6, specify 11.4.0.
  • For Version 4.8.5, specify 11.4.0.
  • For Version 4.8.4, specify 11.3.0.
AC_CHANNEL_VERSION Set the AC_CHANNEL_VERSION environment variable based on the version of Cloud Pak for Data that you plan to install.

To use the AC_CHANNEL_VERSION variable, you must uncomment the export command in the environment variables file.

Default value
There is no default value.
Valid values
Use the following list to determine the appropriate value:
  • For Version 4.8.7, specify v11.4.
  • For Version 4.8.6, specify v11.4.
  • For Version 4.8.5, specify v11.4.
  • For Version 4.8.4, specify v11.3.
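
For example, if you plan to install Version 4.8.7 of Cloud Pak for Data, the watsonx Orchestrate entries in the environment variables file might look like the following (the ibm-app-connect project name is only an illustration):

    export PROJECT_IBM_APP_CONNECT=ibm-app-connect
    export AC_CASE_VERSION=11.4.0
    export AC_CHANNEL_VERSION=v11.4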

What to do next

Now that you've set up your installation environment variables, you're ready to complete Preparing to run IBM Cloud Pak for Data installs from a private container registry.