Installing DataStage
A project administrator can install DataStage on IBM Cloud Pak® for Data.
- What permissions do you need to complete this task?
- The permissions that you need depend on which tasks you must complete:
  - To install the DataStage operators, you must have the appropriate permissions to create operators and you must be an administrator of the project where the Cloud Pak for Data operators are installed. This project is identified by the ${PROJECT_CPD_OPS} environment variable.
  - To install DataStage, you must be an administrator of the project where you will install DataStage. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
- When do you need to complete this task?
- If you didn't install DataStage when you installed the platform, you can complete this task to add DataStage to your environment.
If you want to install all of the Cloud Pak for Data components at the same time, follow the process in Installing the platform and services instead.
Important: All of the Cloud Pak for Data components in a deployment must be installed at the same release.
Information you need to complete this task
Review the following information before you install DataStage:
- Environment variables
- The commands in this task use environment variables so that you can run the commands exactly as written.
- If you don't have the script that defines the environment variables, see Setting up installation environment variables.
- To use the environment variables from the script, you must source the environment variables before you run the commands in this task. For example:
source ./cpd_vars.sh
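For reference, a minimal sketch of what such a script might contain, limited to the variables that this task uses. All values shown are placeholders, not recommendations; use the values for your own cluster.
# cpd_vars.sh -- example values only
export OCP_URL=https://api.cluster.example.com:6443   # OpenShift API server URL
export OCP_USERNAME=<username>                        # OpenShift username
export OCP_PASSWORD=<password>                        # OpenShift password
export PROJECT_CPD_OPS=cpd-operators                  # project for the Cloud Pak for Data operators
export PROJECT_CPD_INSTANCE=cpd-instance              # project for the Cloud Pak for Data control plane
export VERSION=<release>                              # Cloud Pak for Data release to install
export STG_CLASS_BLOCK=<block storage class>          # block storage class for DataStage
export STG_CLASS_FILE=<file storage class>            # file (RWX) storage class for DataStage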
- Security context constraint requirements
- DataStage uses the restricted security context constraint (SCC).
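Because DataStage uses the default restricted SCC, no custom SCC needs to be created. If you want to confirm that the SCC exists on your cluster, you can inspect it with a standard oc command (an optional check, not part of the installation):
oc describe scc restricted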
- Installation location
- DataStage must be installed in the same project (namespace) as the Cloud Pak for Data control plane. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
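As an optional sanity check before you install, you can confirm that the project exists and that the environment variable points at it:
oc get project ${PROJECT_CPD_INSTANCE}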
- Common core services
- DataStage requires the Cloud Pak for Data common core services. If the common core services are not installed in the project where you plan to install DataStage, the common core services are automatically installed when you install DataStage. This increases the amount of time the installation takes to complete.
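If you want to know in advance whether the common core services are already installed, one way is to look for the ccs custom resource in the instance project. This sketch assumes that the common core services operator registers a resource named ccs, which can vary by release:
oc get ccs -n ${PROJECT_CPD_INSTANCE}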
- Storage requirements
- You must tell DataStage what storage to use. If the recommended storage classes are not available on your cluster, ensure that you specify a storage class with an equivalent definition. In most cases, you pass the storage classes to the installation command through the ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} environment variables; Portworx is specified with the --storage_vendor option instead.
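To see which storage classes are available on your cluster before you choose values for ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE}, you can list them with a standard oc command:
oc get storageclass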
Before you begin
This task assumes that the following prerequisites are met:
Prerequisite | Where to find more information |
---|---|
The cluster meets the minimum requirements for installing DataStage. | If this task is not complete, see System requirements. |
The workstation from which you will run the installation is set up as a client workstation and includes the required command-line interfaces (the cpd-cli and the oc CLI, which the commands in this task use). | If this task is not complete, see Setting up a client workstation. |
The Cloud Pak for Data control plane is installed. | If this task is not complete, see Installing the platform and services. |
For environments that use a private container registry, such as air-gapped environments, the DataStage software images are mirrored to the private container registry. | If this task is not complete, see Mirroring images to a private container registry. |
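As a quick check that the client workstation is ready, you can confirm that both CLIs respond:
cpd-cli version
oc version --client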
Procedure
Complete the following tasks to install DataStage:
Logging in to the cluster
To run cpd-cli manage commands, you must log in to the cluster.
To log in to the cluster:
- Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
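After the login completes, you can optionally confirm which user and cluster you are logged in to:
oc whoami
oc whoami --show-server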
Specifying which edition to install
DataStage is available in two editions: DataStage Enterprise and DataStage Enterprise Plus. You must specify which edition to install.
- For DataStage Enterprise, run:
export DATASTAGE_TYPE=datastage_ent
- For DataStage Enterprise Plus, run:
export DATASTAGE_TYPE=datastage_ent_plus
Installing the operator
The DataStage operator simplifies the process of managing the DataStage service on Red Hat® OpenShift Container Platform.
To install DataStage, you must install the DataStage operator and create the Operator Lifecycle Manager (OLM) objects, such as the catalog source and subscription, for the operator.
- Who needs to complete this task?
- You must be a cluster administrator (or a user with the appropriate permissions to install operators) to create the OLM objects.
- When do you need to complete this task?
- Complete this task if the DataStage operator and other OLM artifacts have not been created for the current release.
If you complete this task and the OLM artifacts already exist on the cluster, the cpd-cli detects that you already have the OLM objects for the components at the specified release and does not attempt to create the OLM objects again.
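If you are not sure whether the OLM artifacts already exist, one way to check is to list the subscriptions and cluster service versions in the operators project. The grep filter is an assumption about how the objects are named; adjust it to match your cluster:
oc get subscription,csv -n ${PROJECT_CPD_OPS} | grep -i datastage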
To install the operator:
- Create the OLM objects for DataStage:
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_OPS} \
--components=${DATASTAGE_TYPE}
- If the command succeeds, it returns [SUCCESS]... The apply-olm command ran successfully.
- If the command fails, it returns [ERROR] and includes information about the cause of the failure.
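After apply-olm succeeds, you can optionally confirm that the DataStage operator pod is running in the operators project. The name filter is an assumption; match it against what you see on your cluster:
oc get pods -n ${PROJECT_CPD_OPS} | grep -i datastage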
What to do next: Install the DataStage service.
Installing the service
After the DataStage operator is installed, you can install DataStage.
- Who needs to complete this task?
- You must be an administrator of the project where you will install DataStage.
- When do you need to complete this task?
- Complete this task if you want to add DataStage to your environment.
To install the service:
- Create the custom resource for DataStage.
The command that you run depends on the storage on your cluster:
Red Hat OpenShift Data Foundation storage
Run the following command to create the custom resource.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
IBM Storage Scale Container Native storage
Run the following command to create the custom resource.
Remember: When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
Portworx storage
Run the following command to create the custom resource.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--storage_vendor=portworx \
--license_acceptance=true
NFS storage
Run the following command to create the custom resource.
Remember: When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
AWS with EFS storage only
Run the following command to create the custom resource.
Remember: When you use EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same RWX storage class.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
AWS with EFS and EBS storage
Run the following command to create the custom resource.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
IBM Cloud with IBM Cloud File Storage only
Run the following command to create the custom resource.
Remember: When you use IBM Cloud File Storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibmc-file-gold-gid or ibm-file-custom-gold-gid.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
IBM Cloud with IBM Cloud File Storage and IBM Cloud Block Storage
Run the following command to create the custom resource.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
NetApp Trident
Run the following command to create the custom resource.
Remember: When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
cpd-cli manage apply-cr \
--components=${DATASTAGE_TYPE} \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true
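The apply-cr command can take some time to complete. If you want to watch the DataStage pods start while the command runs, you can follow the pods in the instance project from a second terminal:
oc get pods -n ${PROJECT_CPD_INSTANCE} --watch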
Validating the installation
DataStage is installed when the apply-cr command returns [SUCCESS]... The apply-cr command ran successfully.
However, you can optionally run the cpd-cli manage get-cr-status command if you want to confirm that the custom resource status is Completed:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=${DATASTAGE_TYPE}
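If you prefer to inspect the custom resource directly, you can also query it with oc. This assumes that the DataStage operator registers a custom resource type named datastage; the exact resource name can vary by release:
oc get datastage -n ${PROJECT_CPD_INSTANCE}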