Installing Watson Machine Learning Accelerator
A project administrator can install Watson Machine Learning Accelerator on IBM Cloud Pak® for Data.
- What permissions do you need to complete this task?
- The permissions that you need depend on which tasks you must complete:
  - To install the Watson Machine Learning Accelerator operators, you must have the appropriate permissions to create operators and you must be an administrator of the project where the Cloud Pak for Data operators are installed. This project is identified by the ${PROJECT_CPD_OPS} environment variable.
  - To install Watson Machine Learning Accelerator, you must be an administrator of the project where you will install Watson Machine Learning Accelerator. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
  - To provision the Watson Machine Learning Accelerator service instance to a tethered project, you must be an administrator of the tethered project. The project is identified by the ${PROJECT_TETHERED} environment variable.
- When do you need to complete this task?
- If you didn't install Watson Machine Learning Accelerator when you installed the platform, you can complete
this task to add Watson Machine Learning Accelerator to your environment.
If you want to install all of the Cloud Pak for Data components at the same time, follow the process in Installing the platform and services instead.
Important: All of the Cloud Pak for Data components in a deployment must be installed at the same release.
Information you need to complete this task
Review the following information before you install Watson Machine Learning Accelerator:
- Environment variables
- The commands in this task use environment variables so that you can run the commands exactly as written.
  - If you don't have the script that defines the environment variables, see Setting up installation environment variables.
  - To use the environment variables from the script, you must source the environment variables before you run the commands in this task, for example:

    source ./cpd_vars.sh
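  If you are creating the variables script yourself, the following is a minimal hypothetical sketch that covers only the variables used in this task. All values are placeholders; the authoritative list is in Setting up installation environment variables.

  ```
  #!/usr/bin/env bash
  # Hypothetical placeholder values; replace every value with the details for your environment.
  export VERSION=<release>                        # Cloud Pak for Data release that you are installing
  export PROJECT_CPD_OPS=<operators-project>      # project where the Cloud Pak for Data operators are installed
  export PROJECT_CPD_INSTANCE=<instance-project>  # project where the Cloud Pak for Data control plane is installed
  export PROJECT_TETHERED=<tethered-project>      # optional tethered project for the service instance
  export STG_CLASS_BLOCK=<block-storage-class>    # block storage class
  export STG_CLASS_FILE=<file-storage-class>      # file storage class
  export OCP_URL=<openshift-api-url>              # OpenShift API server URL
  export OCP_USERNAME=<openshift-username>        # OpenShift user name
  export OCP_PASSWORD=<openshift-password>        # OpenShift password
  ```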
- Security context constraint requirements
- Watson Machine Learning Accelerator uses the restricted security context constraint (SCC).
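  If you want to review the definition of the restricted SCC on your cluster, you can describe it with the OpenShift CLI (an optional check):

  ```
  oc describe scc restricted
  ```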
- Installation location
- Watson Machine Learning Accelerator is installed in the same project (namespace) as the Cloud Pak for Data control plane. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
  When you install Watson Machine Learning Accelerator, you can optionally deploy the Watson Machine Learning Accelerator service instance in a tethered project. The tethered project is identified by the ${PROJECT_TETHERED} environment variable.
- Storage requirements
- You must tell Watson Machine Learning Accelerator what storage to use. The supported storage options are listed under Installing the service later in this task. If you don't use the recommended storage classes on your cluster, ensure that you specify a storage class with an equivalent definition.
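  To see which storage classes are available on your cluster before you set ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE}, you can list them with the OpenShift CLI:

  ```
  oc get storageclass
  ```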
Before you begin
This task assumes that the following prerequisites are met:
Prerequisite | Where to find more information |
---|---|
The cluster meets the minimum requirements for installing Watson Machine Learning Accelerator. | If this task is not complete, see System requirements. |
The workstation from which you will run the installation is set up as a client workstation and includes the following command-line interfaces: the Cloud Pak for Data CLI (cpd-cli) and the OpenShift CLI (oc). | If this task is not complete, see Setting up a client workstation. |
The Cloud Pak for Data control plane is installed. | If this task is not complete, see Installing the platform and services. |
The project where you plan to deploy the Watson Machine Learning Accelerator service instance exists or you have the appropriate permissions to create projects. | If this task is not complete, see Setting up projects (namespaces). |
For environments that use a private container registry, such as air-gapped environments, the Watson Machine Learning Accelerator software images are mirrored to the private container registry. | If this task is not complete, see Mirroring images to a private container registry. |
The node settings are adjusted for Watson Machine Learning Accelerator. | If this task is not complete, see Changing required node settings. |
Prerequisite services
Before you install Watson Machine Learning Accelerator, ensure that the following services are installed and running:
- You must install the scheduling service. See Shared cluster components.
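One quick way to confirm that the scheduling service pods are running is to list the pods in the project where the scheduling service is installed. The cpd-scheduler project name below is a hypothetical placeholder for that project:

```
oc get pods -n cpd-scheduler
```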
Prerequisite operators
Install the NVIDIA GPU Operator version that corresponds to your Red Hat OpenShift Container Platform version:
- x86-64
  - On OpenShift 4.8, use NVIDIA GPU Operator 1.9 or 1.10.
  - On OpenShift 4.10, use NVIDIA GPU Operator v22.9.0, v22.9.1, or v22.9.2.
  - On OpenShift 4.12, use NVIDIA GPU Operator v22.9.2.
For more information on deploying the NVIDIA GPU Operator on a cluster connected to the internet, see: Installing the NVIDIA GPU Operator on OpenShift. To install the NVIDIA GPU Operator on an air-gapped cluster, see: Deploy GPU Operators in a disconnected or airgapped environment.
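One way to check which NVIDIA GPU Operator version is installed is to list its ClusterServiceVersion; this assumes the operator is deployed in the default nvidia-gpu-operator namespace:

```
oc get csv -n nvidia-gpu-operator
```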
Procedure
Complete the following tasks to install Watson Machine Learning Accelerator:
Logging in to the cluster
To run cpd-cli manage commands, you must log in to the cluster.
To log in to the cluster:
- Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:

  ```
  cpd-cli manage login-to-ocp \
  --username=${OCP_USERNAME} \
  --password=${OCP_PASSWORD} \
  --server=${OCP_URL}
  ```

  Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
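Optionally, you can confirm that the login succeeded by checking the authenticated user with the OpenShift CLI (an extra check that is not required by this task):

```
oc whoami
```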
Installing the operator
The Watson Machine Learning Accelerator operator simplifies the process of managing the Watson Machine Learning Accelerator service on Red Hat® OpenShift Container Platform.
To install Watson Machine Learning Accelerator, you must install the Watson Machine Learning Accelerator operator and create the Operator Lifecycle Manager (OLM) objects, such as the catalog source and subscription, for the operator.
- Who needs to complete this task?
- You must be a cluster administrator (or a user with the appropriate permissions to install operators) to create the OLM objects.
- When do you need to complete this task?
- Complete this task if the Watson Machine Learning Accelerator operator and other OLM artifacts have not been created for the
current release.
If you complete this task and the OLM artifacts already exist on the cluster, the cpd-cli detects that you already have the OLM objects for the components at the specified release and does not attempt to create the OLM objects again.
To install the operator:
- Create the OLM objects for Watson Machine Learning Accelerator:

  ```
  cpd-cli manage apply-olm \
  --release=${VERSION} \
  --cpd_operator_ns=${PROJECT_CPD_OPS} \
  --components=wml_accelerator
  ```

  - If the command succeeds, it returns [SUCCESS]... The apply-olm command ran successfully.
  - If the command fails, it returns [ERROR] and includes information about the cause of the failure.
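If you want to confirm that the OLM objects exist, one general-purpose check (not specific to cpd-cli) is to list the subscriptions and ClusterServiceVersions in the operators project:

```
oc get subscriptions,csv -n ${PROJECT_CPD_OPS}
```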
What to do next: Install the Watson Machine Learning Accelerator service.
Specifying additional installation options
You can optionally specify the following settings after you install Watson Machine Learning Accelerator by modifying the custom resource (CR) accordingly.
- Service replicas
- Configure the serviceReplicas setting by setting the replica value to 1 or greater. This setting controls the number of pods that are used by core services in Watson Machine Learning Accelerator.
  - To disable multiple service replicas, set this value to 1.
  - To enable multiple service replicas, set this value to 2 or greater.
  To set this option, modify the CR:
  - If the service is installed in the same project as the control plane, run:

    ```
    cpd-cli manage update-cr \
    --component=wml_accelerator_instance \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --patch='{\"serviceReplicas\":2}'
    ```

  - If the service is installed in a tethered project, run:

    ```
    cpd-cli manage update-cr \
    --component=wml_accelerator_instance \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --tethered_instance_ns=${PROJECT_TETHERED} \
    --patch='{\"serviceReplicas\":2}'
    ```

  To verify that the update was completed, run:

  ```
  cpd-cli manage get-cr-status \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --components=wml_accelerator_instance
  ```
- Scaling configuration
- Configure the scaleConfig setting. By default, the service uses a small deployment. This value can be set to small, medium, or large. For details, see Scaling services.
  To set this option, modify the CR:
  - If the service is installed in the same project as the control plane, run:

    ```
    cpd-cli manage update-cr \
    --component=wml_accelerator_instance \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --patch='{\"scaleConfig\":\"medium\"}'
    ```

  - If the service is installed in a tethered project, run:

    ```
    cpd-cli manage update-cr \
    --component=wml_accelerator_instance \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --tethered_instance_ns=${PROJECT_TETHERED} \
    --patch='{\"scaleConfig\":\"medium\"}'
    ```

  To verify that the update was completed, run:

  ```
  cpd-cli manage get-cr-status \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --components=wml_accelerator_instance
  ```
- Egress network policy
- Configure the egressPolicy setting in the service CR. By default, the EgressNetworkPolicy is disabled. For example:

  ```
  egressPolicy:
    enableEgressWmlaNamespace: true
    enableEgressWorkerPlaneNamespace: true
    allowCidrSelectors:
      - "10.254.0.0/16"
    allowDnsNames:
      - "www.ibm.com"
  ```

  - Configure the enableEgressWmlaNamespace setting by setting the value to true for the EgressNetworkPolicy to be created in the namespace where Watson Machine Learning Accelerator is installed. By default, this value is set to false and no egress firewall policy is defined. When enableEgressWmlaNamespace is enabled, the network policy allows pods in the namespace to send outbound traffic to pods in all namespaces within the cluster but blocks any external communication.

    enableEgressWmlaNamespace: true

  - Configure the enableEgressWorkerPlaneNamespace setting if you have set workerPlaneNamespace and want the EgressNetworkPolicy to be created in the worker plane namespace. By default, this value is set to false. When enableEgressWorkerPlaneNamespace is enabled, the network policy allows pods in the worker plane namespace to send outbound traffic to pods in all namespaces within the cluster but blocks any external communication.

    enableEgressWorkerPlaneNamespace: true

  - For the egress network policy to be enabled, you must also set allowCidrSelectors. By default, this setting is empty. To obtain the value for this setting, run the oc describe network.config/cluster | grep Cidr command. For example:

    allowCidrSelectors:
      - "10.254.0.0/16"

  - Optionally, you can provide a list of DNS names that pods are allowed to send outbound traffic to. For example:

    allowDnsNames:
      - "www.ibm.com"
Installing the service
After the Watson Machine Learning Accelerator operator is installed, you can install Watson Machine Learning Accelerator.
- Who needs to complete this task?
- You must be an administrator of the project where you will install Watson Machine Learning Accelerator.
- When do you need to complete this task?
- Complete this task if you want to add Watson Machine Learning Accelerator to your environment.
To install the service:
- Create the custom resource for Watson Machine Learning Accelerator.
The command that you run depends on the storage on your cluster:
Red Hat OpenShift Data Foundation storage
Run the following command to create the custom resource.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
IBM Storage Fusion storage
Run the following command to create the custom resource.
Remember: When you use IBM Storage Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
IBM Storage Scale Container Native storage
Run the following command to create the custom resource.
Remember: When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
Portworx storage
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --storage_vendor=portworx \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --storage_vendor=portworx \
  --license_acceptance=true
  ```
NFS storage
Run the following command to create the custom resource.
Remember: When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
AWS with EFS storage only
Run the following command to create the custom resource.
Remember: When you use EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same RWX storage class.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
NetApp Trident
Run the following command to create the custom resource.
Remember: When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
- To install the service and the service instance in the same project as the control plane, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```

- To install the service instance in a tethered project, run:

  ```
  cpd-cli manage apply-cr \
  --components=wml_accelerator,wml_accelerator_instance \
  --release=${VERSION} \
  --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
  --tethered_instance_ns=${PROJECT_TETHERED} \
  --block_storage_class=${STG_CLASS_BLOCK} \
  --file_storage_class=${STG_CLASS_FILE} \
  --license_acceptance=true
  ```
Verifying options
- Service replicas
- To verify the service replicas option:
- If the service instance is in the same project as the control plane, run:
oc get wmla -o jsonpath='{.items[0].spec.serviceReplicas} {"\n"}' -n ${PROJECT_CPD_INSTANCE}
- If the service instance is in a tethered project, run:
oc get wmla -o jsonpath='{.items[0].spec.serviceReplicas} {"\n"}' -n ${PROJECT_TETHERED}
- Scaling configuration
- To verify the scaling configuration option:
- If the service instance is in the same project as the control plane, run:
oc get wmla -o jsonpath='{.items[0].spec.scaleConfig} {"\n"}' -n ${PROJECT_CPD_INSTANCE}
- If the service instance is in a tethered project, run:
oc get wmla -o jsonpath='{.items[0].spec.scaleConfig} {"\n"}' -n ${PROJECT_TETHERED}
Validating the installation
Watson Machine Learning Accelerator is installed when the apply-cr command returns [SUCCESS]... The apply-cr command ran successfully.
However, you can optionally run the cpd-cli manage get-cr-status command if you want to confirm that the custom resource status is Completed.
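The following invocation is a sketch that mirrors the get-cr-status commands used earlier in this task; it assumes that get-cr-status accepts the same comma-separated --components list as apply-cr:

```
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=wml_accelerator,wml_accelerator_instance
```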
Verifying conda
To verify the conda setup, check that the conda_synced directory exists in the wmla-conda pod:

  oc rsh wmla-conda-* find /var/shareDir/dli/work/conda_synced

If the directory does not exist, the command returns No such file or directory.
What to do next
Complete the post-installation setup for Watson Machine Learning Accelerator. See: Post-installation setup for Watson Machine Learning Accelerator