Upgrading Execution Engine for Apache Hadoop from Version 4.6.x to a later 4.6 refresh
A project administrator can upgrade Execution Engine for Apache Hadoop from one Cloud Pak for Data Version 4.6 refresh to a later 4.6 refresh.
- What permissions do you need to complete this task?
- The permissions that you need depend on which tasks you must complete:
  - To update the Execution Engine for Apache Hadoop operators, you must have the appropriate permissions to create operators and you must be an administrator of the project where the Cloud Pak for Data operators are installed. This project is identified by the ${PROJECT_CPD_OPS} environment variable.
  - To upgrade Execution Engine for Apache Hadoop, you must be an administrator of the project where Execution Engine for Apache Hadoop is installed. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
- When do you need to complete this task?
- If you didn't upgrade Execution Engine for Apache Hadoop when you upgraded the platform, you can complete this task to upgrade your existing Execution Engine for Apache Hadoop installation.
If you want to upgrade all of the Cloud Pak for Data components at the same time, follow the process in Upgrading the platform and services instead.
Important: All of the Cloud Pak for Data components in a deployment must be installed at the same release.
Information you need to complete this task
Review the following information before you upgrade Execution Engine for Apache Hadoop:
- Environment variables
- The commands in this task use environment variables so that you can run the commands exactly as written.
- If you don't have the script that defines the environment variables, see Setting up installation environment variables.
- To use the environment variables from the script, you must source the environment variables before you run the commands in this task, for example:
source ./cpd_vars.sh
- Installation location
- Execution Engine for Apache Hadoop is installed in the same project (namespace) as the Cloud Pak for Data control plane. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
- Common core services
- Execution Engine for Apache Hadoop requires the Cloud Pak for Data common core services.
If the common core services are not at the required version for the release, they are automatically upgraded when you upgrade Execution Engine for Apache Hadoop. This increases the amount of time that the upgrade takes to complete.
- Storage requirements
- You must tell Execution Engine for Apache Hadoop what storage you use in your existing installation. You cannot change the storage that is associated with Execution Engine for Apache Hadoop during an upgrade. Ensure that the environment variables point to the correct storage classes for your environment, as in the pre-flight sketch after this list.
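For a quick pre-flight check before you start, the following sketch sources the variables script and verifies that the storage class variables resolve to storage classes that exist on the cluster. This is a minimal sketch, assuming your script is named cpd_vars.sh and defines the variables that this task uses; it relies only on standard shell features and oc commands:
#!/usr/bin/env bash
# Pre-flight sketch: confirm that the installation environment variables
# are set and that the storage class variables name real storage classes.
# Assumes cpd_vars.sh defines the variables used throughout this task.
source ./cpd_vars.sh

for var in VERSION PROJECT_CPD_OPS PROJECT_CPD_INSTANCE \
           STG_CLASS_BLOCK STG_CLASS_FILE; do
  if [ -z "${!var}" ]; then
    echo "Variable ${var} is not set" >&2
  fi
done

# Fails with NotFound if either storage class does not exist.
oc get storageclass "${STG_CLASS_BLOCK}" "${STG_CLASS_FILE}"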
Before you begin
This task assumes that the following prerequisites are met:
| Prerequisite | Where to find more information |
| --- | --- |
| The cluster meets the minimum requirements for Execution Engine for Apache Hadoop. | If this task is not complete, see System requirements. |
| The workstation from which you will run the upgrade is set up as a client workstation and the cpd-cli has the latest version of the olm-utils-play image. | If this task is not complete, see Updating client workstations. |
| The Cloud Pak for Data control plane is upgraded. | If this task is not complete, see Upgrading the platform and services. |
| For environments that use a private container registry, such as air-gapped environments, the Execution Engine for Apache Hadoop software images are mirrored to the private container registry. | If this task is not complete, see Mirroring images to a private container registry. |
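To confirm the control plane prerequisite from the command line, you can check the version reported by the ZenService custom resource. A minimal sketch, assuming the default ZenService resource name lite-cr; verify the resource name on your cluster before you rely on it:
# Report the installed Cloud Pak for Data control plane version.
# Assumes the default ZenService custom resource name, lite-cr.
oc get ZenService lite-cr \
--namespace=${PROJECT_CPD_INSTANCE} \
-o jsonpath='{.status.currentVersion}{"\n"}'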
Prerequisite services
Before you upgrade Execution Engine for Apache Hadoop, ensure that the following services are upgraded and running:
- Watson™ Studio
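To confirm that Watson Studio is upgraded and running before you proceed, you can check its custom resource status with the same get-cr-status command that this task uses later. The ws component identifier is an assumption; confirm it against the component list for your release:
# Check that the Watson Studio custom resource reports Completed.
# The component identifier (ws) is an assumption; verify it for your release.
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=ws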
Procedure
Complete the following tasks to upgrade Execution Engine for Apache Hadoop:
Logging in to the cluster
To run cpd-cli manage commands, you must log in to the cluster.
To log in to the cluster:
- Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
cpd-cli manage login-to-ocp \
--username=${OCP_USERNAME} \
--password=${OCP_PASSWORD} \
--server=${OCP_URL}
Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
Updating the operator
The Execution Engine for Apache Hadoop operator simplifies the process of managing the Execution Engine for Apache Hadoop service on Red Hat® OpenShift® Container Platform.
To upgrade Execution Engine for Apache Hadoop, ensure that all of the Operator Lifecycle Manager (OLM) objects in the ${PROJECT_CPD_OPS} project, such as the catalog sources and subscriptions, are upgraded to the appropriate release. All of the OLM objects must be at the same release.
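If you want to see which OLM objects are currently in the operators project before you update them, a read-only check with standard oc commands looks like the following sketch:
# List the catalog sources, subscriptions, and cluster service versions
# (CSVs) in the operators project. This check is read-only.
oc get catalogsource,subscription,clusterserviceversion \
--namespace=${PROJECT_CPD_OPS}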
- Who needs to complete this task?
- You must be a cluster administrator (or a user with the appropriate permissions to install operators) to create the OLM objects.
- When do you need to complete this task?
- Complete this task only if the OLM artifacts have not been updated for the current release using the cpd-cli manage apply-olm command with the --upgrade=true option.
It is not necessary to run this command multiple times for each service that you plan to upgrade. If you complete this task and the OLM artifacts already exist on the cluster, the cpd-cli will recreate the OLM objects for all of the existing components in the ${PROJECT_CPD_OPS} project.
To update the operator:
- Update the OLM objects:
cpd-cli manage apply-olm \
--release=${VERSION} \
--cpd_operator_ns=${PROJECT_CPD_OPS} \
--upgrade=true
- If the command succeeds, it returns [SUCCESS]... The apply-olm command ran successfully.
- If the command fails, it returns [ERROR] and includes information about the cause of the failure.
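As an optional extra check after apply-olm succeeds, you can confirm that each operator's cluster service version (CSV) reached the Succeeded phase. The columns below use standard oc custom-columns syntax:
# Confirm that each operator CSV reports the Succeeded phase.
oc get clusterserviceversion \
--namespace=${PROJECT_CPD_OPS} \
-o custom-columns='NAME:.metadata.name,PHASE:.status.phase'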
What to do next: Upgrade the Execution Engine for Apache Hadoop service.
Upgrading the service
After the Execution Engine for Apache Hadoop operator is updated, you can upgrade Execution Engine for Apache Hadoop.
- Who needs to complete this task?
- You must be an administrator of the project where Execution Engine for Apache Hadoop is installed.
- When do you need to complete this task?
- Complete this task for each instance of Execution Engine for Apache Hadoop that is associated with an instance of Cloud Pak for Data Version 4.6.
To upgrade the service:
- Update the custom resource for Execution Engine for Apache Hadoop.
The command that you run depends on the storage on your cluster:
Red Hat OpenShift Data Foundation storage
Run the following command to update the custom resource.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
IBM Storage Fusion storage
Run the following command to update the custom resource.
Remember: When you use IBM Storage Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
IBM Storage Scale Container Native storage
Run the following command to update the custom resource.
Remember: When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
Portworx storage
Run the following command to update the custom resource.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--storage_vendor=portworx \
--license_acceptance=true \
--upgrade=true
NFS storage
Run the following command to update the custom resource.
Remember: When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
AWS with EFS storage only
Run the following command to update the custom resource.
Remember: When you use EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same RWX storage class.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
AWS with EFS and EBS storage
Run the following command to update the custom resource.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
IBM Cloud with IBM Cloud File Storage and IBM Cloud Block Storage
Run the following command to update the custom resource.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
NetApp Trident
Run the following command to update the custom resource.
Remember: When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
cpd-cli manage apply-cr \
--components=hee \
--release=${VERSION} \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--block_storage_class=${STG_CLASS_BLOCK} \
--file_storage_class=${STG_CLASS_FILE} \
--license_acceptance=true \
--upgrade=true
Validating the upgrade
Execution Engine for Apache Hadoop is upgraded when the apply-cr command returns [SUCCESS]... The apply-cr command ran successfully.
However, you can optionally run the cpd-cli manage get-cr-status command if you want to confirm that the custom resource status is Completed:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=hee
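If you prefer to inspect the custom resource directly with oc, a sketch follows. The custom resource kind (Hee) and the status field shown are assumptions based on the hee component name; confirm them against the CRDs installed on your cluster (for example, with oc get crd | grep hee) before you rely on them:
# A minimal sketch: query the service custom resource directly.
# The kind (Hee) and status path (.status.heeStatus) are assumptions.
oc get Hee --namespace=${PROJECT_CPD_INSTANCE} \
-o custom-columns='NAME:.metadata.name,STATUS:.status.heeStatus'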
What to do next
You must complete the following tasks in order before users can access Execution Engine for Apache Hadoop:
- Complete the post-upgrade tasks for the service.
- To get started with Execution Engine for Apache Hadoop, see Analyzing Apache Hadoop data.