Installing Execution Engine for Apache Hadoop

A project administrator can install Execution Engine for Apache Hadoop on IBM Cloud Pak® for Data.

What permissions do you need to complete this task?
The permissions that you need depend on which tasks you must complete:
  • To install the Execution Engine for Apache Hadoop operators, you must have the appropriate permissions to create operators and you must be an administrator of the project where the Cloud Pak for Data operators are installed. This project is identified by the ${PROJECT_CPD_OPS} environment variable.
  • To install Execution Engine for Apache Hadoop, you must be an administrator of the project where you will install Execution Engine for Apache Hadoop. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
When do you need to complete this task?
If you didn't install Execution Engine for Apache Hadoop when you installed the platform, you can complete this task to add Execution Engine for Apache Hadoop to your environment.

If you want to install all of the Cloud Pak for Data components at the same time, follow the process in Installing the platform and services instead.

Important: All of the Cloud Pak for Data components in a deployment must be installed at the same release.

Information you need to complete this task

Review the following information before you install Execution Engine for Apache Hadoop:

Environment variables
The commands in this task use environment variables so that you can run the commands exactly as written.
  • If you don't have the script that defines the environment variables, see Setting up installation environment variables.
  • To use the environment variables from the script, you must source the environment variables before you run the commands in this task, for example:
    source ./cpd_vars.sh
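  For reference, the following is a minimal sketch of the variables that the commands in this task use. The project names and placeholder values shown are examples only; substitute the values for your environment.
    export OCP_URL=https://<openshift-api-server>:6443       # OpenShift API server URL
    export OCP_USERNAME=<openshift-username>                  # user with permission to complete this task
    export OCP_PASSWORD=<openshift-password>
    export PROJECT_CPD_OPS=cpd-operators                      # project where the Cloud Pak for Data operators are installed
    export PROJECT_CPD_INSTANCE=cpd-instance                  # project where the control plane and services are installed
    export VERSION=<cloud-pak-for-data-release>                # release to install; must match the rest of the deployment
    export STG_CLASS_BLOCK=<block-storage-class>               # block storage class on your cluster
    export STG_CLASS_FILE=<file-storage-class>                 # file storage class, for example ocs-storagecluster-cephfs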
Security context constraint requirements
Execution Engine for Apache Hadoop uses the restricted security context constraint (SCC).
Installation location
Execution Engine for Apache Hadoop must be installed in the same project (namespace) as the Cloud Pak for Data control plane. This project is identified by the ${PROJECT_CPD_INSTANCE} environment variable.
Common core services
Execution Engine for Apache Hadoop requires the Cloud Pak for Data common core services.

If the common core services are not installed in the project where you plan to install Execution Engine for Apache Hadoop, the common core services are automatically installed when you install Execution Engine for Apache Hadoop. This increases the amount of time the installation takes to complete.
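If you want to know in advance whether the common core services are already installed in the project, you can optionally query their custom resource status. This sketch assumes that ccs is the component name that cpd-cli uses for the common core services:

cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=ccs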

Storage requirements
You must tell Execution Engine for Apache Hadoop what storage to use. The following storage classes are recommended. However, if you don't use these storage classes on your cluster, ensure that you specify a storage class with an equivalent definition.
  • OpenShift® Data Foundation: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage class: ocs-storagecluster-cephfs
  • IBM® Storage Fusion: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage class: ibm-spectrum-scale-sc
  • IBM Storage Scale Container Native: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage class: ibm-spectrum-scale-sc
  • Portworx: When you install the service, the --storage_vendor=portworx option ensures that the service uses the correct storage classes. File storage class: portworx-rwx-gp3-sc (equivalent to portworx-shared-gp3 in older installations)
  • NFS: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage class: managed-nfs-storage
  • Amazon Elastic storage: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage is provided by Amazon Elastic File System. File storage class: efs-nfs-client
  • IBM Cloud storage: Not supported.
  • NetApp Trident: When you install the service, specify file storage. If you specify block storage, the service ignores this information. File storage class: ontap-nas
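To confirm which storage classes are available on your cluster, you can list them with the OpenShift CLI and verify that a class with a suitable file storage definition exists:

oc get storageclass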

Before you begin

This task assumes that the following prerequisites are met:

  • The cluster meets the minimum requirements for installing Execution Engine for Apache Hadoop. If this task is not complete, see System requirements.
  • The workstation from which you will run the installation is set up as a client workstation and includes the following command-line interfaces:
    • Cloud Pak for Data CLI: cpd-cli
    • OpenShift CLI: oc
    If this task is not complete, see Setting up a client workstation.
  • The Cloud Pak for Data control plane is installed. If this task is not complete, see Installing the platform and services.
  • For environments that use a private container registry, such as air-gapped environments, the Execution Engine for Apache Hadoop software images are mirrored to the private container registry. If this task is not complete, see Mirroring images to a private container registry.
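As a quick check that both command-line interfaces are available on the client workstation, you can print their versions:

cpd-cli version
oc version --client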

Prerequisite services

Before you install Execution Engine for Apache Hadoop, ensure that the following services are installed and running:

Procedure

Complete the following tasks to install Execution Engine for Apache Hadoop:

  1. Logging in to the cluster
  2. Installing the operator
  3. Installing the service
  4. Validating the installation
  5. What to do next

Logging in to the cluster

To run cpd-cli manage commands, you must log in to the cluster.

To log in to the cluster:

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
    Tip: The login-to-ocp command takes the same input as the oc login command. Run oc login --help for details.
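    After you log in, you can optionally confirm that you are authenticated as the expected user and that the target project is reachable:
    oc whoami
    oc project ${PROJECT_CPD_INSTANCE}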

Installing the operator

The Execution Engine for Apache Hadoop operator simplifies the process of managing the Execution Engine for Apache Hadoop service on Red Hat® OpenShift Container Platform.

To install Execution Engine for Apache Hadoop, you must install the Execution Engine for Apache Hadoop operator and create the Operator Lifecycle Manager (OLM) objects, such as the catalog source and subscription, for the operator.

Who needs to complete this task?
You must be a cluster administrator (or a user with the appropriate permissions to install operators) to create the OLM objects.
When do you need to complete this task?
Complete this task if the Execution Engine for Apache Hadoop operator and other OLM artifacts have not been created for the current release.

If you complete this task and the OLM artifacts already exist on the cluster, the cpd-cli detects that the OLM objects for the components already exist at the specified release and does not attempt to create them again.

To install the operator:

  1. Create the OLM objects for Execution Engine for Apache Hadoop:
    cpd-cli manage apply-olm \
    --release=${VERSION} \
    --cpd_operator_ns=${PROJECT_CPD_OPS} \
    --components=hee
    • If the command succeeds, it returns [SUCCESS]... The apply-olm command ran successfully.
    • If the command fails, it returns [ERROR] and includes information about the cause of the failure.
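    To optionally confirm that the OLM objects were created, you can list the subscriptions and cluster service versions in the operators project and look for an entry for the Execution Engine for Apache Hadoop operator (the exact operator name can vary by release):
    oc get subscriptions.operators.coreos.com -n ${PROJECT_CPD_OPS}
    oc get clusterserviceversions -n ${PROJECT_CPD_OPS}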

What to do next: Install the Execution Engine for Apache Hadoop service.

Installing the service

After the Execution Engine for Apache Hadoop operator is installed, you can install Execution Engine for Apache Hadoop.

Who needs to complete this task?
You must be an administrator of the project where you will install Execution Engine for Apache Hadoop.
When do you need to complete this task?
Complete this task if you want to add Execution Engine for Apache Hadoop to your environment.

To install the service:

  1. Create the custom resource for Execution Engine for Apache Hadoop.

    The command that you run depends on the storage on your cluster:


    Red Hat OpenShift Data Foundation storage

    Run the following command to create the custom resource.

    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    IBM Storage Fusion storage

    Run the following command to create the custom resource.

    Remember: When you use IBM Storage Fusion storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    IBM Storage Scale Container Native storage

    Run the following command to create the custom resource.

    Remember: When you use IBM Storage Scale Container Native storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically ibm-spectrum-scale-sc.
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    Portworx storage
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --storage_vendor=portworx \
    --license_acceptance=true

    NFS storage

    Run the following command to create the custom resource.

    Remember: When you use NFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class, typically managed-nfs-storage.
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    AWS with EFS storage only

    Run the following command to create the custom resource.

    Remember: When you use EFS storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same RWX storage class.
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    AWS with EFS and EBS storage

    Run the following command to create the custom resource.

    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    IBM Cloud with IBM Cloud File Storage and IBM Cloud Block Storage

    Run the following command to create the custom resource.

    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

    NetApp Trident

    Run the following command to create the custom resource.

    Remember: When you use NetApp Trident storage, both ${STG_CLASS_BLOCK} and ${STG_CLASS_FILE} point to the same storage class.
    cpd-cli manage apply-cr \
    --components=hee \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true

Validating the installation

Execution Engine for Apache Hadoop is installed when the apply-cr command returns [SUCCESS]... The apply-cr command ran successfully.

However, you can optionally run the cpd-cli manage get-cr-status command if you want to confirm that the custom resource status is Completed:

cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
--components=hee
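You can also spot-check the service pods in the instance project. This sketch assumes that the Execution Engine for Apache Hadoop pods include hadoop in their names, which can vary by release:

oc get pods -n ${PROJECT_CPD_INSTANCE} | grep -i hadoop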

What to do next

You must complete the tasks specified in Post-installation setup before users can access Execution Engine for Apache Hadoop.