Upgrading Analytics Engine powered by Apache Spark from Version 5.1.x to a later 5.1 refresh
An instance administrator can upgrade Analytics Engine powered by Apache Spark from Version 5.1.x to a later 5.1 refresh.
- Who needs to complete this task?

  Instance administrator. To upgrade Analytics Engine powered by Apache Spark, you must be an instance administrator. An instance administrator has permission to manage software in the following projects:
  - The operators project for the instance

    The operators for this instance of Analytics Engine powered by Apache Spark are installed in the operators project. In the upgrade commands, the ${PROJECT_CPD_INST_OPERATORS} environment variable refers to the operators project.
  - The operands project for the instance

    The custom resources for the control plane and Analytics Engine powered by Apache Spark are installed in the operands project. In the upgrade commands, the ${PROJECT_CPD_INST_OPERANDS} environment variable refers to the operands project.
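  Optionally, you can confirm that these environment variables point at existing projects on the cluster. The following is a minimal sketch, assuming you have already sourced your environment variables script and are logged in to the cluster with the OpenShift CLI:

  # Optional sanity check: confirm that the operators and operands projects exist.
  # Assumes the environment variables script has been sourced and you are logged in with oc.
  oc get namespace "${PROJECT_CPD_INST_OPERATORS}" "${PROJECT_CPD_INST_OPERANDS}"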
- When do you need to complete this task?

  Review the following options to determine whether you need to complete this task:
  - If you want to upgrade the IBM Software Hub control plane and one or more services at the same time, follow the process in Upgrading an instance of IBM Software Hub instead.
  - If you didn't upgrade Analytics Engine powered by Apache Spark when you upgraded the IBM Software Hub control plane, complete this task to upgrade Analytics Engine powered by Apache Spark.

  Repeat as needed: If you are responsible for multiple instances of IBM Software Hub, you can repeat this task to upgrade more instances of Analytics Engine powered by Apache Spark on the cluster.
Information you need to complete this task
Review the following information before you upgrade Analytics Engine powered by Apache Spark:
- Version requirements

  All the components that are associated with an instance of IBM Software Hub must be installed at the same release. For example, if the IBM Software Hub control plane is at Version 5.1.3, you must upgrade Analytics Engine powered by Apache Spark to Version 5.1.3.
- Environment variables

  The commands in this task use environment variables so that you can run the commands exactly as written.
  - If you don't have the script that defines the environment variables, see Setting up installation environment variables.
  - To use the environment variables from the script, you must source the environment variables before you run the commands in this task. For example, run:

    source ./cpd_vars.sh
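  For reference, the following is a minimal sketch of the variables that this task relies on. The values are placeholders only; your actual cpd_vars.sh is generated as described in Setting up installation environment variables and typically defines many more variables, so do not replace it with this fragment.

  # Sketch only: placeholder values for the environment variables used in this task.
  export PROJECT_CPD_INST_OPERATORS=cpd-operators   # operators project for the instance (placeholder)
  export PROJECT_CPD_INST_OPERANDS=cpd-instance     # operands project for the instance (placeholder)
  export VERSION=5.1.3                              # target IBM Software Hub refresh (example value)
  export CPD_PROFILE_NAME=cpd-profile               # cpd-cli profile used for service-instance commands (placeholder)
  # The script also defines CPDM_OC_LOGIN, an alias for the cpd-cli manage login-to-ocp command.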
Before you begin
This task assumes that the following prerequisites are met:
| Prerequisite | Where to find more information |
|---|---|
| The cluster meets the minimum requirements for Analytics Engine powered by Apache Spark. | If this task is not complete, see System requirements. |
| The workstation from which you will run the upgrade is set up as a client workstation and includes the required command-line interfaces. | If this task is not complete, see Updating client workstations. |
| The IBM Software Hub control plane is upgraded. | If this task is not complete, see Upgrading an instance of IBM Software Hub. |
| For environments that use a private container registry, such as air-gapped environments, the Analytics Engine powered by Apache Spark software images are mirrored to the private container registry. | If this task is not complete, see Mirroring images to a private container registry. |
| For environments that use a private container registry, such as air-gapped environments, the cpd-cli is configured to pull the olm-utils-v3 image from the private container registry. | If this task is not complete, see Pulling the olm-utils-v3 image from the private container registry. |
Procedure
Complete the following tasks to upgrade Analytics Engine powered by Apache Spark:
- Review the Analytics Engine powered by Apache Spark parameters and specify any installation options that you want to override.
- Upgrade the service.
- Validate the upgrade.
- Upgrade existing service instances.
Analytics Engine powered by Apache Spark parameters
If you plan to install Analytics Engine powered by Apache Spark, you can specify the following installation options in a file named install-options.yml in the cpd-cli work directory (for example, cpd-cli-workspace/olm-utils-workspace/work).
The parameters are optional. If you do not set these installation parameters, the default values are used. Uncomment the parameters that you want to override and update the values appropriately.
The sample YAML content uses the default values.
################################################################################
# Analytics Engine powered by Apache Spark parameters
################################################################################
# ------------------------------------------------------------------------------
# Analytics Engine powered by Apache Spark service configuration parameters
# ------------------------------------------------------------------------------
#analyticsengine_spark_adv_enabled: true
#analyticsengine_job_auto_delete_enabled: true
#analyticsengine_kernel_cull_time: 30
#analyticsengine_image_pull_parallelism: "40"
#analyticsengine_image_pull_completions: "20"
#analyticsengine_kernel_cleanup_schedule: "*/30 * * * *"
#analyticsengine_job_cleanup_schedule: "*/30 * * * *"
#analyticsengine_skip_selinux_relabeling: false
#analyticsengine_mount_customizations_from_cchome: false
# ------------------------------------------------------------------------------
# Spark runtime configuration parameters
# ------------------------------------------------------------------------------
#analyticsengine_max_driver_cpu_cores: 5 # The number of CPUs to allocate to the Spark jobs driver. The default is 5.
#analyticsengine_max_executor_cpu_cores: 5 # The number of CPUs to allocate to the Spark jobs executor. The default is 5.
#analyticsengine_max_driver_memory: "50g" # The amount of memory, in gigabytes, to allocate to the driver. The default is 50g.
#analyticsengine_max_executor_memory: "50g" # The amount of memory, in gigabytes, to allocate to the executor. The default is 50g.
#analyticsengine_max_num_workers: 50 # The number of workers (also called executors) to allocate to Spark jobs. The default is 50.
#analyticsengine_local_dir_scale_factor: 10 # The number that is used to calculate the temporary disk size on Spark nodes. The formula is temp_disk_size = number_of_cpu * local_dir_scale_factor. The default is 10.
- Analytics Engine powered by Apache Spark service configuration parameters

  The service configuration parameters determine how the Analytics Engine powered by Apache Spark service behaves.

| Property | Description | Default value | Valid values |
|---|---|---|---|
| analyticsengine_spark_adv_enabled | Specify whether to display the job UI. | true | false - Do not display the job UI. true - Display the job UI. |
| analyticsengine_job_auto_delete_enabled | Specify whether to automatically delete jobs after they reach a terminal state, such as FINISHED or FAILED. | true | true - Delete jobs after they reach a terminal state. false - Retain jobs after they reach a terminal state. |
| analyticsengine_kernel_cull_time | The amount of time, in minutes, that idle kernels are kept. | 30 | An integer greater than 0. |
| analyticsengine_image_pull_parallelism | The number of pods that are scheduled to pull the Spark image in parallel. For example, if you have 100 nodes in the cluster, set analyticsengine_image_pull_completions: "100" and analyticsengine_image_pull_parallelism: "150". In this example, at least 100 nodes pull the image successfully, with 150 pods pulling the image in parallel. | "40" | An integer greater than or equal to 1. Increase this value only if you have a very large cluster and sufficient network bandwidth and disk I/O to support more pulls in parallel. |
| analyticsengine_image_pull_completions | The number of pods that must complete for the image pull job to be considered complete. For example, if you have 100 nodes in the cluster, set analyticsengine_image_pull_completions: "100" and analyticsengine_image_pull_parallelism: "150". In this example, at least 100 nodes pull the image successfully, with 150 pods pulling the image in parallel. | "20" | An integer greater than or equal to 1. Increase this value only if you have a very large cluster and sufficient network bandwidth and disk I/O to support more pulls in parallel. |
| analyticsengine_kernel_cleanup_schedule | Override the analyticsengine_kernel_cull_time setting for the kernel cleanup CronJob. By default, the kernel cleanup CronJob runs every 30 minutes. | "*/30 * * * *" | A string that uses the CronJob schedule syntax. |
| analyticsengine_job_cleanup_schedule | Override the analyticsengine_kernel_cull_time setting for the job cleanup CronJob. By default, the job cleanup CronJob runs every 30 minutes. | "*/30 * * * *" | A string that uses the CronJob schedule syntax. |
| analyticsengine_skip_selinux_relabeling | Specify whether to skip the SELinux relabeling. To use this feature, you must create the required MachineConfig and RuntimeClass definitions. For more information, see Enabling MachineConfig and RuntimeClass definitions for certain properties. | false | false - Do not skip the SELinux relabeling. true - Skip the SELinux relabeling. |
| analyticsengine_mount_customizations_from_cchome | Specify whether you want to enable custom drivers. These drivers must be mounted from the cc-home-pvc directory. This feature is available only when the Cloud Pak for Data common core services are installed. | false | false - Do not use custom drivers. true - Enable custom drivers. |
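  For example, if you want both cleanup CronJobs to run hourly instead of every 30 minutes, you could set the schedule parameters in install-options.yml. The following bash sketch appends illustrative overrides to the file; the path and the cron expressions are assumptions, so adjust them to your own work directory and cleanup policy (equivalently, uncomment and edit these keys in the sample shown earlier):

  # Sketch: run both cleanup CronJobs at the top of every hour instead of every 30 minutes.
  # Adjust the path to your own cpd-cli work directory before running this.
  cat >> cpd-cli-workspace/olm-utils-workspace/work/install-options.yml <<'EOF'
  analyticsengine_kernel_cleanup_schedule: "0 * * * *"
  analyticsengine_job_cleanup_schedule: "0 * * * *"
  EOF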
- Spark runtime configuration parameters

  The runtime configuration parameters determine how the Spark runtimes generated by the Analytics Engine powered by Apache Spark service behave.

| Property | Description | Default value | Valid values |
|---|---|---|---|
| analyticsengine_max_driver_cpu_cores | The number of CPUs to allocate to the Spark jobs driver. | 5 | An integer greater than or equal to 1. |
| analyticsengine_max_executor_cpu_cores | The number of CPUs to allocate to the Spark jobs executor. | 5 | An integer greater than or equal to 1. |
| analyticsengine_max_driver_memory | The amount of memory, in gigabytes, to allocate to the driver. | "50g" | An integer greater than or equal to 1. |
| analyticsengine_max_executor_memory | The amount of memory, in gigabytes, to allocate to the executor. | "50g" | An integer greater than or equal to 1. |
| analyticsengine_max_num_workers | The number of workers (also called executors) to allocate to Spark jobs. | 50 | An integer greater than or equal to 1. |
| analyticsengine_local_dir_scale_factor | The number that is used to calculate the temporary disk size on Spark nodes. The formula is: temp_disk_size = number_of_cpu * local_dir_scale_factor | 10 | An integer greater than or equal to 1. |
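  As a quick worked example of the analyticsengine_local_dir_scale_factor formula, the following sketch evaluates it for 5 CPUs and the default scale factor of 10; the variable names are illustrative and the result is the raw value from the formula:

  # Worked example of temp_disk_size = number_of_cpu * local_dir_scale_factor
  # using 5 CPUs and the default scale factor from the table above.
  number_of_cpu=5
  local_dir_scale_factor=10
  echo "temp_disk_size = $(( number_of_cpu * local_dir_scale_factor ))"   # prints 50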
Upgrading the service
The cpd-cli manage apply-olm command updates all of the OLM objects in the operators project at the same time.

To upgrade Analytics Engine powered by Apache Spark:
- Log the cpd-cli in to the Red Hat® OpenShift Container Platform cluster:

  ${CPDM_OC_LOGIN}

  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Update the custom resource for Analytics Engine powered by Apache Spark.

  Run the appropriate command to create the custom resource.
  - Default installation (without installation options)

    cpd-cli manage apply-cr \
    --components=analyticsengine \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --license_acceptance=true \
    --upgrade=true
  - Custom installation (with installation options)

    cpd-cli manage apply-cr \
    --components=analyticsengine \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --param-file=/tmp/work/install-options.yml \
    --license_acceptance=true \
    --upgrade=true
Validating the upgrade
When the upgrade is complete, the apply-cr command returns:

[SUCCESS]... The apply-cr command ran successfully

If you want to confirm that the custom resource status is Completed, you can run the cpd-cli manage get-cr-status command:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=analyticsengine
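If you script the validation, one option is to capture the get-cr-status output and search it for the Completed status. The following sketch does exactly that; the grep pattern is an assumption about the output text rather than a documented contract, so adjust it if your cpd-cli version formats the status differently:

# Sketch: capture the status output and check it for "Completed".
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
--components=analyticsengine | tee /tmp/analyticsengine-cr-status.txt
grep -i "completed" /tmp/analyticsengine-cr-status.txt || echo "Analytics Engine custom resource is not yet Completed"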
Upgrading existing service instances
After you upgrade Analytics Engine powered by Apache Spark, you must upgrade any service instances that are associated with Analytics Engine powered by Apache Spark.
- Before you begin

  Create a profile on the workstation from which you will upgrade the service instances.

  The profile must be associated with an IBM Software Hub user who has either of the following permissions:
  - Create service instances (can_provision)
  - Manage service instances (manage_service_instances)

  For more information, see Creating a profile to use the cpd-cli management commands.
- Procedure

  To upgrade the service instances:

  cpd-cli service-instance upgrade \
  --service-type=spark \
  --profile=${CPD_PROFILE_NAME} \
  --all
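  Before or after the bulk upgrade, you can review the service instances that exist on the cluster. The following sketch assumes that your cpd-cli release provides the service-instance list subcommand; verify with cpd-cli service-instance --help if you are unsure:

  # Sketch: list service instances to confirm the Spark instances and their versions.
  cpd-cli service-instance list \
  --profile=${CPD_PROFILE_NAME}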
What to do next
- If you used self-signed certificates or CA certificates to securely connect between the Spark runtime and your resources, you need to add these certificates to the Spark truststore again after upgrading Analytics Engine powered by Apache Spark. For details, see Using a CA certificate to connect to internal servers from the platform.
- Analytics Engine powered by Apache Spark is ready to use. For details, see Extending analytics using Spark.