Installing and configuring software on the source cluster for backup and restore with IBM Fusion
A cluster administrator can install and configure the software that is needed on the source cluster for backup and restore with IBM Fusion.
Overview
IBM Fusion supports the following scenarios:
- Online backup and restore to the same cluster
- Online backup and restore to a different cluster
To set up backup and restore with IBM Fusion, do the following tasks:
- Set up a client workstation.
- Install IBM Fusion.
- Install the cpdbr service for IBM Fusion.
- If the Container Storage Interface (CSI) driver that you want to use to create volume snapshots does not have a volume snapshot class, create one.
1. Setting up a client workstation
To install IBM® Software Hub backup and restore utilities, you must have a client workstation that can connect to the Red Hat® OpenShift® Container Platform cluster.
- Who needs to complete this task?
- All administrators. Any user who is involved in installing IBM Software Hub must have access to a client workstation.
- When do you need to complete this task?
- Repeat as needed. You must have at least one client workstation. You can repeat the tasks in this section as many times as needed to set up multiple client workstations.
Before you install any software on the client workstation, ensure that the workstation meets the following requirements:

- Internet connection requirements
- Some installation and upgrade tasks require a connection to the internet. If your cluster is in a restricted network, you can either:
  - Move the workstation behind the firewall after you complete the tasks that require an internet connection.
  - Prepare a client workstation that can connect to the internet and a client workstation that can connect to the cluster, and transfer any files from the internet-connected workstation to the cluster-connected workstation.

When the workstation is connected to the internet, the workstation must be able to access the following sites:

| Site name | Host name | Description |
|---|---|---|
| GitHub | https://www.github.com/IBM | The CASE packages and the IBM Software Hub command-line interface are hosted on GitHub. If your company does not permit access to GitHub, contact IBM Support for help obtaining the IBM Software Hub command-line interface. You can use the --from_oci option to pull CASE packages from the IBM Cloud Pak Open Container Initiative (OCI) registry. |
| IBM Entitled Registry | icr.io, cp.icr.io, dd0.icr.io, dd2.icr.io, dd4.icr.io, dd6.icr.io. If you're located in China, you must also allow access to the following hosts: dd1-icr.ibm-zh.com, dd3-icr.ibm-zh.com, dd5-icr.ibm-zh.com, dd7-icr.ibm-zh.com | The images for the IBM Software Hub software are hosted on the IBM Entitled Registry. You can pull the images directly from the IBM Entitled Registry or you can mirror the images to a private container registry. To validate that you can connect to the IBM Entitled Registry, run: curl -v https://icr.io. The command returns a message similar to: * Connected to icr.io (169.60.98.86) port 443 (#0). The IP address might be different. |
| Red Hat container image registry | registry.redhat.io | The images for Red Hat software are hosted on the Red Hat container image registry. You can pull the images directly from the Red Hat container image registry or you can mirror the images to a private container registry. To validate that you can connect to the Red Hat container image registry, run: curl -v https://registry.redhat.io. The command returns a message similar to: * Connected to registry.redhat.io (54.88.115.139) port 443. The IP address might be different. |
- Operating system requirements
- The client workstation must be running a supported operating system:

| Operating system | x86-64 | ppc64le | s390x | Notes |
|---|---|---|---|---|
| Linux® | ✓ | ✓ | ✓ | Red Hat Enterprise Linux 8 or later is required. |
| Mac OS | ✓ | | | Mac workstations with M3 and M4 chips are not supported. These chips support only ARM64 architecture. |
| Windows | ✓ | | | To run on Windows, you must install Windows Subsystem for Linux. |

- Container runtime requirements
- The workstation must have a supported container runtime:

| Operating system | Docker | Podman | Notes |
|---|---|---|---|
| Linux | ✓ | ✓ | |
| Mac OS | ✓ | ✓ | |
| Windows | ✓ | ✓ | Set up the container runtime inside Windows Subsystem for Linux. |
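To confirm that a supported container runtime is installed, you can check its version. Only one of the two runtimes is needed. For example:

podman --version
docker --version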
1.1 Installing the IBM Software Hub command-line interface
To install IBM Software Hub software on your Red Hat OpenShift Container Platform cluster, you must install the IBM Software Hub command-line interface (cpd-cli) on the workstation from which you are running the installation commands.
- Who needs to complete this task?
-

| User | Why you need the cpd-cli |
|---|---|
| Cluster administrator | Configure the image pull secret. Change node settings. Set up projects where IBM Software Hub will be installed. |
| Instance administrator | Install IBM Software Hub. |
| Registry administrator | Mirror images to the private container registry. |

- When do you need to complete this task?
- Repeat as needed. You must complete this task on any workstation from which you plan to run installation commands. You can also complete this task if you need to use the cpd-cli to complete other tasks, such as backing up and restoring your installation or managing users.

You must install the cpd-cli on a client workstation that can connect to your cluster.
- Download the cpd-cli from the IBM/cpd-cli repository on GitHub. Ensure that you download the correct package based on the operating system on the client workstation:

| Workstation operating system | Enterprise Edition | Standard Edition |
|---|---|---|
| Linux | The package that you download depends on your hardware. x86_64: cpd-cli-linux-EE-14.3.0.tgz; ppc64le: cpd-cli-ppc64le-EE-14.3.0.tgz; s390x: cpd-cli-s390x-EE-14.3.0.tgz | The package that you download depends on your hardware. x86_64: cpd-cli-linux-SE-14.3.0.tgz; ppc64le: cpd-cli-ppc64le-SE-14.3.0.tgz; s390x: cpd-cli-s390x-SE-14.3.0.tgz |
| Mac OS | cpd-cli-darwin-EE-14.3.0.tgz | cpd-cli-darwin-SE-14.3.0.tgz |
| Windows | You must download the Linux package and run it in Windows Subsystem for Linux: cpd-cli-linux-EE-14.3.0.tgz | You must download the Linux package and run it in Windows Subsystem for Linux: cpd-cli-linux-SE-14.3.0.tgz |

- Extract the contents of the package to the directory where you want to run the cpd-cli.
- On Mac OS, you must trust the following components of the cpd-cli:
- cpd-cli
cpd-cli:- cpd-cli
- plugins/lib/darwin/config
- plugins/lib/darwin/cpdbr
- plugins/lib/darwin/cpdbr-oadp
- plugins/lib/darwin/cpdctl
- plugins/lib/darwin/cpdtool
- plugins/lib/darwin/health
- plugins/lib/darwin/manage
- plugins/lib/darwin/platform-diag
- plugins/lib/darwin/platform-mgmt
For each component:
- Right-click the component and select Open. You will see a message with the following format: macOS cannot verify the developer of component-name. Are you sure you want to open it?
- Click Open.
- Best practice: Make the cpd-cli executable from any directory. By default, you must either change to the directory where the cpd-cli is located or specify the fully qualified path of the cpd-cli to run the commands. However, you can make the cpd-cli executable from any directory so that you only need to type cpd-cli command-name to run the commands.

| Workstation operating system | Details |
|---|---|
| Linux | Add the following line to your ~/.bashrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
| Mac OS | Add the following line to your ~/.bash_profile or ~/.zshrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
| Windows | From the Windows Subsystem for Linux, add the following line to your ~/.bashrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
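After you update your PATH, you can confirm that the cpd-cli is executable from any directory. For example, from a directory other than the installation directory:

cd /tmp && cpd-cli version

If the command prints version information, the cpd-cli is resolvable from any directory.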
- Best practice: Determine whether you need to set any of the following environment variables for the cpd-cli.

CPD_CLI_MANAGE_WORKSPACE
- By default, the first time you run a cpd-cli manage command, the cpd-cli automatically creates the cpd-cli-workspace/olm-utils-workspace/work directory. The location of the directory depends on several factors:
  - If you made the cpd-cli executable from any directory, the directory is created in the directory where you run the cpd-cli commands.
  - If you did not make the cpd-cli executable from any directory, the directory is created in the directory where the cpd-cli is installed.

  You can set the CPD_CLI_MANAGE_WORKSPACE environment variable to override the default location. The CPD_CLI_MANAGE_WORKSPACE environment variable is especially useful if you made the cpd-cli executable from any directory. When you set the environment variable, it ensures that the files are located in one directory.
  - Default value
  - No default value. The directory is created based on the factors described in the preceding text.
  - Valid values
  - The fully qualified path where you want the cpd-cli to create the work directory. For example, if you specify /root/cpd-cli/, the cpd-cli manage plug-in stores files in the /root/cpd-cli/work directory.

  To set the CPD_CLI_MANAGE_WORKSPACE environment variable, run:
  export CPD_CLI_MANAGE_WORKSPACE=<fully-qualified-directory>
OLM_UTILS_LAUNCH_ARGS
- You can use the OLM_UTILS_LAUNCH_ARGS environment variable to mount certificates that the cpd-cli must use in the cpd-cli container.
  - Mount CA certificates
  - Important: If you use a proxy server to mirror images or to download CASE packages, use the OLM_UTILS_LAUNCH_ARGS environment variable to add the CA certificates to enable the olm-utils container to trust connections through the proxy server. For more information, see Cannot access CASE packages when using a proxy server.

    You can mount CA certificates if you need to reach an external HTTPS endpoint that uses a self-signed certificate.
    Tip: Typically, the CA certificates are in the /etc/pki/ca-trust directory on the workstation. If you need additional information on adding certificates to a workstation, run: man update-ca-trust

    Determine the correct argument for your environment:
    - If the certificates on the client workstation are in the /etc/pki/ca-trust directory, the argument is:
      " -v /etc/pki/ca-trust:/etc/pki/ca-trust"
    - If the certificates on the client workstation are in a different directory, replace <ca-loc> with the appropriate location on the client workstation:
      " -v <ca-loc>:/etc/pki/ca-trust"
  - Mount Kubernetes certificates
  - You can mount Kubernetes certificates if you need to use a certificate to connect to the Kubernetes API server. The argument depends on the location of the certificates on the client workstation. Replace <k8-loc> with the appropriate location on the client workstation:
    " -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"
  - Default value
  - No default value.
  - Valid values
  - The valid values depend on the arguments that you need to pass to the OLM_UTILS_LAUNCH_ARGS environment variable.
    - To pass CA certificates, specify: " -v <ca-loc>:/etc/pki/ca-trust"
    - To pass Kubernetes certificates, specify: " -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"
    - To pass both CA certificates and Kubernetes certificates, specify: " -v <ca-loc>:/etc/pki/ca-trust -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"

  To set the OLM_UTILS_LAUNCH_ARGS environment variable, run:
  export OLM_UTILS_LAUNCH_ARGS=" <arguments>"
Important: If you set either of these environment variables, ensure that you add them to your installation environment variables script.
- Run the following command to ensure that the cpd-cli is installed and running and that the cpd-cli manage plug-in has the correct version of the olm-utils-v4 image:
cpd-cli manage restart-container

The cpd-cli-workspace directory includes the following subdirectories:

| Directory | What is stored in the directory? |
|---|---|
| olm-utils-workspace/work | The files that the cpd-cli manage plug-in stores when you run cpd-cli manage commands. |
| olm-utils-workspace/work/offline | The contents of this directory are organized by release. For example, if you download the CASE packages for Version 5.3.0, the packages are stored in the 5.3.0 directory. In addition, the output of some cpd-cli manage commands is stored in this directory. |
1.2 Installing the OpenShift command-line interface
The IBM Software Hub command-line interface (cpd-cli) interacts with the OpenShift command-line interface (oc CLI) to issue commands to your Red Hat OpenShift Container Platform cluster.
- Who needs to complete this task?
- All administrators. Any users who are completing IBM Software Hub installation tasks must install the OpenShift CLI.
- When do you need to complete this task?
- Repeat as needed. You must complete this task on any workstation from which you plan to run installation commands.
You must install a version of the OpenShift CLI that is compatible with your Red Hat OpenShift Container Platform cluster.
To install the OpenShift CLI, follow the appropriate guidance for your cluster:
- Self-managed clusters
- Install the version of the oc CLI that corresponds to the version of Red Hat OpenShift Container Platform that you are running. For details, see Getting started with the OpenShift CLI in the Red Hat OpenShift Container Platform documentation.
- Managed OpenShift clusters
-
Follow the appropriate guidance for your managed OpenShift environment.
| OpenShift environment | Installation instructions |
|---|---|
| IBM Cloud Satellite | See Installing the CLI in the IBM Cloud Satellite documentation. |
| Red Hat OpenShift on IBM Cloud | See Installing the CLI in the IBM Cloud documentation. |
| Azure Red Hat OpenShift (ARO) | See Install the OpenShift CLI in the Azure Red Hat OpenShift documentation. |
| Red Hat OpenShift Service on AWS (ROSA) | See Installing the OpenShift CLI in the Red Hat OpenShift Service on AWS documentation. |
| Red Hat OpenShift Dedicated on Google Cloud | See Learning how to use the command-line tools for Red Hat OpenShift Dedicated in the Red Hat OpenShift Dedicated documentation. |
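After you install the oc CLI and log in to the cluster, you can confirm that the client and server versions are compatible. For example:

oc version

The output shows both the client version and the server version; as a general guideline, keep the client within one minor version of the server.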
2. Installing IBM Fusion on the source cluster
To back up and restore an IBM Software Hub deployment with IBM Fusion, install the IBM Fusion backup and restore service. Install a supported version of IBM Fusion:
- Version 2.11.x with the latest hotfix
- Version 2.12.x
For details, see the IBM Fusion documentation.
3. Installing the cpdbr service for IBM Fusion integration on the source cluster
To enable backup and restore with IBM Fusion, install the cpdbr service on the source cluster.
- Who needs to complete this task?
- A cluster administrator must complete this task.
- When do you need to complete this task?
- You must complete this task on the source cluster before you run backup jobs in IBM Fusion. If you are restoring IBM Software Hub to a different cluster, complete this task on the target cluster before you run restore jobs in IBM Fusion.
3.1 Creating environment variables
Create the following environment variables so that you can copy commands from the documentation and run them without making any changes.
| Environment variable | Description |
|---|---|
| OC_LOGIN | Shortcut for the oc login command. |
| PROJECT_CPD_INST_OPERATORS | The project (namespace) where the IBM Software Hub instance operators are installed. |
| PROJECT_SCHEDULING_SERVICE | The project where the scheduling service is installed. This environment variable is needed only when the scheduling service is installed. |
| PRIVATE_REGISTRY_LOCATION | If your cluster is in a restricted network, the private container registry where the ubi-minimal image is mirrored. Tip: The ubi-minimal image is needed to run backup and restore commands. |
| OADP_PROJECT | The project where the OADP operator is installed. Note: For backup and restore with IBM Fusion, the project where the OADP operator is installed is ibm-backup-restore. |
| CPFS_OADP_PLUGIN_VERSION | The image version for the IBM Cloud Pak foundational services OADP plug-in. |
| VERSION | The version of IBM Software Hub that is running on your cluster. Use the following value: 5.3.0 |
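For example, a minimal environment variables script might look like the following sketch. The project names, registry location, and login details are placeholders; substitute the values for your environment:

# Placeholder values; replace with your cluster details.
export OC_LOGIN="oc login <cluster-url> -u <username> -p <password>"
export PROJECT_CPD_INST_OPERATORS=cpd-operators              # placeholder project name
export PROJECT_SCHEDULING_SERVICE=cpd-scheduler              # only if the scheduling service is installed
export PRIVATE_REGISTRY_LOCATION=registry.example.com:5000   # only for restricted networks
export PRIVATE_REGISTRY_PUSH_USER=<registry-username>        # only if you mirror images
export PRIVATE_REGISTRY_PUSH_PASSWORD=<registry-password>    # only if you mirror images
export OADP_PROJECT=ibm-backup-restore
export CPFS_OADP_PLUGIN_VERSION=<plugin-version>
export VERSION=5.3.0

Source the script (for example, source ./cpd_vars.sh, where the file name is your choice) before you run the commands in the sections that follow.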
3.2 Mirroring images to the private container registry
The workstation can connect to the internet and to the private container registry
Ensure that you source the environment variables before you run the commands in this task.
- Ensure that Docker or Podman is running on the workstation.
- Log in to the private container registry:
  - Podman
  -
    podman login ${PRIVATE_REGISTRY_LOCATION} \
    -u ${PRIVATE_REGISTRY_PUSH_USER} \
    -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
  - Docker
  -
    docker login ${PRIVATE_REGISTRY_LOCATION} \
    -u ${PRIVATE_REGISTRY_PUSH_USER} \
    -p ${PRIVATE_REGISTRY_PUSH_PASSWORD}
- Run the following command to mirror the images to the private container registry:
  db2u-velero-plugin
  cpd-cli manage copy-image \
  --from=icr.io/db2u/db2u-velero-plugin:${VERSION} \
  --to=${PRIVATE_REGISTRY_LOCATION}/db2u/db2u-velero-plugin:${VERSION}
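Optionally, verify that the image is now available in the private container registry. A minimal sketch, assuming skopeo is installed on the workstation (any tool that can query the registry works):

skopeo inspect docker://${PRIVATE_REGISTRY_LOCATION}/db2u/db2u-velero-plugin:${VERSION}

If the registry requires authentication, add --creds ${PRIVATE_REGISTRY_PUSH_USER}:${PRIVATE_REGISTRY_PUSH_PASSWORD} to the command.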
The workstation cannot connect to the internet and to the private container registry at the same time
Ensure that you source the environment variables before you run the commands in this task.
- From a workstation that can connect to the internet:
  - Ensure that Docker or Podman is running on the workstation.
  - Ensure that the olm-utils-v4 image is available on the client workstation:
    cpd-cli manage restart-container
  - Run the appropriate command to save the images to the client workstation. The command that you run depends on the version of IBM Software Hub that is installed.
    db2u-velero-plugin
    cpd-cli manage save-image \
    --from=icr.io/db2u/db2u-velero-plugin:${VERSION}
- Transfer the compressed TAR files to a client workstation that can connect to the cluster. Ensure that you place the TAR files in the work/offline directory:
  db2u-velero-plugin: icr.io_db2u_db2u-velero-plugin_${VERSION}.tar.gz
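For example, a minimal sketch of the transfer, assuming SSH access between the workstations and that the cpd-cli workspace is in the home directory on both (adjust the user, host, and paths for your environment):

scp cpd-cli-workspace/olm-utils-workspace/work/offline/icr.io_db2u_db2u-velero-plugin_*.tar.gz \
<user>@<cluster-workstation>:~/cpd-cli-workspace/olm-utils-workspace/work/offline/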
- From the workstation that can connect to the cluster:
  - Ensure that Docker or Podman is running on the workstation.
  - Log in to the private container registry. The following command assumes that you are using a private container registry that is secured with credentials:
    cpd-cli manage login-private-registry \
    ${PRIVATE_REGISTRY_LOCATION} \
    ${PRIVATE_REGISTRY_PUSH_USER} \
    ${PRIVATE_REGISTRY_PUSH_PASSWORD}
    If your private registry is not secured, omit the username and password.
  - Run the following command to copy the images to the private container registry:
    db2u-velero-plugin
    cpd-cli manage copy-image \
    --from=icr.io/db2u/db2u-velero-plugin:${VERSION} \
    --to=${PRIVATE_REGISTRY_LOCATION}/db2u/db2u-velero-plugin:${VERSION}
3.3 Configuring the Data Protection Application
Complete the following steps to configure the Data Protection Application (DPA) for IBM Software Hub:
- Run the following command to increase the ephemeral storage and memory limits for the Velero pods:

oc patch dpa \
-n ibm-backup-restore velero \
--type merge \
-p '{"spec": {"configuration": {"velero": {"podConfig": {"resourceAllocations": {"limits": {"ephemeral-storage": "2Gi", "memory": "4Gi"}}}}}}}'

- Configure the plug-ins that are required by IBM Software Hub:

DPA="velero"
NAMESPACE="ibm-backup-restore"

add_plugin_if_missing() {
  local PLUGIN_NAME="$1"
  local PLUGIN_IMAGE="$2"
  local EXPECTED_IMAGE_VERSION="${PLUGIN_IMAGE##*:}"
  echo -e "\nChecking plugin: $PLUGIN_NAME"
  EXISTING_IMAGE=$(oc get dpa "$DPA" -n "$NAMESPACE" -o json | jq -r \
    --arg name "$PLUGIN_NAME" '.spec.configuration.velero.customPlugins[]? | select(.name == $name) | .image')
  if [ -z "$EXPECTED_IMAGE_VERSION" ]; then
    echo "version tag must be defined, current arg=$PLUGIN_IMAGE"
    return 1
  fi
  EXISTING_IMAGE_INDEX=$(oc get dpa "$DPA" -n "$NAMESPACE" -o json | jq -r \
    --arg name "$PLUGIN_NAME" '[.spec.configuration.velero.customPlugins[]? | .name] | index($name)')
  INSTALLED_IMAGE_VERSION="${EXISTING_IMAGE##*:}" # Gets everything after the last ":"
  if [ -n "$EXISTING_IMAGE" ]; then
    if [ "$INSTALLED_IMAGE_VERSION" = "$EXPECTED_IMAGE_VERSION" ]; then
      echo "Plugin '$PLUGIN_NAME' already exists with image: $EXISTING_IMAGE and in correct version, aborting DPA update"
    else
      echo "Plugin '$PLUGIN_NAME' needs to be patched. Patching DPA customPlugins with image $PLUGIN_IMAGE ..."
      oc patch dpa "$DPA" -n "$NAMESPACE" --type='json' \
        -p='[{"op":"replace","path":"/spec/configuration/velero/customPlugins/'$EXISTING_IMAGE_INDEX'","value":{"name":"'"$PLUGIN_NAME"'","image":"'"$PLUGIN_IMAGE"'"}}]'
      if [ $? -ne 0 ]; then
        echo "ERROR: oc patch command failed"
        return 1
      fi
    fi
  else
    echo "Plugin '$PLUGIN_NAME' not found. Adding DPA customPlugins with image $PLUGIN_IMAGE ..."
    oc patch dpa "$DPA" -n "$NAMESPACE" --type='json' \
      -p='[{"op":"add","path":"/spec/configuration/velero/customPlugins/-","value":{"name":"'"$PLUGIN_NAME"'","image":"'"$PLUGIN_IMAGE"'"}}]'
    if [ $? -ne 0 ]; then
      echo "ERROR: oc patch command failed"
      return 1
    fi
  fi
}

add_plugin_if_missing "cpfs-oadp-plugin" "icr.io/cpopen/cpfs/cpfs-oadp-plugins:${CPFS_OADP_PLUGIN_VERSION}"
add_plugin_if_missing "db2u-velero-plugin" "icr.io/db2u/db2u-velero-plugin:${VERSION}"
add_plugin_if_missing "swhub-velero-plugin" "icr.io/cpopen/cpd/swhub-velero-plugin:${VERSION}"
3.4 Optional: Enabling parallel hook processing
If you install IBM Fusion version 2.11 or later, you can configure IBM Software Hub to run multiple service hooks simultaneously instead of sequentially.
You can enable parallel processing of backup hooks by setting the hook-parallel-workers parameter in the IBM Fusion guardian-configmap ConfigMap. By default, this value is treated as 0, which means that hooks are executed sequentially. The maximum number of hook-parallel-workers that you can set is 35.
For more information about parallel hook processing, see Parallel hook execution in the IBM Fusion documentation.
- To set the number of hook-parallel-workers, use the following command:
  oc patch cm guardian-configmap -n ibm-backup-restore --patch '{"data":{"hook-parallel-workers": "3"}}'
- To clear the hook-parallel-workers value, use the following command:
  oc patch cm guardian-configmap -n ibm-backup-restore --type='json' -p='[{"op": "remove", "path": "/data/hook-parallel-workers"}]'
- To check the hook-parallel-workers value, use the following command:
  oc get cm guardian-configmap -n ibm-backup-restore -o yaml | yq '.data.hook-parallel-workers'
3.5 Installing the cpdbr service
Install the cpdbr service on the source cluster by doing the following steps.
- The cpdbr service contains scripts that invoke IBM Software Hub backup and restore hooks. Additionally, backup and restore recipes are installed in the IBM Software Hub operators project. IBM Fusion automatically detects these recipes and updates the corresponding application with all associated tethered projects and required environment variables.
- Install the latest version of the cpdbr service. If you previously installed the service, upgrade it by following the upgrade steps.
- To install the cpdbr service on the source cluster, do the following steps.
  - Log in to Red Hat OpenShift Container Platform as a cluster administrator:
    ${OC_LOGIN}
    Remember: OC_LOGIN is an alias for the oc login command.
  - Configure the client to set the OADP project:
    cpd-cli oadp client config set namespace=${OADP_PROJECT}
  - In the IBM Software Hub operators project, install the cpdbr service.
The cluster pulls images from the IBM Entitled Registry
- Environments with the scheduling service
-
  cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=icr.io/cpopen/cpd \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Environments without the scheduling service
-
  cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=icr.io/cpopen/cpd \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
The cluster pulls images from a private container registry
Note: Before you can install images from a private container registry, check that an image content source policy was configured. For details, see Configuring an image content source policy for IBM Software Hub images.
- Environments with the scheduling service
-
  cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpd \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Environments without the scheduling service
-
  cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpd \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Optional: Verify that the cpdbr-tenant service pod is running:
  oc get pod -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr
- Configure the ibmcpd-tenant parent recipe:
  cpd-cli oadp generate plan fusion parent-recipe \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --verbose \
  --log-level=debug
- Verify that the ibmcpd-tenant IBM Fusion recipe version is the same as the IBM Software Hub version:
  oc get frcpe \
  -n ${PROJECT_CPD_INST_OPERATORS} \
  -l icpdsupport/generated-by-cpdbr=true,icpdsupport/version=${VERSION}
- If the IBM Software Hub scheduling service is installed, verify that the following recipe was installed:
  oc get frcpe ibmcpd-scheduler \
  -n ${PROJECT_SCHEDULING_SERVICE}
oc get frcpe ibmcpd-scheduler \ -n ${PROJECT_SCHEDULING_SERVICE} - Verify that the appropriate version of the
cpdbr service was
installed:
oc exec -n ${PROJECT_CPD_INST_OPERATORS} $(oc get -n ${PROJECT_CPD_INST_OPERATORS} po -l component=cpdbr-tenant,icpdsupport/app=br-service -o name) -- /cpdbr-scripts/cpdbr-oadp versionExample output of the command:Client: Version: 5.3.0 Build Date: <timestamp> Build Number: <xxx>
- To upgrade the cpdbr service on the source cluster, do the following steps.
  - Log in to Red Hat OpenShift Container Platform as a cluster administrator:
    ${OC_LOGIN}
    Remember: OC_LOGIN is an alias for the oc login command.
  - In the IBM Software Hub operators project, upgrade the cpdbr service.
The cluster pulls images from the IBM Entitled Registry
- Environments with the scheduling service
-
  cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Environments without the scheduling service
-
  cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
The cluster pulls images from a private container registry
Note: Before you can install images from a private container registry, check that an image content source policy was configured. For details, see Configuring an image content source policy for IBM Software Hub images.
- Environments with the scheduling service
-
  cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --namespace=${OADP_PROJECT} \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpfs \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Environments without the scheduling service
-
  cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --namespace=${OADP_PROJECT} \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpopen/cpfs \
  --log-level=debug \
  --verbose
  Tip: If the cloud object store for the IBM Fusion backup service uses a CA certificate, you can use the --backup-validation-cacert option to provide the base64 encoded certificate.
- Optional: Verify that the cpdbr-tenant service pod is running:
  oc get pod -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr
- Verify the version of the cpdbr service:
  oc exec -n ${PROJECT_CPD_INST_OPERATORS} $(oc get -n ${PROJECT_CPD_INST_OPERATORS} po -l component=cpdbr-tenant,icpdsupport/app=br-service -o name) -- /cpdbr-scripts/cpdbr-oadp version
  The command returns output with the following format:
  Client:
  Version: 5.3.0
  Build Date: <timestamp>
  Build Number: <xxx>
- Configure the ibmcpd-tenant parent recipe:
  cpd-cli oadp generate plan fusion parent-recipe \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --verbose \
  --log-level=debug
- Verify that the ibmcpd-tenant IBM Fusion recipe version is the same as the IBM Software Hub version:
  oc get frcpe \
  -n ${PROJECT_CPD_INST_OPERATORS} \
  -l icpdsupport/generated-by-cpdbr=true,icpdsupport/version=${VERSION}
When the cpdbr service is installed on the source cluster, the cpdbr component services are deployed and the required permissions and role-based access control objects (ClusterRole, ClusterRoleBinding, Role, RoleBinding) are created.
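To review what was created, you can list the RBAC objects that reference cpdbr. A minimal sketch, assuming that the object names contain the string cpdbr:

oc get clusterrole,clusterrolebinding | grep -i cpdbr
oc get role,rolebinding -n ${PROJECT_CPD_INST_OPERATORS} | grep -i cpdbr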
4. Creating volume snapshot classes on the source cluster
By default, the VolumeSnapshotClass has the deletionPolicy set to Delete. This policy is recommended for IBM Fusion because the VolumeSnapshot and VolumeSnapshotContent objects are not necessary for the protection of backups. These objects might also cause unnecessary storage usage on the cluster. For more information, see Deleting a volume snapshot in the Red Hat OpenShift documentation.
For more information about IBM Fusion requirements, see System requirements in the IBM Fusion documentation.
- Log in to Red Hat OpenShift Container Platform as a cluster administrator:
  ${OC_LOGIN}
  Remember: OC_LOGIN is an alias for the oc login command.
- If you are backing up IBM Software Hub on Red Hat OpenShift Data Foundation storage, create the following volume snapshot classes:

  cat << EOF | oc apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: openshift-storage.rbd.csi.ceph.com
  kind: VolumeSnapshotClass
  metadata:
    name: ocs-storagecluster-rbdplugin-snapclass-velero
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
  EOF

  cat << EOF | oc apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: openshift-storage.cephfs.csi.ceph.com
  kind: VolumeSnapshotClass
  metadata:
    name: ocs-storagecluster-cephfsplugin-snapclass-velero
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
  EOF

- If you are backing up IBM Software Hub on IBM Storage Scale storage, create the following volume snapshot class:

  cat << EOF | oc apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: spectrumscale.csi.ibm.com
  kind: VolumeSnapshotClass
  metadata:
    name: ibm-spectrum-scale-snapshot-class
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  EOF

- If you are backing up IBM Software Hub on Portworx storage, create the following volume snapshot class:

  cat << EOF | oc apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: pxd.portworx.com
  kind: VolumeSnapshotClass
  metadata:
    name: px-csi-snapclass-velero
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  EOF

- If you are backing up IBM Software Hub on NetApp Trident storage, create the following volume snapshot class:

  cat << EOF | oc apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: csi-snapclass
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  driver: csi.trident.netapp.io
  deletionPolicy: Delete
  EOF
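To confirm that the volume snapshot classes were created and are labeled for Velero, run:

oc get volumesnapshotclass -l velero.io/csi-volumesnapshot-class=true

The output lists each class that you created for your storage provider.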