Installing and configuring software on the source cluster for backup and restore with IBM Fusion
A cluster administrator or instance administrator can install and configure the software that is needed on the source cluster for backup and restore with IBM Fusion.
Overview
IBM Fusion supports the following scenarios:
- Online backup and restore to the same cluster
- Online backup and restore to a different cluster
To set up backup and restore with IBM Fusion, do the following tasks:
- Set up a client workstation.
- Install IBM Fusion.
- Install the cpdbr service for IBM Fusion.
- If the Container Storage Interface (CSI) driver that you want to use to create volume snapshots does not have a volume snapshot class, create one.
1. Setting up a client workstation
To install IBM® Software Hub backup and restore utilities, you must have a client workstation that can connect to the Red Hat® OpenShift® Container Platform cluster.
- Who needs to complete this task?
- All administrators. Any user who is involved in installing IBM Software Hub must have access to a client workstation.
- When do you need to complete this task?
- Repeat as needed. You must have at least one client workstation. You can repeat the tasks in this section as many times as needed to set up multiple client workstations.
Before you install any software on the client workstation, ensure that the workstation meets the requirements in:
- Internet connection requirements
-
Some installation and upgrade tasks require a connection to the internet. If your cluster is in a restricted network, you can either:
- Move the workstation behind the firewall after you complete the tasks that require an internet connection.
- Prepare a client workstation that can connect to the internet and a client workstation that can connect to the cluster and transfer any files from the internet-connected workstation to the cluster-connected workstation.
When the workstation is connected to the internet, the workstation must be able to access the following sites:

Site name | Host name | Description |
---|---|---|
GitHub | https://www.github.com/IBM | The CASE packages and the IBM Software Hub command-line interface are hosted on GitHub. If your company does not permit access to GitHub, contact IBM Support for help obtaining the IBM Software Hub command-line interface. You can use the --from_oci option to pull CASE packages from the IBM Cloud Pak Open Container Initiative (OCI) registry. |
IBM Entitled Registry | icr.io, cp.icr.io, dd0.icr.io, dd2.icr.io, dd4.icr.io, dd6.icr.io. If you are located in China, you must also allow access to the following hosts: dd1-icr.ibm-zh.com, dd3-icr.ibm-zh.com, dd5-icr.ibm-zh.com, dd7-icr.ibm-zh.com. | The images for the IBM Software Hub software are hosted on the IBM Entitled Registry. You can pull the images directly from the IBM Entitled Registry or you can mirror the images to a private container registry. |
To validate that you can connect to the IBM Entitled Registry, run the following command:
curl -v https://icr.io
The command returns a message similar to the following:
* Connected to icr.io (169.60.98.86) port 443 (#0)
The IP address might be different.
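If you want to spot-check connectivity to the remaining registry hosts as well, a simple loop such as the following sketch can help. It assumes that curl is available on the workstation; extend the host list with the China-specific hosts if they apply to you.
# Sketch: check that the workstation can reach each IBM Entitled Registry host.
for host in icr.io cp.icr.io dd0.icr.io dd2.icr.io dd4.icr.io dd6.icr.io; do
  if curl -sS -o /dev/null --connect-timeout 10 "https://${host}"; then
    echo "${host}: reachable"
  else
    echo "${host}: NOT reachable"
  fi
done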
- Operating system requirements
-
The client workstation must be running a supported operating system:
Operating system | Notes |
---|---|
Linux® | |
Mac OS | |
Windows | To run on Windows, you must install Windows Subsystem for Linux. |
- Container runtime requirements
-
The workstation must have a supported container runtime.

Operating system | Docker | Podman | Notes |
---|---|---|---|
Linux | ✓ | ✓ | |
Mac OS | ✓ | ✓ | |
Windows | ✓ | ✓ | Set up the container runtime inside Windows Subsystem for Linux. |
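To confirm that a supported container runtime is available on the workstation, you can run one of the following commands for the runtime that you installed. This is a quick sanity check, not a required step:
podman version
docker version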
1.1 Installing the IBM Software Hub command-line interface
To install IBM Software Hub software on your Red Hat OpenShift Container Platform cluster, you must install the IBM Software Hub command-line interface (cpd-cli) on the workstation from which you are running the installation commands.
- Who needs to complete this task?
-
User | Why you need the cpd-cli |
---|---|
Cluster administrator | Configure the image pull secret. Change node settings. Set up projects where IBM Software Hub will be installed. |
Instance administrator | Install IBM Software Hub. |
Registry administrator | Mirror images to the private container registry. |
- When do you need to complete this task?
-
Repeat as needed. You must complete this task on any workstation from which you plan to run installation commands.
You can also complete this task if you need to use the cpd-cli to complete other tasks, such as backing up and restoring your installation or managing users.
You must install the cpd-cli on a client workstation that can connect to your cluster.
- Download Version 14.1.3 of the cpd-cli from the IBM/cpd-cli repository on GitHub. Ensure that you download the correct package based on the operating system on the client workstation:

Workstation operating system | Enterprise Edition | Standard Edition |
---|---|---|
Linux | The package that you download depends on your hardware: x86_64: cpd-cli-linux-EE-14.1.3.tgz; ppc64le: cpd-cli-ppc64le-EE-14.1.3.tgz; s390x: cpd-cli-s390x-EE-14.1.3.tgz | The package that you download depends on your hardware: x86_64: cpd-cli-linux-SE-14.1.3.tgz; ppc64le: cpd-cli-ppc64le-SE-14.1.3.tgz; s390x: cpd-cli-s390x-SE-14.1.3.tgz |
Mac OS | cpd-cli-darwin-EE-14.1.3.tgz | cpd-cli-darwin-SE-14.1.3.tgz |
Windows | You must download the Linux package and run it in Windows Subsystem for Linux: cpd-cli-linux-EE-14.1.3.tgz | You must download the Linux package and run it in Windows Subsystem for Linux: cpd-cli-linux-SE-14.1.3.tgz |
- Extract the contents of the package to the directory where you want to run the cpd-cli.
- On Mac OS, you must trust the following components of the cpd-cli:
- cpd-cli
- plugins/lib/darwin/config
- plugins/lib/darwin/cpdbr
- plugins/lib/darwin/cpdbr-oadp
- plugins/lib/darwin/cpdctl
- plugins/lib/darwin/cpdtool
- plugins/lib/darwin/health
- plugins/lib/darwin/manage
- plugins/lib/darwin/platform-diag
- plugins/lib/darwin/platform-mgmt
For each component:
- Right-click the component and select Open. You will see a message with the following format:
macOS cannot verify the developer of component-name. Are you sure you want to open it?
- Click Open.
- Best practice: Make the cpd-cli executable from any directory.
By default, you must either change to the directory where the cpd-cli is located or specify the fully qualified path of the cpd-cli to run the commands.
However, you can make the cpd-cli executable from any directory so that you only need to type cpd-cli command-name to run the commands.

Workstation operating system | Details |
---|---|
Linux | Add the following line to your ~/.bashrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
Mac OS | Add the following line to your ~/.bash_profile or ~/.zshrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
Windows | From the Windows Subsystem for Linux, add the following line to your ~/.bashrc file: export PATH=<fully-qualified-path-to-the-cpd-cli>:$PATH |
- Best practice: Determine whether you need to set any of the following environment variables for the cpd-cli.
CPD_CLI_MANAGE_WORKSPACE
-
By default, the first time you run a cpd-cli manage command, the cpd-cli automatically creates the cpd-cli-workspace/olm-utils-workspace/work directory.
The location of the directory depends on several factors:
- If you made the cpd-cli executable from any directory, the directory is created in the directory where you run the cpd-cli commands.
- If you did not make the cpd-cli executable from any directory, the directory is created in the directory where the cpd-cli is installed.
You can set the CPD_CLI_MANAGE_WORKSPACE environment variable to override the default location.
The CPD_CLI_MANAGE_WORKSPACE environment variable is especially useful if you made the cpd-cli executable from any directory. When you set the environment variable, it ensures that the files are located in one directory.
- Default value
- No default value. The directory is created based on the factors described in the preceding text.
- Valid values
- The fully qualified path where you want the cpd-cli to create the work directory. For example, if you specify /root/cpd-cli/, the cpd-cli manage plug-in stores files in the /root/cpd-cli/work directory.
To set the CPD_CLI_MANAGE_WORKSPACE environment variable, run:
export CPD_CLI_MANAGE_WORKSPACE=<fully-qualified-directory>
OLM_UTILS_LAUNCH_ARGS
-
You can use the OLM_UTILS_LAUNCH_ARGS environment variable to mount certificates that the cpd-cli must use in the cpd-cli container.
- Mount CA certificates
-
Important: If you use a proxy server to mirror images or to download CASE packages, use the OLM_UTILS_LAUNCH_ARGS environment variable to add the CA certificates to enable the olm-utils container to trust connections through the proxy server. For more information, see Cannot access CASE packages when using a proxy server.
You can mount CA certificates if you need to reach an external HTTPS endpoint that uses a self-signed certificate.
Tip: Typically, the CA certificates are in the /etc/pki/ca-trust directory on the workstation. If you need additional information on adding certificates to a workstation, run:
man update-ca-trust
Determine the correct argument for your environment:
- If the certificates on the client workstation are in the /etc/pki/ca-trust directory, the argument is:
" -v /etc/pki/ca-trust:/etc/pki/ca-trust"
- If the certificates on the client workstation are in a different directory, replace <ca-loc> with the appropriate location on the client workstation:
" -v <ca-loc>:/etc/pki/ca-trust"
- Mount Kubernetes certificates
- You can mount Kubernetes certificates if you need to use a certificate to connect to the Kubernetes API server.
The argument depends on the location of the certificates on the client workstation. Replace <k8-loc> with the appropriate location on the client workstation:
" -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"
- Default value
- No default value.
- Valid values
- The valid values depend on the arguments that you need to pass to the OLM_UTILS_LAUNCH_ARGS environment variable.
- To pass CA certificates, specify:
" -v <ca-loc>:/etc/pki/ca-trust"
- To pass Kubernetes certificates, specify:
" -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"
- To pass both CA certificates and Kubernetes certificates, specify:
" -v <ca-loc>:/etc/pki/ca-trust -v <k8-loc>:/etc/k8scert --env K8S_AUTH_SSL_CA_CERT=/etc/k8scert"
To set the OLM_UTILS_LAUNCH_ARGS environment variable, run:
export OLM_UTILS_LAUNCH_ARGS=" <arguments>"
Important: If you set either of these environment variables, ensure that you add them to your installation environment variables script (a sample script is sketched at the end of this section).
- Run the following command to ensure that the cpd-cli is installed and running and that the cpd-cli manage plug-in has the latest version of the olm-utils image:
cpd-cli manage restart-container
The cpd-cli-workspace directory includes the following sub-directories:

Directory | What is stored in the directory? |
---|---|
olm-utils-workspace/work | |
olm-utils-workspace/work/offline | The contents of this directory are organized by release. For example, if you download the CASE packages for Version 5.1.3, the packages are stored in the 5.1.3 directory. In addition, the output of commands such as the … is stored in this directory. |
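The following is a minimal sketch of an installation environment variables script that collects the cpd-cli settings described in this section. The file name cpd_vars.sh and the paths are examples only; replace them with the values for your workstation and omit any variables that you do not need.
# cpd_vars.sh (example name): source this file before you run cpd-cli commands.
export PATH=/root/cpd-cli-workstation:$PATH                              # directory where the cpd-cli is extracted
export CPD_CLI_MANAGE_WORKSPACE=/root/cpd-cli                            # optional: work files are then stored in /root/cpd-cli/work
export OLM_UTILS_LAUNCH_ARGS=" -v /etc/pki/ca-trust:/etc/pki/ca-trust"   # optional: mount CA certificates into the olm-utils container
Source the script in each new shell session before you run cpd-cli commands:
source ./cpd_vars.sh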
1.2 Installing the OpenShift command-line interface
The IBM Software Hub command-line interface (cpd-cli) interacts with the OpenShift command-line interface (oc CLI) to issue commands to your Red Hat OpenShift Container Platform cluster.
- Who needs to complete this task?
- All administrators. Any users who are completing IBM Software Hub installation tasks must install the OpenShift CLI.
- When do you need to complete this task?
- Repeat as needed. You must complete this task on any workstation from which you plan to run installation commands.
You must install a version of the OpenShift CLI that is compatible with your Red Hat OpenShift Container Platform cluster.
To install the OpenShift CLI, follow the appropriate guidance for your cluster:
- Self-managed clusters
-
Install the version of the oc CLI that corresponds to the version of Red Hat OpenShift Container Platform that you are running. For details, see Getting started with the OpenShift CLI in the Red Hat OpenShift Container Platform documentation.
- Managed OpenShift clusters
-
Follow the appropriate guidance for your managed OpenShift environment.
OpenShift environment | Installation instructions |
---|---|
IBM Cloud Satellite | See Installing the CLI in the IBM Cloud Satellite documentation. |
Red Hat OpenShift on IBM Cloud | See Installing the CLI in the IBM Cloud documentation. |
Azure Red Hat OpenShift (ARO) | See Install the OpenShift CLI in the Azure Red Hat OpenShift documentation. |
Red Hat OpenShift Service on AWS (ROSA) | See Installing the OpenShift CLI in the Red Hat OpenShift Service on AWS documentation. |
Red Hat OpenShift Dedicated on Google Cloud | See Learning how to use the command-line tools for Red Hat OpenShift Dedicated in the Red Hat OpenShift Dedicated documentation. |
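A quick way to confirm that the oc CLI that you installed is compatible with your cluster is to compare the client and server versions. This sketch assumes that you are already logged in to the cluster:
oc version
# Compare the Client Version with the Server Version that the command reports.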
2. Installing IBM Fusion on the source cluster
To back up and restore an IBM Software Hub deployment with IBM Fusion, install one of the following versions.
- IBM Fusion Version 2.8.2 with the latest hotfix or later fixes
- IBM Fusion Version 2.9.0 with the latest hotfix or later fixes (Recommended)
For details, see the IBM Fusion documentation.
3. Installing the cpdbr service for IBM Fusion integration on the source cluster
To enable backup and restore with IBM Fusion, install the cpdbr service on the source cluster.
- Who needs to complete this task?
- A cluster administrator must complete this task.
- When do you need to complete this task?
- You must complete this task on the source cluster before you run backup jobs in IBM Fusion. If you are restoring IBM Software Hub to a different cluster, complete this task on the target cluster before you run restore jobs in IBM Fusion.
3.1 Creating environment variables
Create the following environment variables so that you can copy commands from the documentation and run them without making any changes.
Environment variable | Description |
---|---|
OC_LOGIN | Shortcut for the oc login command. |
PROJECT_CPD_INST_OPERATORS | The project (namespace) where the IBM Software Hub instance operators are installed. |
PROJECT_SCHEDULING_SERVICE | The project where the scheduling service is installed. This environment variable is needed only when the scheduling service is installed. |
PRIVATE_REGISTRY_LOCATION | If your cluster is in a restricted network, the private container registry where the ubi-minimal image is mirrored. Tip: The ubi-minimal image is needed to run backup and restore commands. |
OADP_PROJECT | The project where the OADP operator is installed. Note: For backup and restore with IBM Fusion, the project where the OADP operator is installed is ibm-backup-restore. |
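The following sketch shows one way to set these environment variables in a bash session. All of the values except ibm-backup-restore are placeholders; replace them with the values for your environment, and omit PROJECT_SCHEDULING_SERVICE and PRIVATE_REGISTRY_LOCATION if they do not apply.
export OC_LOGIN="oc login <cluster-url> -u <username> -p <password>"   # shortcut for the oc login command; credentials are placeholders
export PROJECT_CPD_INST_OPERATORS=<operators-project>                  # project where the instance operators are installed
export PROJECT_SCHEDULING_SERVICE=<scheduling-project>                 # only if the scheduling service is installed
export PRIVATE_REGISTRY_LOCATION=<registry-host>:<port>                # only if your cluster is in a restricted network
export OADP_PROJECT=ibm-backup-restore                                 # project where the OADP operator is installed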
3.2 Installing the cpdbr service
Install the cpdbr service on the source cluster by doing the following steps.
-
The cpdbr service contains scripts that invoke IBM Software Hub backup and restore hooks. Additionally, backup and restore recipes are installed in the IBM Software Hub operators project. IBM Fusion automatically detects these recipes and updates the corresponding application with all associated tethered projects and required environment variables.
-
It is recommended that you install the latest version of the cpdbr service. If you previously installed the service, upgrade it by completing the upgrade steps later in this section.
- To install the cpdbr service on the source cluster, do the following steps.
- Log in to Red Hat OpenShift Container Platform as a cluster administrator:
${OC_LOGIN}
Remember: OC_LOGIN is an alias for the oc login command.
- Configure the client to set the OADP project:
cpd-cli oadp client config set namespace=${OADP_PROJECT}
- In the IBM Software Hub operators project, install the cpdbr service.
The cluster pulls images from the IBM Entitled Registry
- Environments with the scheduling service
-
cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=icr.io/cpopen/cpd \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --log-level=debug \
  --verbose
- Environments without the scheduling service
-
cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=icr.io/cpopen/cpd \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
The cluster pulls images from a private container registry
Note: Before you can install images from a private container registry, check that an image content source policy was configured. For details, see Configuring an image content source policy for IBM Software Hub software images.
- Environments with the scheduling service
-
cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpd \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpfs \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
- Environments without the scheduling service
-
cpd-cli oadp install \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpdbr-hooks-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpd \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
- Optional: Verify that the cpdbr-tenant service pod is running:
oc get pod -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr
- Verify that the following IBM Fusion recipe was installed:
oc get frcpe -n ${PROJECT_CPD_INST_OPERATORS} ibmcpd-tenant
- If the IBM Software Hub scheduling service is installed, verify that the following recipe was installed:
oc get frcpe -n ${PROJECT_SCHEDULING_SERVICE} ibmcpd-scheduler
- Verify that the appropriate version of the cpdbr service was installed:
oc exec -n ${PROJECT_CPD_INST_OPERATORS} $(oc get -n ${PROJECT_CPD_INST_OPERATORS} po -l component=cpdbr-tenant,icpdsupport/app=br-service -o name) -- /cpdbr-scripts/cpdbr-oadp version
Example output of the command:
Client:
  Version:      5.1.0
  Build Date:   <timestamp>
  Build Number: <xxx>
- To upgrade the cpdbr service on the source cluster, do the following steps.
- Log in to Red Hat OpenShift Container Platform as a cluster administrator:
${OC_LOGIN}
Remember: OC_LOGIN is an alias for the oc login command.
- In the IBM Software Hub operators project, upgrade the cpdbr service.
The cluster pulls images from the IBM Entitled Registry
- Environments with the scheduling service
-
cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
- Environments without the scheduling service
-
cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=icr.io/cpopen/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
The cluster pulls images from a private container registry
Note: Before you can install images from a private container registry, check that an image content source policy was configured. For details, see Configuring an image content source policy for IBM Software Hub software images.
- Environments with the scheduling service
-
cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpfs \
  --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
- Environments without the scheduling service
-
cpd-cli oadp install \
  --upgrade=true \
  --component=cpdbr-tenant \
  --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
  --cpfs-image-prefix=${PRIVATE_REGISTRY_LOCATION}/cpfs \
  --namespace=${OADP_PROJECT} \
  --log-level=debug \
  --verbose
- Optional: Verify that the cpdbr-tenant service pod is running:
oc get pod -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr
- Verify the version of the cpdbr service:
oc exec -n ${PROJECT_CPD_INST_OPERATORS} $(oc get -n ${PROJECT_CPD_INST_OPERATORS} po -l component=cpdbr-tenant,icpdsupport/app=br-service -o name) -- /cpdbr-scripts/cpdbr-oadp version
Example output of the command:
Client:
  Version:      5.1.0
  Build Date:   <timestamp>
  Build Number: <xxx>
When the cpdbr service is installed on the source cluster, the cpdbr component services are deployed and the required permissions and RBAC resources (ClusterRole, ClusterRoleBinding, Role, RoleBinding) are created.
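As an optional spot check, you can list RBAC resources whose names contain cpdbr. The grep pattern is an assumption; the exact resource names can vary between releases:
oc get clusterrole,clusterrolebinding | grep -i cpdbr
oc get role,rolebinding -n ${PROJECT_CPD_INST_OPERATORS} | grep -i cpdbr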
4. Creating volume snapshot classes on the source cluster
By default, a VolumeSnapshotClass has its deletionPolicy set to Delete. It is recommended that you create new VolumeSnapshotClasses with a Retain deletion policy so that the underlying snapshot and VolumeSnapshotContent object remain intact, as protection against accidental or unintended deletion. For more information, see Deleting a volume snapshot in the Red Hat OpenShift documentation. After you create the classes, you can verify them as shown at the end of this section.
- Log in to Red Hat OpenShift Container Platform as a cluster administrator:
${OC_LOGIN}
Remember: OC_LOGIN is an alias for the oc login command.
- If you are backing up IBM Software Hub on Red Hat OpenShift Data Foundation storage, create the following volume snapshot classes:
cat << EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain
driver: openshift-storage.rbd.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
  name: ocs-storagecluster-rbdplugin-snapclass-velero
  labels:
    velero.io/csi-volumesnapshot-class: "true"
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
EOF

cat << EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain
driver: openshift-storage.cephfs.csi.ceph.com
kind: VolumeSnapshotClass
metadata:
  name: ocs-storagecluster-cephfsplugin-snapclass-velero
  labels:
    velero.io/csi-volumesnapshot-class: "true"
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
EOF
- If you are backing up IBM Software Hub on IBM Storage Scale storage, create the following volume snapshot class:
cat << EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain
driver: spectrumscale.csi.ibm.com
kind: VolumeSnapshotClass
metadata:
  name: ibm-spectrum-scale-snapshot-class
  labels:
    velero.io/csi-volumesnapshot-class: "true"
EOF
- If you are backing up IBM Software Hub on Portworx storage, create the following volume snapshot class:
cat << EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
deletionPolicy: Retain
driver: pxd.portworx.com
kind: VolumeSnapshotClass
metadata:
  name: px-csi-snapclass-velero
  labels:
    velero.io/csi-volumesnapshot-class: "true"
EOF
- If you are backing up IBM Software Hub on NetApp Trident storage, create the following volume snapshot class:
cat << EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: csi.trident.netapp.io
deletionPolicy: Retain
EOF
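After you create the volume snapshot classes, you can verify that they exist, use a Retain deletion policy, and carry the Velero label. The following commands are a sketch; the label selector matches the label applied in the examples above:
oc get volumesnapshotclass -l velero.io/csi-volumesnapshot-class=true
oc get volumesnapshotclass -o custom-columns=NAME:.metadata.name,DRIVER:.driver,DELETIONPOLICY:.deletionPolicy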