Backing up and restoring your IBM App Connect resources and persistent volumes on Red Hat OpenShift
You can back up and restore the IBM® App Connect resources and persistent volumes (PVs) in your cluster by using OpenShift API for Data Protection (OADP), which safeguards customer applications on Red Hat OpenShift and facilitates disaster recovery.
If IAM is enabled in your deployment, see Backing up and restoring IBM Cloud Pak® for Integration for information about how to back up and restore your system.
OADP is based on the open source Velero tool that backs up, restores, and migrates Kubernetes clusters and persistent volumes (PVs). OADP provides a set of APIs to back up and restore Kubernetes resources, internal images, and PVs. Kubernetes resources and internal images are backed up to object storage, and persistent volumes (PVs) are backed up by creating snapshots or by using Restic, which is an integrated file-level backup tool.
OADP provides default Velero plugins to integrate with cloud storage providers that support back up and restore. App Connect also provides a custom Velero plugin to aid with the restoration of pods. This custom plugin removes network-specific pod annotations (such as IP addresses) that are included in the backup data for pods, to ensure that the restored pods can be reached. The custom plugin is built into a container image that is stored in the IBM Cloud Container Registry. You specify which plugins to use for your backup and restore operations when you configure a backup.
For more information, see OADP features and plugins.
OADP is available as an Operator in the Red Hat OpenShift OperatorHub.
Setting up your environment for backup and restore
To support backup and restore operations, you need to first prepare your environment by completing the following tasks:
Before you begin
- Ensure that you have cluster administrator authority with cluster-admin permissions.
- Set up secure object storage for your backups; for example, Amazon Web Services (AWS) S3, Microsoft Azure, or S3-compatible object storage (such as Multicloud Object Gateway or MinIO). For a list of the supported and unsupported object storage providers, and limitations, see About installing OADP in the Red Hat OpenShift documentation.

  Requirement for an air-gapped environment: If you want to back up and restore your App Connect resources in an air-gapped cluster that is not connected to the internet, your object store must be accessible from within the restricted network.
Installing the OADP Operator
You can install the OADP Operator from the Red Hat OpenShift web console.
To install the Operator, search for the OADP Operator in the OperatorHub and then install it as described in Installing the OADP Operator in the Red Hat OpenShift documentation. Choose the stable-1.3 update channel and accept the remaining default settings.
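If you prefer to install the Operator from the command line instead of the web console, a Subscription manifest along the following lines can be applied. This is a sketch only: the package name (redhat-oadp-operator) and catalog source shown here are assumptions based on the standard Red Hat catalog, and might differ in your cluster.

```yaml
# Sketch of a CLI-based OADP Operator installation. The package and catalog
# names are assumptions; verify them with 'oc get packagemanifests'.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.3
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

An OperatorGroup that targets the openshift-adp namespace must also exist before the Subscription resolves; the web console installation creates one for you automatically.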
By default, the Operator is installed in the openshift-adp namespace (or project). When the installation completes, you can click View installed Operators in Namespace and then click OADP Operator to view details about the APIs that are provided. On the Details tab, these APIs are presented as a series of tiles. You will use the API named DataProtectionApplication in a subsequent task to create a Data Protection Application resource that defines configuration settings for your backup.
Making the custom Velero plugin accessible to your cluster's nodes
To enable the custom Velero plugin to be installed, you need to specify its image and name when you configure your backup. Therefore, you must ensure that the registry where the plugin is stored is accessible to your cluster's nodes.
In an online cluster with access to public registries, you can pull the custom plugin image directly from the IBM Cloud Container Registry on the Docker server cp.icr.io. If you don't have a Kubernetes pull secret that allows you to pull App Connect images from this registry, you need to first obtain an entitlement key. You can then either add the entitlement key as a pull secret to the openshift-adp namespace where the OADP Operator is installed, or add it to the global image pull secret for all namespaces in your cluster.

- To obtain an entitlement key, complete the steps in Obtaining an entitlement key.
- To add the entitlement key to the openshift-adp namespace, complete the steps in Adding an entitlement key to a namespace.

If you prefer to add the entitlement key to the global image pull secret, you can update this secret to add the cp username, and the entitlement key as the password. For more information, see Updating the global cluster pull secret in the Red Hat OpenShift documentation.
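As an illustrative sketch (not the documented procedure), adding the entitlement key as a pull secret from the command line might look as follows. The secret name ibm-entitlement-key and the IBM_ENTITLEMENT_KEY environment variable are assumptions, and the command is skipped when no cluster login is available.

```shell
# Create a docker-registry pull secret for cp.icr.io in the OADP namespace.
# Assumptions: secret name "ibm-entitlement-key"; key in $IBM_ENTITLEMENT_KEY.
NS=openshift-adp
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc create secret docker-registry ibm-entitlement-key \
    --docker-server=cp.icr.io \
    --docker-username=cp \
    --docker-password="$IBM_ENTITLEMENT_KEY" \
    -n "$NS"
else
  echo "not logged in to an OpenShift cluster; skipping"
fi
```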
If you are using an air-gapped cluster, pull the custom plugin image from the IBM Cloud Container Registry to an internal (or local) Docker registry in your restricted network. (Ensure that the local registry allows path separators in the image name.)

- To obtain the entitlement key that is required to pull images from the IBM Cloud Container Registry, complete the steps in Obtaining an entitlement key.
- Log in to the cp.icr.io Docker server by running the docker login command with cp as your username and your entitlement key as the password.

      docker login cp.icr.io -u cp -p myEntitlementKey

- Use Docker to pull the custom plugin image.

      docker pull cp.icr.io/cp/appc/acecc-velero-plugin-prod@sha256:c59d255bd5bcbc9603a35378ed04686757d3b32ea3b8c27a07f834988aa85665

  For a list of each image that is provided for an IBM App Connect Operator version, see Image locations for the custom Velero plugin in the IBM Cloud Container Registry.
- Use docker login to log in to your local registry. Then, push the image to this registry.
- Ensure that the local registry is accessible from your cluster's nodes. To configure your cluster with authentication credentials to pull the custom plugin image from the local registry, complete the following steps:
  - From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  - Create a pull secret, which contains the username and password for authenticating to the local registry, in the openshift-adp namespace. Alternatively, add these credentials to the global image pull secret for your cluster as described in Updating the global cluster pull secret in the Red Hat OpenShift documentation.
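The mirroring flow described above amounts to the following command sequence. The block is a dry run that only prints the commands; LOCAL_REGISTRY and the :velero-plugin tag are placeholders (not values from the product documentation), and a digest reference must be retagged before it can be pushed to another registry.

```shell
# Dry run: print the docker commands for mirroring the custom Velero plugin
# image into a local registry. LOCAL_REGISTRY and the ":velero-plugin" tag
# are placeholders.
IMAGE='cp.icr.io/cp/appc/acecc-velero-plugin-prod@sha256:c59d255bd5bcbc9603a35378ed04686757d3b32ea3b8c27a07f834988aa85665'
LOCAL_REGISTRY='registry.example.com:5000'
LOCAL_IMAGE="$LOCAL_REGISTRY/cp/appc/acecc-velero-plugin-prod:velero-plugin"
for cmd in \
  "docker pull $IMAGE" \
  "docker tag $IMAGE $LOCAL_IMAGE" \
  "docker push $LOCAL_IMAGE"
do
  echo "$cmd"
done
```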
Configuring your object store for backup and restore operations
To use your object store as a backup location, you need to configure the object store for OADP. The way in which you do so is specific to the object store provider, but basic requirements apply.
- Create a storage (S3) bucket or storage container in which to store the App Connect backup.
- Create an account with authentication credentials to access the object store, and assign permissions to define what actions are allowed on the bucket or container for backup and restore.
- Create a credentials-velero file that contains the account credentials for accessing the object store. The contents of this file depend on your object store and whether you want to back up not just your resources to a backup location, but also back up persistent volumes (PVs) to a snapshot location.

  The following example shows the credentials-velero file contents for AWS or S3-compatible object storage, where the same credentials are used for the backup and snapshot locations under a default profile.

      # cloud
      [default]
      aws_access_key_id=AWS_ACCESS_KEY_ID
      aws_secret_access_key=AWS_SECRET_ACCESS_KEY
- If your backup and snapshot locations use the same credentials or if you do not require a snapshot location, create a default secret in the openshift-adp namespace to store the credentials in your credentials-velero file. You will need to specify this secret when you configure your backup later. (If you are using different credentials for your backup and snapshot locations, you might need to create two secrets if required for your object store.) In these instructions, it is assumed that the same credentials are used for the backup and snapshot locations for the S3-compatible object storage MinIO.
  - From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  - Create a secret named cloud-credentials, with a key (cloud), and the file path and name of the credentials-velero file.

        oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
For more information about configuring object storage for specific providers, see the following topics in the Red Hat OpenShift documentation:

- Configuring Amazon Web Services and About backup and snapshot locations and their secrets
- Configuring Microsoft Azure and About backup and snapshot locations and their secrets
- Configuring Google Cloud Platform and About backup and snapshot locations and their secrets
- Multicloud Object Gateway and About backup and snapshot locations and their secrets
For other S3-compatible object storage, see the documentation for the object storage provider.
Configuring backup by using the Data Protection Application
To run backup and restore operations, you need to first define your backup configuration settings, which identify details such as the object store provider, the location and credentials for the backup location, the Velero plugins to install, and the snapshot location. You define these configuration settings by deploying a DataProtectionApplication resource, which is one of the APIs that the OADP Operator provides.

To deploy a DataProtectionApplication resource, complete the following steps:
- From the navigation in the Red Hat OpenShift web console, click Operators > Installed Operators.
- Ensure that the openshift-adp namespace (or project) is selected and then click OADP Operator.
- From the Details tab on the Operator details page, locate the DataProtectionApplication tile and click Create instance.
- From the Create DataProtectionApplication page, complete the fields in the Form view, or switch to the YAML view to define your settings.

  The values that you specify depend on the type of object store, as detailed in the Red Hat OpenShift documentation. For example, if you are using an AWS object store, follow the instructions in Configuring the OpenShift API for Data Protection with Amazon Web Services.
The following example shows the YAML settings for backing up resources to an S3-compatible MinIO object store, where:

- The spec.backupLocations.velero block defines the endpoint URL for accessing the object store instance, and the credentials (key and secret) for authenticating to the object store. (In this example, the same credentials are used for the backup and snapshot locations.) The block also specifies a bucket to use as the backup (and default) storage location, and identifies the backup storage provider. The text string velero is also specified as a prefix (or path inside a bucket), which can be applied to Velero backups if the bucket is used for multiple purposes.
- The spec.configuration.velero block identifies the default Velero plugins (mandatory and storage-specific) to install. This block also identifies the image for the custom Velero plugin in the IBM Cloud Container Registry. This image is pulled from the registry by using a pull secret that stores your entitlement key. For a list of each custom Velero plugin image that is provided for an IBM App Connect Operator version, see Image locations for the custom Velero plugin in the IBM Cloud Container Registry.
- The spec.configuration.restic block enables Restic installation for the PVs.
- The spec.snapshotLocations.velero block identifies settings for the snapshot location.

      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: oadp-minio
        namespace: openshift-adp
      spec:
        backupLocations:
          - velero:
              config:
                profile: default
                region: minio
                s3ForcePathStyle: 'true'
                s3Url: 'http://yourS3StorageLocation:9000'
              credential:
                key: cloud
                name: cloud-credentials
              default: true
              objectStorage:
                bucket: acm-backup
                prefix: velero
              provider: aws
        configuration:
          restic:
            enable: true
          velero:
            customPlugins:
              - image: 'cp.icr.io/cp/appc/acecc-velero-plugin-prod@sha256:c59d255bd5bcbc9603a35378ed04686757d3b32ea3b8c27a07f834988aa85665'
                name: appcon-plugin
            defaultPlugins:
              - openshift
              - aws
              - kubevirt
        snapshotLocations:
          - velero:
              config:
                profile: default
                region: minio
              provider: aws
- Click Create.

  Tip: You can alternatively use the command line to deploy the DataProtectionApplication resource by defining your YAML manifest in a file (for example, DataProtectionApplication.yaml) and then running the following command.

      oc apply -f DataProtectionApplication.yaml
- To verify that the installation was successful, complete the following steps:
- Run this command to view the OADP resources in the openshift-adp namespace.

      oc get all -n openshift-adp

  You should see a list of resources such as pods, services, daemon sets, deployments, replica sets, and image streams. Look for an entry for the Velero pod, which is shown in the format pod/velero-uniqueID (for example, pod/velero-75b9bdc98d-c7t7z). The pod should display a status of Running to indicate that the Velero instance is running.
- Verify that the deployed DataProtectionApplication resource is reconciled. Replace dpa-sample with the name of your DataProtectionApplication resource (oadp-minio in the earlier example).

      oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

  In the output, the type value should be set to Reconciled.
- Verify that the Velero pod has the custom plugin defined.

      oc get pods -n openshift-adp veleroPodName -o yaml

  For example:

      oc get pods -n openshift-adp velero-75b9bdc98d-c7t7z -o yaml

  In the status section of the output, check for the custom plugin name (for example, appcon-plugin) that you specified in the DataProtectionApplication custom resource (CR).
- Verify that a BackupStorageLocation object is available. This object is generated for the deployed DataProtectionApplication resource within the openshift-adp namespace, and identifies the bucket where backup objects are stored, and the S3 URL and credentials for accessing this bucket.

      oc get backupStorageLocation -n openshift-adp

  In the output, the BackupStorageLocation object is named in the format DPAname-1. Verify that PHASE has a value of Available:

      NAME           PHASE       LAST VALIDATED   AGE    DEFAULT
      oadp-minio-1   Available   30s              203d   true

  Tip: From the Red Hat OpenShift web console, you can view the BackupStorageLocation object by accessing the BackupStorageLocation tab for the OADP Operator in the openshift-adp namespace.
Your system is now configured and ready to run backup and restore operations by using OADP.
Backing up your App Connect resources and PVs
You can specify the resources to back up and the storage location by using a supplied backup-script.sh script. This script backs up only App Connect resources and PVs.
Before you run the backup, ensure that your deployed DataProtectionApplication resource is in a Ready state. Also wait for a period of minimal activity to run the backup script, to ensure that the correct state is captured for resources such as flows.
To back up your App Connect resources and PVs, complete the following steps:
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- Download the attached backup-script.zip file, which contains the backup-script.sh script.
- Extract the contents of the .zip file to a directory on your local computer.
- Navigate to this directory and then run the backup-script.sh script, where:
- namespaces_to_back_up identifies one or more namespaces that contain resources to be backed up. Use a comma separator if you need to specify multiple namespaces.
- backupName is a unique name for this backup.
./backup-script.sh namespaces_to_back_up backupName
For example:
./backup-script.sh ace appconnect-backup01
Tip: Make a note of the backupName value because you will need to specify it if you need to restore the backup. You can also find this value later by running this command:

    oc get backup -n openshift-adp
The following sequence of steps occurs when the backup script runs:
- The script applies resource-specific labels to the primary custom resources (CRs) for all your App Connect Dashboard instances, App Connect Designer instances, switch servers, integration servers, integration runtimes, and configuration objects, which it finds in the specified namespaces. Managed or secondary resources that are owned by the deployed primary CRs are similarly labeled; for example, secrets, deployments, services, pods, replica sets, and persistent volume claims.
- The script applies labels to your cluster-scoped or namespace-scoped IBM App Connect Operator subscription, and any related OperatorGroup and CatalogSource resources.
- The script generates a Backup CR in the openshift-adp namespace with the following specification:
  - Assigns the backupName value that you specified earlier to metadata.name.
  - Identifies which resources to include in the backup based on the labels, and which resources to exclude from the backup.
  - Assigns the name of the BackupStorageLocation object, which represents the storage location for the backed-up data, to spec.storageLocation.
- The script deploys the Backup CR and the OADP Operator initiates the backup. The backup runs in the background with no disruption to your system.
- Wait for the backup to complete. The following messages are output to indicate that the backup is in progress, and to subsequently confirm its completion.

      Waiting for backup to complete ...
      Backup has completed

  Tip: From the Red Hat OpenShift web console, you can view the Backup CR that was generated by accessing the Backup tab for the OADP Operator in the openshift-adp namespace. A successful backup is assigned a Completed status.
- To verify that the backup was successful, complete the following steps:
- Inspect the backup that was created in your object store bucket.
- Run a test restore operation that uses the backup. For example, restore the backup to a designated cluster as described in Restoring your App Connect resources and PVs from a backup. When the operation completes, verify that you can see all your App Connect Designer instances (including the flows and accounts), App Connect Dashboard instances (including the BAR files in the content server), switch servers, integration servers, integration runtimes, and configuration objects. Workload and networking artifacts should also be restored.
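For reference, the Backup CR that the script generates follows the standard velero.io/v1 Backup shape. The following is a simplified sketch only: the label selector key shown here is hypothetical (the script applies its own labels), and the storage location name depends on your DataProtectionApplication resource.

```yaml
# Simplified sketch of a Velero Backup CR. The real CR is generated by
# backup-script.sh; the label key below is hypothetical.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: appconnect-backup01
  namespace: openshift-adp
spec:
  includedNamespaces:
    - ace
  labelSelector:
    matchLabels:
      backup.appconnect.ibm.com/component: backup   # hypothetical label key
  storageLocation: oadp-minio-1
```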
Restoring your App Connect resources and PVs from a backup
If you need to restore App Connect resources and PVs that were previously backed up, you can initiate the restore operation by deploying a Restore resource in your cluster, or by running a supplied restore-script.sh script. You can restore a backup into the same cluster (for an in-place recovery) or into a new cluster (which is typical for disaster recovery).
Before you begin
Before you run the restore operation, ensure that the following prerequisites are met:
- If you are restoring the backup in the same cluster, ensure that your deployed DataProtectionApplication resource is in a Ready state.
- Ensure that your object store instance where the backup is stored is available, and that the credentials that were used to access the bucket to back up data are still valid.
- If you intend to restore the backup in a new cluster, prepare the new cluster as follows:
  - Ensure that the cluster has the same hostname, configuration, and access as the cluster that you backed up from. For example, ensure that the cluster can access the storage location that contains the backups. The cluster must also have the same storage classes as the original cluster, with the same names that are used by the instances that you back up.
  - Install and configure the OADP Operator and then deploy a DataProtectionApplication resource that has the same configuration as the resource in the old cluster.
Restoring a backup

Whether you deploy a Restore resource or run a script to restore a backup, you need to specify the name that was assigned when creating the backup. If you need to, you can find the backup name by running the following command or by accessing the Backup tab for the OADP Operator.

    oc get backup -n openshift-adp
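The same lookup can be scripted. The following guarded sketch prints one backup name per line, and skips itself when no cluster login is available.

```shell
# List the names of Backup CRs recorded in the openshift-adp namespace.
NS=openshift-adp
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc get backup -n "$NS" \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
else
  echo "not logged in to an OpenShift cluster; skipping"
fi
```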
When the restore operation runs, the backup name is used to identify which resources need to be restored, and the resources are then recreated and started where relevant. The resources are restored in an ordered sequence.
To initiate a restore operation for a backup that you created earlier, complete the following steps:
- Choose your preferred method for running the restore operation:
- In the namespace where the OADP Operator is installed, deploy a Restore resource that defines which resources you want to restore from a named backup.
  - From your local computer, create a text file (for example, appconnect_restore.yaml) with the following YAML content, where:
    - metadata.name is the name of the Restore resource; for example, full-appconnect-restore.
    - spec.backupName is the name of the backup that you want to restore; for example, appconnect-backup01.
    - excludedResources identifies which resources the restore operation should exclude.
    - restorePVs is set to true to indicate that persistent volumes should be restored.

          apiVersion: velero.io/v1
          kind: Restore
          metadata:
            name: restoreObjectName
            namespace: openshift-adp
          spec:
            backupName: backupName
            excludedResources:
              - nodes
              - events
              - events.events.k8s.io
              - backups.velero.io
              - restores.velero.io
              - resticrepositories.velero.io
              - clusterserviceversions
              - installplan
            restorePVs: true
- Deploy the resource in either of the following ways.
  - Log in to the Red Hat OpenShift web console. Click the Import YAML icon, and then copy and paste the YAML content into the Import YAML editor. Then, click Create.
  - From the command line, log in to your Red Hat OpenShift cluster by using the oc login command. Then, deploy the Restore resource as follows:

        oc apply -f appconnect_restore.yaml
- Check the status of the restore operation in either of the following ways.
  - From the Red Hat OpenShift web console, access the Restore tab for the OADP Operator in the openshift-adp namespace. A successful restore object is assigned a Completed status.
  - Run the following command.

        oc describe restore restoreObjectName -n openshift-adp

    In the output, check the Status section. If the restore operation is still running, the Phase value might be shown as InProgress, and the Progress details should indicate the total number of items to be restored and the number of items that have currently been restored.

    Alternatively, run the following command.

        oc get restore restoreObjectName -n openshift-adp -o jsonpath='{.status.phase}'

    When the restore is complete, this command returns the value Completed.
- Run the restore-script.sh script.
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- Download the attached restore-script.zip file, which contains the restore-script.sh script.
- Extract the contents of the .zip file to a directory on your local computer.
- Navigate to this directory and then run the restore-script.sh script with the backup name that you specified when backing up your data.

      ./restore-script.sh backup_to_restore_from

  For example:

      ./restore-script.sh appconnect-backup01

- Wait for the script to complete. The following messages are output to indicate that the restore is in progress, and to confirm its completion.

      Waiting for restore to complete ...
      Restore has completed
- To check whether the restore operation has been completed, run the following command.

      oc get restore restoreObjectName -n openshift-adp -o jsonpath='{.status.phase}'
- To verify that all your App Connect resources have been restored, run the following command to list the resources that are in the namespace where the IBM App Connect Operator is installed.

      oc get all -n namespaceName

  You should see a list of resources such as pods, services, deployments, replica sets, stateful sets, jobs, routes, Dashboard instances, Designer instances, integration servers, integration runtimes, switch servers, and configuration objects.

  You can also check the persistent volume claims, which should display a STATUS value of Bound.

      oc get pvc -n namespaceName
Troubleshooting
If you encounter any issues with OADP backup and restore, see Troubleshooting in the Red Hat OpenShift documentation.
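As a starting point for diagnosis, the Velero deployment's logs and the recorded Backup and Restore CRs usually hold the relevant detail. The following guarded sketch shows one way to collect them, and skips itself when no cluster login is available.

```shell
# Inspect OADP state: recent Velero logs plus the Backup and Restore CRs.
NS=openshift-adp
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc logs deployment/velero -n "$NS" --tail=100
  oc get backup,restore -n "$NS"
else
  echo "not logged in to an OpenShift cluster; skipping"
fi
```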