IBM DataPower Gateway transformation guide
This guide provides the steps needed to migrate an on-premises IBM DataPower Gateway deployment to IBM Cloud Pak for Integration.
Prepare for your migration
Confirm that you have installed IBM Cloud Pak for Integration and deployed the Gateway operator (datapower-operator). If you have not done so already, see Installing.
Additionally, ensure that you have:
working knowledge of our latest DataPowerService API version (v1beta3).
familiarity with the domains API (Guide: Domain configuration).
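Optionally, you can confirm from the command line that the operator is deployed; a minimal sketch, assuming you are logged in to the cluster with oc and the operator is installed in your current project:
# Confirm the DataPower operator is installed and running (names vary by release)
oc get csv | grep -i datapower
oc get pods | grep -i datapower-operator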
Migrate
Overview
Complete the following tasks to migrate your IBM DataPower Gateway deployment to Cloud Pak for Integration:
Backing up your existing IBM DataPower Gateway deployment - Extract the IBM DataPower Gateway configuration from an existing physical or virtual appliance in the form of a backup archive.
Generating ConfigMap YAMLs for application domains - Format the configuration for deployment in Kubernetes.
Creating ConfigMaps in OpenShift cluster - Migrate your standard IBM DataPower Gateway configuration (*.cfg files) and files in the local:/// file system to ConfigMaps in Kubernetes.
Creating Kubernetes Secrets in your OpenShift cluster - Create the crypto and TLS material needed by your IBM DataPower Gateway service configurations.
Creating an admin user credential - Create Gateway user credentials (for example, the admin password) and migrate them to Secrets in Kubernetes.
Building the DataPowerService resource spec - Create a DataPowerService resource spec to deploy the configuration, such that the resulting IBM DataPower Gateway pods are functionally equivalent to the source IBM DataPower Gateway.
Deploying the DataPowerService resource - Deploy the DataPowerService custom resource in your cluster to complete the migration.
Backing up your existing IBM DataPower Gateway deployment
Log in to your existing IBM DataPower Gateway WebGUI as the admin user.
From the default domain, select Export Configuration in the Control Panel.
Select either Create a backup of one or more application domains or Create a backup of the entire system, depending on how many domains you intend to migrate.
Follow the system prompts.
Once complete, download the backup ZIP.
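Before generating ConfigMaps, you may want to confirm that the archive contains the domains you expect; a quick check with standard tooling (the exact archive layout depends on your export options):
# List the contents of the downloaded backup archive
unzip -l backup.zip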
Generating ConfigMap YAMLs for application domains
Download the migrate-backup.sh script. This script automatically generates the ConfigMaps for each application domain from an IBM DataPower Gateway backup:
a ConfigMap YAML for each domain's cfg file
a ConfigMap YAML for each domain's local:/// file system
Run the script against your backup ZIP.
./migrate-backup.sh backup.zip
If you only wish to migrate a single domain, specify the -d or --domain argument. For example:
./migrate-backup.sh --domain test backup.zip
Inspect the script output to see where the generated files were written, for example:
YAML will be generated in: backup-output
Review the generated YAML files.
Files ending in -cfg.yaml contain the domain's configuration in cfg format.
Files ending in -local.yaml contain the domain's local:/// file system in .tar.gz format.
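As an illustration, assuming the backup contained a single application domain named example, the output directory would hold one file of each type (the actual file names depend on your domain names):
ls backup-output
# example-cfg.yaml  example-local.yaml   (hypothetical listing)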
You apply the YAML (both formats) in the next step.
Creating ConfigMaps in OpenShift cluster
In the appropriate OpenShift project (namespace), apply the generated YAML for each domain that you wish to migrate. These ConfigMaps will be used later in Building the DataPowerService resource spec.
Using the oc CLI, switch to the project (namespace) in which you wish to deploy the migrated IBM DataPower Gateway.
oc project <namespace>
Apply the generated YAML files. Be sure to apply all YAML files for each domain you wish to migrate.
Example for a single domain with one cfg YAML and one local YAML:
cd backup-output
oc apply -f domain-cfg.yaml
oc apply -f domain-local.yaml
Example using bash scripting to apply all YAMLs from the backup at once:
for yaml in $(find backup-output -name '*.yaml'); do
  oc apply -f "$yaml"
done
Once the YAML is applied, check the cluster to ensure that everything looks correct:
oc get configmap
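To look more closely at one of the ConfigMaps, a sketch assuming a generated ConfigMap named example-cfg (substitute one of your own domain names):
# Show metadata and data keys for a generated ConfigMap
oc describe configmap example-cfg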
Creating Kubernetes Secrets in your OpenShift cluster
The TLS keys and certificates used by your IBM DataPower Gateway services must be stored in Kubernetes secrets. These secrets will be used later in Building the DataPowerService resource spec.
Gather the keys and certificates you wish to use.
Note: You cannot export the private keys from an existing physical or virtual appliance.
For each key/certificate pair or other set of crypto material, create a Secret with an appropriate name to reference later:
oc create secret tls <my-tls-secret> --cert=/path/to/my.crt --key=/path/to/my.key
or for generic (non-TLS) crypto:
oc create secret generic <my-crypto-secret> --from-file=/path/to/cert --from-file=/path/to/key
Refer to the Kubernetes documentation for details on the differences among Secret types.
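To verify the Secrets without printing their contents, you can use standard oc commands; the Secret name below is the placeholder from the preceding examples:
# List Secrets in the current project and inspect one (values are not displayed)
oc get secrets
oc describe secret <my-tls-secret>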
Creating an admin user credential
Following security best practices, the admin user credentials are stored in a Kubernetes Secret.
Create the Secret using oc, specifying the password on the command line, and note the name of the Secret, as you will need it later:
oc create secret generic admin-credentials --from-literal=password=admin
Example: In the preceding command, admin-credentials is the name of the Secret, and admin is the password.
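If you prefer not to put the password on the command line, where it can be captured in shell history, a minimal alternative sketch that prompts for it instead; the Secret name and key are unchanged:
# Prompt for the admin password without echoing it, then create the Secret
read -r -s -p "DataPower admin password: " DP_ADMIN_PASSWORD; echo
oc create secret generic admin-credentials --from-literal=password="$DP_ADMIN_PASSWORD"
unset DP_ADMIN_PASSWORD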
Building the DataPowerService resource spec
Recommended: Open our DataPowerService API docs for reference as you build your custom resource spec.
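One way to browse the available fields directly from your cluster is oc explain, assuming the installed operator publishes a schema for the DataPowerService CRD:
# Show the documented fields of the DataPowerService spec, and drill into a section
oc explain datapowerservice.spec
oc explain datapowerservice.spec.domains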
Open your editor of choice, and start with the following template:
apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: migration-example
spec:
  replicas: 1
  version: 10.5-lts
  license:
    accept: true
    use: production
    license: L-RJON-CCAT5F
Edit spec.version and spec.license to choose your desired firmware version and edition.
Refer to our Licenses guide for the appropriate values of spec.license.license and how they map to spec.version and spec.license.use.
Refer to the spec.license and spec.version API docs for details on these fields.
Add a spec.users definition for your admin user, using the Secret name (admin-credentials) created in the Creating an admin user credential step as the value of passwordSecret. See the spec.users API docs for details.
For example:
users:
  - name: admin
    accessLevel: privileged
    passwordSecret: admin-credentials
Add a spec.domains definition, with an entry for each application domain you wish to deploy. For reference, see the Domain configuration guide.
As an example, let's assume a domain name of example. Let's also assume that we created example-cfg and example-local ConfigMaps for this domain, containing its configuration and local:/// file system respectively. The spec.domains definition would be:
domains:
  - name: example
    dpApp:
      config:
        - example-cfg
      local:
        - example-local
Next, update the domain object to include any certs definitions, referencing the Secrets you created in an earlier step containing TLS or other crypto material.
As an example, let's assume we created a Secret named example-service that contains a TLS key/certificate pair for a service defined in this example application domain. The amended domains spec would be:
domains:
  - name: example
    certs:
      - certType: usrcerts
        secret: example-service
    dpApp:
      config:
        - example-cfg
      local:
        - example-local
Repeat the full procedure in this step for each application domain you wish to deploy.
You should now have a complete DataPowerService definition. Putting the preceding examples together would give us:
apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: migration-example
spec:
  replicas: 1
  version: 10.5-lts
  license:
    accept: true
    use: production
    license: L-RJON-CCAT5F
  users:
    - name: admin
      accessLevel: privileged
      passwordSecret: admin-credentials
  domains:
    - name: example
      certs:
        - certType: usrcerts
          secret: example-service
      dpApp:
        config:
          - example-cfg
        local:
          - example-local
Save the YAML to a file of your choosing. For subsequent examples, we'll use migration-example.yaml.
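Optionally, sanity-check the file before deploying it. A minimal sketch using dry runs: a client-side dry run confirms the YAML parses, while a server-side dry run also validates it against the CRD schema without persisting anything:
# Validate the manifest without creating anything in the cluster
oc apply -f migration-example.yaml --dry-run=client
oc apply -f migration-example.yaml --dry-run=server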
Deploying the DataPowerService resource
Create the DataPowerService resource in the cluster.
oc apply -f migration-example.yaml
Check the status of the deployment to ensure successful migration.
# full view
oc get all
# just the DataPowerService instance(s)
oc get dp
Refer to our guide on operand status for more information.
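For a closer look while the deployment rolls out, a sketch using standard oc commands (the exact status fields shown depend on the operator version):
# Watch the gateway pods come up, then inspect the DataPowerService in detail
oc get pods -w
oc describe dp migration-example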
If the DataPowerService is operational and Ready, continue on to modernizing your IBM DataPower Gateway workloads.
Modernize
Now that you have successfully migrated an existing IBM DataPower Gateway workload to IBM Cloud Pak for Integration on OpenShift, you can begin leveraging features that modernize your deployment.
Automatically scale your IBM DataPower Gateway pods horizontally or vertically using Pod Auto-Scaling.
Learn how the IBM DataPower Operator manages IBM DataPower Gateway upgrades in Operand Upgrades.
Fine-tune the topology and scheduling of IBM DataPower Gateway pods using the affinity, tolerations, and nodeSelector properties in the DataPowerService custom resource.
Check out our release notes for the latest details on features and changes within the IBM DataPower Operator.