Domain Configuration

The DataPower Operator supports managing DataPower configuration through the domains spec on the DataPowerService custom resource. This allows for independent management of each application domain on the DataPower Gateway.

Before continuing with the guides below, familiarize yourself with the domains API documentation.

Configuring a domain with dpApp

This example shows the complete end-to-end flow for creating all resources necessary to configure a domain via the dpApp configuration method.

Start with a testdomain directory structured as follows:

$ ls
testdomain

$ tree
.
`-- testdomain
    |-- config
    |   `-- testdomain.cfg
    `-- local
        `-- test.xsl

3 directories, 2 files

Create the config ConfigMap from the cfg file:

kubectl create configmap testdomain-config \
  --from-file=/path/to/testdomain/config/testdomain.cfg

Create a tarball for the local files:

tar --directory=/path/to/testdomain/local -czvf testdomain-local.tar.gz .

The created tarball should have the local file(s) at the top-level:

$ tar -tzvf testdomain-local.tar.gz
drwxrwxr-x admin/admin   0 2020-04-09 15:37 ./
-rw-rw-r-- admin/admin  14 2020-04-09 15:27 ./test.xsl

Create the local ConfigMap from the tarball:

kubectl create configmap testdomain-local \
  --from-file=/path/to/testdomain-local.tar.gz
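
At this point both ConfigMaps should exist in the target namespace. As a quick check (assuming they were created in the same namespace where the DataPowerService will run), list them:

kubectl get configmap testdomain-config testdomain-local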

DataPowerService snippet with the domains spec for testdomain:

spec:
  domains:
  - name: "testdomain"
    dpApp:
      config:
      - "testdomain-config"
      local:
      - "testdomain-local"

Once deployed via the DataPower Operator, the config and local files can be validated from the DataPower CLI.
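
If you need to find the Pod name first, list the Pods in the namespace (the exact Pod names depend on your deployment):

kubectl -n <namespace> get pods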

To attach to a given DataPower Pod:

kubectl -n <namespace> attach -it pod/<pod-name> -c datapower

From the DataPower CLI (once logged in):

idg# config
idg(config)# switch testdomain
idg[testdomain](config)# dir local:
   File Name                    Last Modified                    Size
   ---------                    -------------                    ----
   test.xsl                     Apr 9, 2020 3:54:03 PM           14

   227782.0 MB available to local:

idg[testdomain](config)# dir config:
   File Name                    Last Modified                    Size
   ---------                    -------------                    ----
   testdomain.cfg               Apr 9, 2020 3:54:03 PM           24

   227782.0 MB available to config:

Adding certs to a domain

Creating a generic Secret

To create a generic Secret containing your crypto material (for example, a certificate and its private key), use:

kubectl create secret generic <my-crypto-secret> \
  --from-file=/path/to/cert \
  --from-file=/path/to/key
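
To verify that the Secret contains the expected entries without printing their values, describe it (the Secret name here is a placeholder):

kubectl describe secret <my-crypto-secret>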

Adding the Secret to certs

domains:
- name: "example"
  certs:
  - certType: "usrcerts"
    secret: "<my-crypto-secret>"

Creating a TLS Secret

To create a tls Secret containing a TLS key / cert pair, use:

kubectl create secret tls <my-tls-secret> \
  --cert=/path/to/my.crt \
  --key=/path/to/my.key
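
A Secret created this way has type kubernetes.io/tls and stores the pair under the tls.crt and tls.key keys. To confirm the type:

kubectl get secret <my-tls-secret>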

Adding the Secret to certs

domains:
- name: "example"
  certs:
  - certType: "usrcerts"
    secret: "<my-tls-secret>"

Updating Domains

As described above, Domains are backed by ConfigMaps and Secrets that exist in the Kubernetes cluster. Over time, a Domain's configuration will likely need to be updated, including the config and local ConfigMaps and the Secrets that provide the Domain's certs. These ConfigMaps and Secrets can be updated in place in the cluster, and the DataPower Operator will automatically reconcile those changes.

When an update is detected on a ConfigMap or Secret referenced by a Domain, a rolling update is triggered across the Pods in the StatefulSet. This allows for Domains to be updated without modifying the DataPowerService Custom Resource directly.
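
While such a rolling update is in progress, you can watch it complete with kubectl rollout status. As a sketch (the namespace and StatefulSet name are placeholders; the StatefulSet name depends on your DataPowerService name):

kubectl -n <namespace> rollout status statefulset/<statefulset-name>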

For example, if we define the following Domain spec:

spec:
  domains:
  - name: "testdomain"
    certs:
    - certType: "usrcerts"
      secret: "testdomain-certs"
    dpApp:
      config:
      - "testdomain-config"
      local:
      - "testdomain-local"

We could then update any of the following objects by deleting them in the cluster and recreating them with the same name:

  • secret/testdomain-certs
  • configmap/testdomain-config
  • configmap/testdomain-local
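
For example, a sketch of updating the config ConfigMap in place (the path is a placeholder; the ConfigMap must be recreated with the same name so the Domain continues to reference it):

# Recreate the config ConfigMap with updated content
kubectl delete configmap testdomain-config
kubectl create configmap testdomain-config \
  --from-file=/path/to/testdomain/config/testdomain.cfg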

The StatefulSet includes annotations that hold hashes of the currently reconciled resources for each domain. You can view these by inspecting the StatefulSet resource in the cluster.
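
For example, assuming the StatefulSet is named example-dpservice (a hypothetical name derived from the DataPowerService), the Pod template annotations can be retrieved with:

kubectl -n <namespace> get statefulset example-dpservice -o yaml

The output includes annotations similar to the following: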

apiVersion: apps/v1
kind: StatefulSet
metadata:
  ...
spec:
  ...
  template:
    metadata:
      annotations:
        datapower.ibm.com/domains.default.reconciled: fd1a685cd12d5eeffd0d6c40209483ab80d8e87c216a026b209f089f13a821fb
        datapower.ibm.com/domains.test-domain.reconciled: 2fcc805fec16d4f041c7eef5ca86eabc8598ef122926455c064f1d0691b1411d

Batching multiple updates

One complication of the Domain reconciliation described above is that changing multiple ConfigMaps and/or Secrets in quick succession may trigger multiple rolling updates and cause unnecessary churn among the DataPower Pods.

To avoid this, we can use the DataPowerService's pause annotation to temporarily pause reconciliation while we make changes to the ConfigMaps and Secrets used by the DataPowerService Custom Resource. Once all changes have been made, we remove the pause annotation to resume reconciliation and roll out all of the changes in a single update.

For example, suppose we have a DataPowerService Custom Resource with configuration provided by multiple ConfigMaps, such as one that defines multiple domains:

apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: example-dpservice
spec:
  domains:
  - name: appdomain1
    dpApp:
      config:
      - appdomain1-config
  - name: appdomain2
    dpApp:
      config:
      - appdomain2-config

Now suppose we wish to make changes to each domain's configuration. If we modify the appdomain1-config ConfigMap and then the appdomain2-config ConfigMap, even within a few seconds of each other, we will most likely see a rolling update for each updated ConfigMap, i.e., each DataPower Pod would be updated twice rather than just once.

To batch the changes, we can use kubectl edit to manually add the pause annotation to the DataPowerService. Alternatively, we can use the kubectl patch command to add the pause annotation without manually editing the DataPowerService Custom Resource, like so:

# Pause DataPowerService reconciliation
kubectl -n $KUBE_NAMESPACE patch dp $DPSERVICE_NAME --type='merge' -p='{"metadata":{"annotations":{"datapower.ibm.com/pause":"true"}}}'

Whichever method we use to add the pause annotation, we should now see datapower.ibm.com/pause: "true" under the DataPowerService Custom Resource's annotations:

apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  annotations:
    datapower.ibm.com/pause: "true"
  name: example-dpservice
spec:
  domains:
  - name: appdomain1
    dpApp:
      config:
      - appdomain1-config
  - name: appdomain2
    dpApp:
      config:
      - appdomain2-config

We can confirm that reconciliation is paused by checking for the ReconciliationPausedWarning condition on the DataPowerService status.
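
For example, using the same variables as above:

kubectl -n $KUBE_NAMESPACE get dp $DPSERVICE_NAME -o yaml

The status should include a condition similar to the following: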

status:
  conditions:
  - lastTransitionTime: "2023-06-05T18:40:30Z"
    message: Reconciliation is paused indefinitely
    reason: ReconciliationPausedIndefinitely
    status: "True"
    type: ReconciliationPausedWarning

At this point we can make changes to any number of ConfigMaps and Secrets referenced by the DataPowerService's Domains without triggering a rolling update.

Once we have finished, we can remove the pause annotation to resume reconciliation. Again, we can use kubectl patch:

# Resume DataPowerService reconciliation
kubectl -n $KUBE_NAMESPACE patch dp $DPSERVICE_NAME --type='json' -p='[{"op": "remove", "path": "/metadata/annotations/datapower.ibm.com~1pause"}]'

We should now see that the pause annotation has been removed:

apiVersion: datapower.ibm.com/v1beta3
kind: DataPowerService
metadata:
  name: example-dpservice
spec:
  domains:
  - name: appdomain1
    dpApp:
      config:
      - appdomain1-config
  - name: appdomain2
    dpApp:
      config:
      - appdomain2-config

Once reconciliation has resumed, all of the changes will be rolled out in a single update.