Deploying operators in a multi-namespace API Connect cluster

Deploy the Kubernetes operator files in a multi-namespace cluster.

Before you begin

About this task

Note: If you are deploying a single-namespace API Connect cluster, do not use these instructions. Instead, go to Deploying operators in a single-namespace API Connect cluster.
  • These instructions apply only to native k8s deployments. They do not apply to OpenShift deployments.
  • The apiconnect operator is deployed as a cluster-scoped operator which watches every namespace in the cluster. The apiconnect operator can be installed in any namespace, although it is recommended that the API Connect operator is installed in its own dedicated namespace.
  • The DataPower operator must be deployed in the namespace where the gateway subsystem will be installed.
  • A single ingress-ca certificate must be installed and used across the various namespaces where the subsystems will be installed.


  1. Prepare your environment:
    1. Ensure KUBECONFIG is set for the target cluster:
      export KUBECONFIG=<path_to_cluster_config_YAML_file>

      An example path is /Users/user/.kube/clusters/<cluster_name>/kube-config-<cluster_name>.yaml

    2. Create the namespaces in which you plan to install the subsystems.

      The required namespaces are:

      • Apiconnect operator namespace
      • Gateway subsystem namespace
      • Management subsystem namespace
      • Portal subsystem namespace
      • Analytics subsystem namespace

      The following example values and deployment are used throughout these instructions:

      • The apiconnect operator is installed in the operator namespace
      • The DataPower operator and gateway subsystem are installed in the gtw namespace
      • The Management subsystem is installed in the mgmt namespace
      • The Portal subsystem is installed in the ptl namespace
      • The Analytics subsystem is installed in the a7s namespace
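      Assuming the example names above, the namespaces can be created up front. This is a sketch using the sample names from these instructions; substitute your own:

```shell
# Create one namespace per component (example names from these instructions)
kubectl create namespace operator
kubectl create namespace gtw
kubectl create namespace mgmt
kubectl create namespace ptl
kubectl create namespace a7s
```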
    3. Multi-namespace templates are provided in the helper_files archive, inside the distribution package that you downloaded in Obtaining product files. The templates used in the following steps are located in helper_files/multi-ns-support/.
  2. Install cert-manager and configure certificates:

    Use of a certificate manager adds convenience to the generation and management of certificates, but is not required. Whenever a custom resource (CR) takes a certificate secret name as input, you can point to any secret name, as long as the secret exists before you deploy the CR and contains the relevant certificate data (typically tls.crt, tls.key, and ca.crt files). See Certificates in a Kubernetes environment.

    You can obtain cert-manager v1.12.7 from the API Connect v10 distribution archive, or download it from the cert-manager project's releases page.

    1. Install cert-manager v1.12.7. Do not specify a custom namespace; cert-manager is installed in its own namespace.

      Do not use Step 2.a to install a cert-manager if:

      1. kubectl apply -f cert-manager-1.12.7.yaml
      2. Wait for cert-manager pods to enter Running 1/1 status before proceeding. Use the following command to check:
        kubectl -n cert-manager get po

        There are 3 cert-manager pods in total.

    2. Install ingress-ca certificate:
      1. Locate the ingress-ca certificate at helper_files/multi-ns-support/ingress-ca-cert.yaml
      2. Create the ingress-ca certificate in one of the subsystem namespaces. For example, to use the mgmt namespace:
        kubectl -n mgmt apply -f ingress-ca-cert.yaml
    3. Install common issuers on all the namespaces:
      1. Locate common-issuer.yaml in helper_files/multi-ns-support/
      2. Create common-issuer.yaml in all namespaces:
        kubectl create -f common-issuer.yaml -n mgmt 
        kubectl create -f common-issuer.yaml -n gtw 
        kubectl create -f common-issuer.yaml -n ptl 
        kubectl create -f common-issuer.yaml -n a7s 
    4. Obtain the ingress-ca secret created by cert manager and create the secret on all namespaces:

      The instructions in this step result in having a selfsigning-issuer in every namespace. This issuer is typically referenced by an API Connect CR in the namespace with:

        microServiceSecurity: certManager
        certManagerIssuer:
          name: selfsigning-issuer
          kind: Issuer

      This issuer (and associated root certificate) does not need to be the same in every namespace, nor does it need to be the same for 2 CRs in the same namespace. Each subsystem can use its own selfsigning-issuer, because it is only used for certificates that are internal to the subsystem.
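      For reference, a minimal self-signed Issuer of this kind can be sketched with the standard cert-manager v1 API as follows. This is illustrative only; the resource shipped in common-issuer.yaml is authoritative, and mgmt is just the example namespace:

```shell
# Sketch only: what a minimal selfsigning-issuer looks like (cert-manager v1 API).
# The shipped helper_files/multi-ns-support/common-issuer.yaml is authoritative.
kubectl -n mgmt apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigning-issuer
spec:
  selfSigned: {}
EOF
```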

      To ensure that the ingress-issuer is in sync across namespaces, it should be backed by the same ingress-ca certificate. This enables, for example, APIM to use a client certificate that matches the CA of the portal-admin ingress endpoint. The sync is achieved by copying the ingress-ca certificate into each namespace, as shown in the following steps. It does not matter whether the issuer is the same or has a different name. What is important is that the same ingress-ca backs all of the ingress-issuer resources that the subsystems use.

      1. Make sure ingress-ca certificate READY status is set to true:
        kubectl get certificate ingress-ca -n mgmt
        NAME         READY   SECRET       AGE
        ingress-ca   True    ingress-ca   9m24s
      2. Cert-manager creates a secret called ingress-ca in the mgmt namespace, which represents the certificate created in Step 2.b.
      3. Use the following command to obtain the yaml format of the secret:
        kubectl get secret ingress-ca -n <namespace-where-ingress-ca-cert-is-created> -o yaml > ingress-ca-secret-in-cluster.yaml
      4. Remove the cluster-specific metadata (creationTimestamp, namespace, uid, and resourceVersion) from the secret yaml:
        cat ingress-ca-secret-in-cluster.yaml | grep -v 'creationTimestamp' | grep -v 'namespace' | grep -v 'uid' | grep -v 'resourceVersion' > ingress-ca-secret.yaml
      5. Create the secret in the remaining subsystem namespaces using the yaml prepared in the previous step; in this example, the gtw, ptl, and a7s namespaces:
        kubectl create -f ingress-ca-secret.yaml -n gtw
        kubectl create -f ingress-ca-secret.yaml -n ptl
        kubectl create -f ingress-ca-secret.yaml -n a7s
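      To confirm the copy succeeded, you can check that the ingress-ca secret now exists in every subsystem namespace. This optional sanity check uses the example namespace names:

```shell
# The ingress-ca secret should be listed in all four subsystem namespaces
for ns in mgmt gtw ptl a7s; do
  kubectl -n "$ns" get secret ingress-ca
done
```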
    5. Install management certs in management namespace:
      1. Locate management-certs.yaml in helper_files/multi-ns-support/
      2. Create management-certs.yaml in the mgmt namespace:
        kubectl create -f management-certs.yaml -n mgmt
    6. Install gateway certs in the gateway namespace:
      1. Locate gateway-certs.yaml in helper_files/multi-ns-support/
      2. Create gateway-certs.yaml in the gtw namespace:
        kubectl create -f gateway-certs.yaml -n gtw
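      Before moving on, you can optionally confirm that cert-manager has reconciled the issuers and certificates; all certificates should report READY as True. The namespaces here are the example names used throughout:

```shell
# List cert-manager issuers and certificates in the management and gateway namespaces
kubectl -n mgmt get issuers,certificates
kubectl -n gtw get issuers,certificates
```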
  3. Install the apiconnect operator:
    1. Install the ibm-apiconnect CRDs with the following command. Do not specify a namespace:
      kubectl apply -f ibm-apiconnect-crds.yaml
    2. Create a registry secret with the credentials to be used to pull down product images with the following command, replacing <namespace> with the desired namespace for the apiconnect operator deployment:
      kubectl -n <namespace> create secret docker-registry apic-registry-secret \
                    --docker-username=<username> --docker-password=<password>
      • For example, replace <namespace> with operator.
      • The docker-password value can be your Artifactory API key.
      • You can omit -n <namespace> if you are installing into the default namespace.
    3. Locate and open ibm-apiconnect-distributed.yaml in a text editor of choice. Then, find and replace each occurrence of $OPERATOR_NAMESPACE with <namespace>, replacing <namespace> with the desired namespace for the deployment. In this example, the namespace is operator.
    4. Also in ibm-apiconnect-distributed.yaml, locate the image: keys in the containers sections of the deployment yaml, directly below imagePullSecrets:. Replace the REPLACE-DOCKER-REGISTRY placeholder value of each image: key with the docker registry host location of the apiconnect operator image (either uploaded to your own registry or pulled from a public registry).
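    The edits in Steps 3.c and 3.d can also be scripted, for example with GNU sed. Here, operator and my-registry.example.com are sample values, not defaults:

```shell
# Example only: replace the namespace and registry placeholders with sed.
# "operator" and "my-registry.example.com" are sample values for illustration.
sed -i 's/\$OPERATOR_NAMESPACE/operator/g' ibm-apiconnect-distributed.yaml
sed -i 's|REPLACE-DOCKER-REGISTRY|my-registry.example.com|g' ibm-apiconnect-distributed.yaml
```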
    5. Install ibm-apiconnect-distributed.yaml with the following command
      kubectl apply -f ibm-apiconnect-distributed.yaml
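      Before moving on to the DataPower operator, verify that the apiconnect operator deployment came up; operator is the example namespace used in these instructions:

```shell
# The ibm-apiconnect operator pod should reach Running status
kubectl -n operator get deployments,pods
```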
  4. Install DataPower operator:

    Deployment of the DataPower operator is only needed if you have at least one API Connect v10 Gateway subsystem (whether v5 compatible or not) to upgrade. If you are using DataPower in a different form factor such as an Appliance, you will not have an API Connect v10 Gateway subsystem to upgrade, and will not need the DataPower operator for your upgrade.

    Note: The DataPower Operator supports multiple instances of DataPower in the same cluster, separated by namespace. When deploying into multiple namespaces, ensure that any cluster-scoped resources are created with unique names for separate installations, such that they are not overwritten by subsequent installations. For example, DataPower cluster-scoped resources include ClusterRoleBindings.
    1. Create a registry secret for the DataPower registry with the credentials to be used to pull down product images with the following command, replacing <namespace> with the desired namespace for the deployment (in this case, gtw):
      kubectl -n <namespace> create secret docker-registry datapower-docker-local-cred \
                    --docker-username=<username> --docker-password=<password>
      • For example, replace <namespace> with gtw.
      • The docker-password value can be your Artifactory API key.
      • You can omit -n <namespace> if you are installing into the default namespace.
    2. Create a DataPower admin secret, replacing <namespace> with the desired namespace for the deployment. For example, gtw. This secret will be used for $ADMIN_USER_SECRET later in the Gateway CR:
      kubectl -n <namespace> create secret generic datapower-admin-credentials --from-literal=password=admin

      You can omit -n <namespace> if you are installing into the default namespace.

    3. Locate and open ibm-datapower.yaml in a text editor of choice. Then, find and replace each occurrence of default with <namespace>, replacing <namespace> with the desired namespace for the deployment. For example, gtw.
    4. Install ibm-datapower.yaml with the following command:
      kubectl -n <namespace> apply -f ibm-datapower.yaml
      Note: There is a known issue on Kubernetes version 1.19.4 or higher that can cause the DataPower operator to fail to start. In this case, the DataPower Operator pods can fail to schedule, and will display the status message: no nodes match pod topology spread constraints (missing required label). For example:
      0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 
      3 node(s) had taint { }, that the pod didn't tolerate.

      You can work around the issue by editing the DataPower operator deployment and re-applying it, as follows:

      1. Delete the DataPower operator deployment, if deployed already:
        kubectl delete -f ibm-datapower.yaml -n <namespace>
      2. Open ibm-datapower.yaml, and locate the topologySpreadConstraints: section. For example:
        - maxSkew: 1
          topologyKey: zone
          whenUnsatisfiable: DoNotSchedule
      3. Edit the section so that whenUnsatisfiable: is set to ScheduleAnyway and the topologyKey: line is removed, as shown in the example below:
        - maxSkew: 1
          whenUnsatisfiable: ScheduleAnyway
      4. Save ibm-datapower.yaml and deploy the file to the cluster:
        kubectl apply -f ibm-datapower.yaml -n <namespace>
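      Whether or not the workaround was needed, confirm that the DataPower operator pods are running before you install the gateway subsystem; gtw is the example namespace:

```shell
# DataPower operator pods should reach Running status in the gateway namespace
kubectl -n gtw get pods
```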
  5. Next, install the subsystems. Continue with Installing the API Connect subsystems.
    Note: Unless otherwise stated, when you work on an individual subsystem, set the <namespace> value of the example kubectl commands to the namespace of the particular subsystem component that you are acting on.