Deploying operators in a multi-namespace API Connect cluster
Deploy the Kubernetes operator files in a multi-namespace cluster.
Before you begin
- Ensure you completed the prerequisite tasks for installing operators. See Deploying operators and cert-manager.
- Be sure to review your strategy for using certificates with API Connect. See Deployment requirements.
About this task
- These instructions apply only to native k8s deployments. They do not apply to OpenShift deployments.
- The API Connect operator is deployed as a cluster-scoped operator that watches every namespace in the cluster. The API Connect operator can be installed in any namespace, although it is recommended that you install it in its own dedicated namespace.
- The DataPower operator must be deployed in the namespace where the gateway subsystem installation is planned.
- A single ingress-ca certificate must be installed and used across the various namespaces where the subsystems will be installed.
Procedure
- Prepare your environment:
  - Ensure KUBECONFIG is set for the target cluster:
      export KUBECONFIG=<path_to_cluster_config_YAML_file>
    An example path is /Users/user/.kube/clusters/<cluster_name>/kube-config-<cluster_name>.yaml
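    To confirm that kubectl is pointed at the intended cluster before proceeding, you can run a quick check; a minimal sketch (the path shown is hypothetical):
      export KUBECONFIG=/Users/user/.kube/clusters/mycluster/kube-config-mycluster.yaml
      kubectl config current-context
      kubectl cluster-info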
- Create namespaces for the subsystems that will be installed:
- API Connect operator namespace
- Gateway subsystem namespace
- Management subsystem namespace
- Portal subsystem namespace
- Analytics subsystem namespace
    Note: The following namespaces are reserved and must not be used for your installation:
    - default
    - openshift-*
    - kube-*
The following example values and deployment are used throughout these instructions:
    - The API Connect operator is installed in the operator namespace
    - The DataPower operator and the gateway subsystem are installed in the gtw namespace
    - The Management subsystem is installed in the mgmt namespace
    - The Portal subsystem is installed in the ptl namespace
    - The Analytics subsystem is installed in the a7s namespace
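    If you want to create the example namespaces listed above in one pass, a minimal sketch (the names follow the example values; adjust them to your own namespaces):
      kubectl create namespace operator
      kubectl create namespace gtw
      kubectl create namespace mgmt
      kubectl create namespace ptl
      kubectl create namespace a7s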
  - Multi-namespace templates are provided in helper_files.zip, inside the release_files.zip that you downloaded in Obtaining product files:
      helper_files_unpack/multi-ns-support/
- Install cert-manager and configure certificates:
Cert-manager adds convenience to the generation and management of API Connect certificates. For more information about cert-manager, see Key Concepts: Cert-manager, Issuers, and Secrets.
  You can obtain cert-manager v1.12.13 from the API Connect v10 distribution helper_files.zip archive, or download it from https://github.com/cert-manager/cert-manager/releases/tag/v1.12.13.
  - Install cert-manager version 1.12.13. Do not specify a custom namespace, because cert-manager will be installed in its own namespace.
    Note: If you are deploying a two data center disaster recovery deployment on Kubernetes, do not do this step. Instead, follow the steps in Installing cert-manager and certificates in a two data center deployment, and then return here.
    - Apply the CR:
        kubectl apply -f cert-manager-1.12.13.yaml
    - Wait for the cert-manager pods to enter Running 1/1 status before proceeding. Use the following command to check:
        kubectl -n cert-manager get po
      There are 3 cert-manager pods in total.
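      If you prefer a single blocking check over polling, you can wait on pod readiness instead; a minimal sketch:
        kubectl -n cert-manager wait --for=condition=Ready pod --all --timeout=300s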
  - Install the ingress-ca certificate:
    - Locate the ingress-ca certificate at helper_files/multi-ns-support/ingress-ca-cert.yaml
    - Create the ingress-ca certificate in one of the subsystem namespaces. For example, if we choose the mgmt namespace:
        kubectl -n mgmt apply -f ingress-ca-cert.yaml
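      For orientation, ingress-ca-cert.yaml defines a cert-manager Certificate resource with isCA: true. The following is a representative sketch only; the authoritative content is the file shipped in helper_files:
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: ingress-ca
        spec:
          secretName: ingress-ca
          commonName: ingress-ca
          isCA: true
          issuerRef:
            name: selfsigning-issuer
            kind: Issuer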
  - Install common issuers in all of the namespaces:
    - Locate common-issuer.yaml in helper_files/multi-ns-support/
    - Create common-issuer.yaml in all namespaces:
        kubectl create -f common-issuer.yaml -n mgmt
        kubectl create -f common-issuer.yaml -n gtw
        kubectl create -f common-issuer.yaml -n ptl
        kubectl create -f common-issuer.yaml -n a7s
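      Equivalently, a small shell loop keeps the four commands in one place; a sketch assuming the example namespaces:
        for ns in mgmt gtw ptl a7s; do
          kubectl create -f common-issuer.yaml -n "$ns"
        done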
  - Obtain the ingress-ca secret created by cert-manager, and create the secret in all namespaces:
    Note: The instructions in this step result in having a selfsigning-issuer in every namespace. This issuer is typically referenced by an API Connect CR in the namespace with:
      ...
      microServiceSecurity: certManager
      certManagerIssuer:
        name: selfsigning-issuer
        kind: Issuer
      ...
    This issuer (and its associated root certificate) does not need to be the same in every namespace, nor does it need to be the same for two CRs in the same namespace. Each subsystem can use its own selfsigning-issuer, because it is only used for certificates that are internal to the subsystem.
    To ensure that the ingress-issuer is in sync across namespaces, it must be backed by the same ingress-ca certificate. This enables, for example, APIM to use a client certificate that matches the CA of the portal-admin ingress endpoint. The sync is achieved by copying the ingress-ca certificate into each namespace, as shown in the following steps. It does not matter whether it is the same issuer or whether it has a different name. What is important is that the ingress-ca is the same for all of the ingress-issuers that will be used by the subsystems.
    - Make sure that the ingress-ca certificate READY status is set to True:
        kubectl get certificate ingress-ca -n mgmt
        NAME         READY   SECRET       AGE
        ingress-ca   True    ingress-ca   9m24s
    - Cert-manager creates a secret called ingress-ca in the mgmt namespace, which represents the certificate we created in Step 2.c.
    - Use the following command to obtain the YAML format of the secret:
        kubectl get secret ingress-ca -n <namespace-where-ingress-ca-cert-is-created> -o yaml > ingress-ca-secret-in-cluster.yaml
    - Remove the cluster-specific metadata from the secret YAML:
        cat ingress-ca-secret-in-cluster.yaml | grep -v 'creationTimestamp' | grep -v 'namespace' | grep -v 'uid' | grep -v 'resourceVersion' > ingress-ca-secret.yaml
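      After the cleanup, the secret should reduce to roughly the following shape (a sketch; the base64 data values come from your cluster):
        apiVersion: v1
        kind: Secret
        metadata:
          name: ingress-ca
        type: kubernetes.io/tls
        data:
          ca.crt: <base64-encoded CA certificate>
          tls.crt: <base64-encoded certificate>
          tls.key: <base64-encoded key>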
    - Create the secret, using the YAML prepared in the previous step, in the rest of the subsystem namespaces. In this example, in the gtw, ptl, and a7s namespaces:
        kubectl create -f ingress-ca-secret.yaml -n gtw
        kubectl create -f ingress-ca-secret.yaml -n ptl
        kubectl create -f ingress-ca-secret.yaml -n a7s
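      To confirm that the same ingress-ca secret now exists everywhere, you can list it in each namespace; a sketch assuming the example namespaces:
        for ns in mgmt gtw ptl a7s; do
          kubectl -n "$ns" get secret ingress-ca
        done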
  - Install the management certs in the management namespace:
    - Locate management-certs.yaml in helper_files/multi-ns-support/
    - Create management-certs.yaml in the mgmt namespace:
        kubectl create -f management-certs.yaml -n mgmt
  - Install the gateway certs in the gateway namespace:
    - Locate gateway-certs.yaml in helper_files/multi-ns-support/
    - Create gateway-certs.yaml in the gtw namespace:
        kubectl create -f gateway-certs.yaml -n gtw
- Install the API Connect operator:
  - Install the ibm-apiconnect CRDs with the following command. Do not specify a namespace:
      kubectl apply -f ibm-apiconnect-crds.yaml
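    You can confirm that the CRDs registered successfully; a minimal sketch:
      kubectl get crds | grep apiconnect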
  - Create a registry secret with the credentials to be used to pull down product images, with the following command, replacing <namespace> with the desired namespace for the apiconnect operator deployment:
      kubectl -n <namespace> create secret docker-registry apic-registry-secret --docker-server=<registry_server> --docker-username=<username@example.com> --docker-password=<password> --docker-email=<username@example.com>
    - For example, replace <namespace> with operator.
    - docker-password can be your Artifactory API key.
    - -n <namespace> can be omitted if the default namespace is being used for the installation.
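    A filled-in sketch, with a hypothetical registry host and credentials, might look like this:
      kubectl -n operator create secret docker-registry apic-registry-secret \
        --docker-server=my.registry.example.com \
        --docker-username=apic-user@example.com \
        --docker-password='<your-api-key>' \
        --docker-email=apic-user@example.com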
  - Locate and open ibm-apiconnect-distributed.yaml in a text editor of choice. Then, find and replace each occurrence of $OPERATOR_NAMESPACE with <namespace>, replacing <namespace> with the desired namespace for the deployment. In this example, the namespace is operator.
  - Also in ibm-apiconnect-distributed.yaml, locate the image: keys in the containers sections of the deployment YAML, just below imagePullSecrets:. Replace the placeholder value REPLACE-DOCKER-REGISTRY of the image: keys with the docker registry host location of the apiconnect operator image (either uploaded to your own registry or pulled from a public registry). Both edits can also be scripted, as sketched below.
keys with the docker registry host location of the apiconnect operator image (either uploaded to own registry or pulled from public registry). - Install
ibm-apiconnect-distributed.yaml
with the following commandkubectl apply -f ibm-apiconnect-distributed.yaml
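    To verify that the operator came up, check its deployment in the chosen namespace; a sketch assuming the operator namespace and the deployment name ibm-apiconnect (the name may differ in your release files):
      kubectl -n operator rollout status deployment ibm-apiconnect --timeout=300s
      kubectl -n operator get po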
- Install the DataPower operator:
  Deployment of the DataPower operator is only needed if you plan to install at least one API Connect v10 gateway subsystem (whether v5 compatible or not). If you are using DataPower in a different form factor, such as an appliance, you will not have an API Connect v10 gateway subsystem, and will not need the DataPower operator.
  Note: The DataPower operator supports multiple instances of DataPower in the same cluster, separated by namespace. When deploying into multiple namespaces, ensure that any cluster-scoped resources are created with unique names for separate installations, so that they are not overwritten by subsequent installations. For example, DataPower cluster-scoped resources include ClusterRoleBindings.
  - Create a registry secret for the DataPower registry with the credentials to be used to pull down product images, with the following command, replacing <namespace> with the desired namespace for the deployment (in this case, gtw):
      kubectl -n <namespace> create secret docker-registry datapower-docker-local-cred --docker-server=<registry_server> --docker-username=<username@example.com> --docker-password=<password> --docker-email=<username@example.com>
    - For example, replace <namespace> with gtw.
    - docker-password can be your Artifactory API key.
    - -n <namespace> can be omitted if the default namespace is being used for the installation.
  - Create a DataPower admin secret, replacing <namespace> with the desired namespace for the deployment. For example, gtw. This secret will be used for $ADMIN_USER_SECRET later in the Gateway CR:
      kubectl -n <namespace> create secret generic datapower-admin-credentials --from-literal=password=admin
    -n <namespace> can be omitted if the default namespace is being used for the installation.
namespace is being used for installation. - Locate and open
in a text editor of choice. Then, find and replace each occurrence ofibm-datapower-distributed.yaml
default
with<namespace>
, replacing<namespace>
with the desired namespace for the deployment. For example,gtw
  - Install ibm-datapower-distributed.yaml with the following command:
      kubectl -n <namespace> apply -f ibm-datapower-distributed.yaml
  Note: There is a known issue on Kubernetes version 1.19.4 or higher that can cause the DataPower operator to fail to start. In this case, the DataPower operator pods can fail to schedule, and will display the status message: no nodes match pod topology spread constraints (missing required label). For example:
      0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  You can work around the issue by editing the DataPower operator deployment and re-applying it, as follows:
  - Delete the DataPower operator deployment, if it is already deployed:
      kubectl delete -f ibm-datapower-distributed.yaml -n <namespace>
  - Open ibm-datapower-distributed.yaml, and locate the topologySpreadConstraints: section. For example:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: zone
        whenUnsatisfiable: DoNotSchedule
  - Replace the values for topologyKey: and whenUnsatisfiable: with the corrected values shown in the following example:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
  - Save ibm-datapower-distributed.yaml and deploy the file to the cluster:
      kubectl apply -f ibm-datapower-distributed.yaml -n <namespace>
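    Whether or not the workaround was needed, you can verify that the operator pod is running; a sketch assuming the gtw namespace:
      kubectl -n gtw get po
    Look for a DataPower operator pod in Running status.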
- Next, install the subsystems. Continue with Installing the API Connect subsystems.
  Note: Unless otherwise stated, when performing the individual subsystem installations, the <namespace> value of example kubectl commands for a given subsystem should be set to the namespace of the particular subsystem component they are acting on.