Deploying the API Analytics subsystem on Linux® x86_64 (CLI)
Deploy the individual API Analytics subsystem. This procedure uses the CLI.
API Connect subsystems can be installed by creating individual subsystem custom resources (CRs), instead of using an API Connect cluster.
There are four subsystems that can be installed:
API Manager. For more information, see Deploying the API Manager subsystem on Linux x86_64 (CLI)
API Analytics (discussed in this procedure)
API Portal. For more information, see Deploying the API Portal subsystem on Linux x86_64 (CLI)
API Gateway. For more information, see Deploying the API Gateway subsystem on Linux x86_64 (CLI)
Deploying API Analytics involves three main tasks:
Installing the certificate manager in the subsystem namespace
Setting up the certificates
Deploying API Analytics

Installing the certificate manager in the subsystem namespace
Log in to your cluster with your OpenShift user credentials:
oc login
If you installed the operators in All namespaces on the cluster mode, you must deploy the instances in a project other than openshift-operators. If needed, create a new project in which to create the OperandRequest object:
oc new-project <project_name>
For example:
oc new-project integration
Create a file called operand-request.yaml and add the following content:
apiVersion: operator.ibm.com/v1alpha1
kind: OperandRequest
metadata:
  name: ibm-apiconnect-cert-manager
spec:
  requests:
  - operands:
    - name: ibm-cert-manager-operator
      registry: common-service
      registryNamespace: ibm-common-services
Create the resource:
oc apply -f operand-request.yaml
Setting up the certificates
Change to the namespace where you want to install the subsystem:
oc project <namespace>
If you have installed the API Manager subsystem in another namespace, extract the API Manager ingress-ca certificate:
oc -n <api manager namespace> get secret ingress-ca -o yaml > ingress-ca.yaml
Edit the ingress-ca.yaml file and remove the following properties:
metadata.creationTimestamp
metadata.namespace
metadata.resourceVersion
metadata.uid
metadata.selfLink
Keep this file. You will need to apply it in the namespaces of your other subsystems so that they can communicate with the API Manager subsystem.
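After those properties are removed, the remaining file is an ordinary TLS secret. As a rough sketch (the base64 data values are placeholders, and any labels present in your export can be kept as-is), the edited file should resemble:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: ingress-ca
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>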
Apply the file to the API Analytics namespace:
oc apply -f ingress-ca.yaml
Create a file that is called api-analytics-certs.yaml and paste in the following contents:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigning-issuer
  labels: {
    app.kubernetes.io/instance: "api-manager",
    app.kubernetes.io/managed-by: "ibm-apiconnect",
    app.kubernetes.io/name: "selfsigning-issuer"
  }
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ingress-issuer
  labels: {
    app.kubernetes.io/instance: "api-manager",
    app.kubernetes.io/managed-by: "ibm-apiconnect",
    app.kubernetes.io/name: "ingress-issuer"
  }
spec:
  ca:
    secretName: ingress-ca
Apply the file to your namespace:
oc apply -f api-analytics-certs.yaml
Verify that the installation succeeded:
oc get issuers
If all issuers were created successfully, the output is similar to the following:
NAME                 READY
ingress-issuer       True
selfsigning-issuer   True
Deploying API Analytics
Create an AnalyticsCluster YAML file. For example, you can create a file that is called api-analytics.yaml with the following example configuration. All fields in the example are required. Update the values as applicable for your configuration:
apiVersion: analytics.apiconnect.ibm.com/v1beta1
kind: AnalyticsCluster
metadata:
  name: api-analytics
  labels: {
    app.kubernetes.io/instance: "api-analytics",
    app.kubernetes.io/managed-by: "ibm-apiconnect",
    app.kubernetes.io/name: "api-analytics"
  }
  annotations: {
    apiconnect-operator/cp4i: "false"
  }
spec:
  version: 10.0.6.0
  license:
    accept: false
    use: production
    license: L-KZXM-S7SNCU
  profile: n1xc2.m16
  microServiceSecurity: certManager
  certManagerIssuer:
    name: selfsigning-issuer
    kind: Issuer
  ingestion:
    endpoint:
      annotations:
        cert-manager.io/issuer: ingress-issuer
      hosts:
      - name: ai.$STACK_HOST
        secretName: analytics-ai-endpoint
    clientSubjectDN: CN=analytics-ingestion-client,O=cert-manager
  storage:
    type: shared
    shared:
      volumeClaimTemplate:
        storageClassName: <storage-class>
        volumeSize: 50Gi
Change the value of spec.license.accept to true if you accept the license agreement. For more information, see Licensing.
Do not remove the apiconnect-operator/cp4i: "false" annotation. This annotation ensures that API Analytics does not attempt to integrate with the Platform UI; this integration is not supported.
For spec.license.use, enter production or nonproduction to match the type of license that you purchased.
For spec.license.license, enter the license ID for the API Connect program that you purchased. To get the available license IDs, see API Connect licenses in the API Connect documentation.
For spec.profile, enter the type of installation profile that you want. For more information, see API Connect deployment profiles for OpenShift and Cloud Pak for Integration.
For spec.version, enter the API Connect product version or channel to be installed.
For <storage-class>, specify the RWO block storage class to use for persistent storage. For more information about selecting storage classes for Cloud Pak for Integration, see Storage considerations. To review API Connect storage support, see the "Supported storage types" section in Deployment requirements, in the API Connect documentation.
Replace $STACK_HOST with the desired ingress subdomain for the API Connect stack. This variable is used when specifying endpoints. Domain names that are used for endpoints cannot contain the underscore "_" character. The host on OpenShift is typically prefixed with apps, such as apps.subnet.example.com.
If you want to enable additional analytics features, you can set them as described in Analytics subsystem CR configured settings.
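For illustration only, assuming a hypothetical ingress subdomain of apps.subnet.example.com, the ingestion section of the CR would read:
ingestion:
  endpoint:
    annotations:
      cert-manager.io/issuer: ingress-issuer
    hosts:
    - name: ai.apps.subnet.example.com
      secretName: analytics-ai-endpoint
  clientSubjectDN: CN=analytics-ingestion-client,O=cert-manager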
Optional: Three replica deployments only: Enable dedicated storage. If you want to use dedicated storage, update the spec.storage section of the YAML file as follows:
storage:
  type: dedicated
  shared:
    volumeClaimTemplate:
      storageClassName: <storage-class>
      volumeSize: 50Gi
  master:
    volumeClaimTemplate:
      storageClassName: <storage-class>
For more information about dedicated storage, see Dedicated storage instead of shared storage.
Apply the YAML file to the cluster:
oc apply -f api-analytics.yaml
Check the status of API Analytics by running the following command in the project (namespace) where it was deployed:
oc get AnalyticsCluster
The installation is complete when the READY status changes to True, and the SUMMARY reports that all services are online:
NAME            READY   SUMMARY   VERSION     RECONCILED VERSION   AGE
api-analytics   True    6/6       <version>   <version-build>      7m17s
What's next?
Install other subsystems as needed.
When you have completed the installation of all required API Connect subsystems, you can proceed to defining your API Connect configuration by using the API Connect Cloud Manager; refer to the Cloud Manager configuration checklist.