Installing the Analytics subsystem in a shared namespace

Install the analytics subsystem by creating and applying the analytics_cr.yaml file.

Before you begin

Complete the following tasks to prepare for deploying API Connect:

  1. Preparing for installation
  2. Installing operators
  3. Setting up a certificate issuer

About this task

Use the OpenShift CLI to edit the custom resource template for the analytics subsystem, apply the resource, and verify that the pods are up and running.

Procedure

  1. Create a file called analytics_cr.yaml and paste in the following contents:
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    # 
    #     http://www.apache.org/licenses/LICENSE-2.0
    # 
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    
    apiVersion: analytics.apiconnect.ibm.com/v1beta1
    kind: AnalyticsCluster
    metadata:
      name: analytics
      labels: {
        app.kubernetes.io/instance: "analytics",
        app.kubernetes.io/managed-by: "ibm-apiconnect",
        app.kubernetes.io/name: "analytics"
      }
    spec:
      version: $APP_PRODUCT_VERSION
      license:
        accept: $LICENSE_ACCEPTANCE
        use: $LICENSE_USE
        license: '$LICENSE_ID'
      profile: $PROFILE
      microServiceSecurity: certManager
      certManagerIssuer:
        name: selfsigning-issuer
        kind: Issuer
      ingestion:
        endpoint:
          annotations:
            cert-manager.io/issuer: ingress-issuer
          hosts: 
          - name: ai.$STACK_HOST
            secretName: analytics-ai-endpoint
        clientSubjectDN: CN=analytics-ingestion-client,O=cert-manager
      storage:
        type: shared # change to 'dedicated' if you want separate master and storage pods. Three-replica deployments only.
        shared:
          volumeClaimTemplate:
            storageClassName: $STORAGE_CLASS
            volumeSize: $DATA_VOLUME_SIZE 
        # master: # uncomment this section if you set storage.type = dedicated.
        #   volumeClaimTemplate:
        #     storageClassName: $STORAGE_CLASS
       
  2. Edit the YAML file and set the variables:
    $APP_PRODUCT_VERSION
    API Connect application version for the subsystems.
    version: <version_number>

    Example version number: 10.0.5.7

    $PROFILE

    Specify your analytics subsystem profile, where n is the number of replicas, c is the number of cores, and m is the minimum memory allocation in GB. For more information on profiles, see Planning your deployment topology.

    $STACK_HOST
    The desired ingress subdomain for the API Connect stack, used when specifying endpoints. Domain names that are used for endpoints cannot contain the underscore character (_). You can customize the subdomain or the complete hostname:
    • Subdomain customization only

      Accept the prefixes predefined for the ingress hostnames to use and replace all instances of $STACK_HOST with the desired ingress subdomain for the API Connect stack. For example, if your host is myhost.subnet.example.com:

      
        ingestion:
          endpoint:
            < ... >
            hosts:
            - name: ai.myhost.subnet.example.com
              secretName: analytics-ai-endpoint
            < ... >
    • Complete hostname customization

      Change both the predefined prefixes and the $STACK_HOST subdomain to match your desired hostnames.

      For example, you can replace ai.$STACK_HOST with my.analytics.ingestion.myhost.subnet.example.com, where my.analytics.ingestion replaces the ai prefix, and myhost.subnet.example.com replaces $STACK_HOST.

      ingestion:
        endpoint:
          < ... >
          hosts:
          - name: my.analytics.ingestion.myhost.subnet.example.com
            secretName: analytics-ai-endpoint
          < ... >
        
    $STORAGE_CLASS
    The Kubernetes storage class to be used for persistent volume claims (for more information, see Planning the analytics profile, storage class, and storage type). Find the available storage classes in the target cluster by running the following command: oc get sc. Example:
    storage:
      type: shared
      shared:
        volumeClaimTemplate:
          storageClassName: ceph-block
    $DATA_VOLUME_SIZE

    Size of storage allocated for data. To estimate the storage space you require, see Estimating internal storage space. If $DATA_VOLUME_SIZE is not specified, it defaults to 50Gi. See Reference: Fields in the Analytics CR. Example:

    storage:
      type: shared
      shared:
        volumeClaimTemplate:
          storageClassName: ceph-block
          volumeSize: 100Gi
    $LICENSE_ACCEPTANCE
    Set accept to true. You must accept the license to successfully deploy API Connect.
    $LICENSE_USE
    Set use to either production or nonproduction to match the license that you purchased.
    $LICENSE_ID
    Set license: to the license ID for the version of API Connect that you purchased. See API Connect licenses.
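    The substitutions in this step can also be scripted rather than edited by hand. The following is a minimal sketch using sed; the values shown are hypothetical examples, and it is demonstrated on a two-line fragment of the template (replace the printf with cat analytics_cr.yaml to resolve the full file):

```shell
# Hypothetical example values -- replace with the values for your environment.
APP_PRODUCT_VERSION=10.0.5.7
STACK_HOST=myhost.subnet.example.com
STORAGE_CLASS=ceph-block
DATA_VOLUME_SIZE=100Gi

# The same -e pattern extends to $LICENSE_ACCEPTANCE, $LICENSE_USE,
# $LICENSE_ID, and $PROFILE. Demonstrated here on a fragment of the
# template; point the same sed command at analytics_cr.yaml instead
# to resolve the full file.
printf 'version: $APP_PRODUCT_VERSION\n          - name: ai.$STACK_HOST\n' |
  sed \
    -e "s|\$APP_PRODUCT_VERSION|$APP_PRODUCT_VERSION|g" \
    -e "s|\$STACK_HOST|$STACK_HOST|g" \
    -e "s|\$STORAGE_CLASS|$STORAGE_CLASS|g" \
    -e "s|\$DATA_VOLUME_SIZE|$DATA_VOLUME_SIZE|g"
```

    Review the resolved file before applying it; in particular, confirm that no unreplaced $ placeholders remain.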
  3. Optional: If you are installing Cloud Pak for Integration 2022.2.1 or 2022.4.1 and don't want to have Zen endpoints, add the following annotation to the metadata section of the CR:
    metadata:
      annotations:
        apiconnect-operator/cp4i: "false"
  4. Optional: Three-replica deployments only: Enable dedicated storage.
    If you want to use dedicated storage, update the spec.storage section of the YAML file as follows:
      storage:
        type: dedicated
        shared:
          volumeClaimTemplate:
            storageClassName: $STORAGE_CLASS
            volumeSize: $DATA_VOLUME_SIZE
        master:
          volumeClaimTemplate:
            storageClassName: $STORAGE_CLASS
    For more information about dedicated storage, see Dedicated storage instead of shared storage.
  5. Optional: If you want to disable mTLS for communications between the management and analytics subsystems, and between the gateway and analytics subsystems, and enable JWT instead, add the mtlsValidateClient and jwksUrl properties and set them as follows:
    spec:
      ...
      mtlsValidateClient: false
      jwksUrl: <JWKS URL>
    where <JWKS URL> is the URL of the JWKS endpoint hosted on the management subsystem. To find the jwksUrl, describe the management CR and check the status: section:
    oc describe mgmt -n <namespace>
    ...
    status:
      - name: jwksUrl
        secretName: api-endpoint
        type: API
        uri: https://api.apic.acme.com/api/cloud/oauth2/certs
    For more information on JWT security, see Enable JWT security instead of mTLS.
    Note: It is not possible to use JWT on the V5 compatible gateway to analytics message flow.
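    The jwksUrl can also be extracted without reading through the whole describe output. A minimal sketch that prints the uri following the jwksUrl entry, shown here against a saved copy of the status block above (against a live cluster, pipe the output of oc describe mgmt -n <namespace> into the same awk command):

```shell
# Saved sample of the status block shown above; with a live cluster, pipe
# the output of 'oc describe mgmt -n <namespace>' in instead.
cat > mgmt-status.txt <<'EOF'
status:
  - name: jwksUrl
    secretName: api-endpoint
    type: API
    uri: https://api.apic.acme.com/api/cloud/oauth2/certs
EOF

# Print the uri that follows the 'name: jwksUrl' entry.
jwks_url=$(awk '/name: jwksUrl/ {found=1} found && /uri:/ {print $2; exit}' mgmt-status.txt)
echo "$jwks_url"
```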
  6. If you want to enable additional analytics features, set them in the analytics_cr.yaml file as described in Analytics subsystem CR configured settings.
  7. Install the analytics subsystem by applying the modified CR with the following command:
    oc apply -f analytics_cr.yaml -n <namespace>
  8. Verify that the analytics subsystem is fully installed:
    oc get AnalyticsCluster -n <namespace>

    The installation is complete when READY shows all pods running (n/n), and the STATUS reports Running. Example:

    NAME        READY   STATUS    VERSION    RECONCILED VERSION   AGE
    analytics   n/n     Running   10.0.5.3   10.0.5.3-1281        86m

    It is not necessary to wait for analytics installation to complete before you move on to the next subsystem installation.
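    If the installation is scripted, you can block until the subsystem reports Running instead of re-running the get command by hand. A minimal polling helper, sketched under the assumption that the STATUS column shown above is exposed at .status.phase (verify the field path on your cluster with oc get AnalyticsCluster analytics -o yaml):

```shell
# wait_for: re-run a command until its output equals the expected value,
# or give up after max_tries attempts spaced interval seconds apart.
wait_for() {
  expected=$1; max_tries=$2; interval=$3; shift 3
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    [ "$("$@" 2>/dev/null || true)" = "$expected" ] && return 0
    i=$((i + 1))
    sleep "$interval"
  done
  return 1
}

# Against a live cluster (the .status.phase field path is an assumption
# inferred from the STATUS column; check it with 'oc get ... -o yaml'):
#   wait_for Running 120 10 \
#     oc get AnalyticsCluster analytics -n <namespace> \
#       -o jsonpath='{.status.phase}'
```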

What to do next

If you are creating a new deployment of API Connect, install other subsystems as needed.

When you have completed the installation of all required API Connect subsystems, you can proceed to defining your API Connect configuration by using the API Connect Cloud Manager; refer to the Cloud Manager configuration checklist.