App Connect Integration Server reference

Introduction

The App Connect Integration Server API enables you to create integration servers, which run integrations that were created in App Connect Designer or IBM® App Connect Enterprise Toolkit.

Prerequisites

Red Hat OpenShift SecurityContextConstraints requirements

IBM App Connect runs under the default restricted SecurityContextConstraints.

Resources required

Minimum recommended requirements:

  • Toolkit integration for compiled BAR files:
    • CPU: 0.1 Cores
    • Memory: 0.35 GB
  • Toolkit integration for uncompiled BAR files:
    • CPU: 0.3 Cores
    • Memory: 0.35 GB
  • Designer-only integration or hybrid integration:
    • CPU: 1.7 Cores
    • Memory: 1.77 GB

For information about how to configure these values, see Custom resource values.
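
For example, the minimum recommended values for a Toolkit integration that runs compiled BAR files could be set in the integration server custom resource as follows (a minimal sketch; the resources schema is described under Custom resource values, and the memory value is an approximation of 0.35 GB):

spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: 100m       # 0.1 cores
            memory: 350Mi
          limits:
            cpu: 100m
            memory: 350Mi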

If you are building and running your own containers, you can choose to allocate less than 0.1 Cores for Toolkit integrations if necessary. However, this decrease in CPU for the integration server container might impact the startup times and performance of your flow. If you begin to experience issues that are related to performance, or with starting and running your integration server, check whether the problem persists by first increasing the CPU to at least 0.1 Cores before contacting IBM support.

If you want to increase the download speed of an integration server's trace log files, you can choose to allocate more CPU. For information about enabling, disabling, downloading, and clearing trace, see Trace reference and Enabling and managing trace for a deployed integration server.

Mechanisms for providing BAR files to an integration server

Integration servers and integration runtimes require two types of resources: BAR files that contain development resources, and configuration files (or objects) for setting up the integration servers or integration runtimes. When you create an integration server or integration runtime, you are required to specify one or more BAR files that contain the development resources of the App Connect Designer or IBM App Connect Enterprise Toolkit integrations that you want to deploy.

A number of mechanisms are available for providing these BAR files to integration servers and integration runtimes. Choose the mechanism that meets your requirements.

Each mechanism is described below, together with the number of BAR files that can be deployed per integration server or integration runtime.

Content server

When you use the App Connect Dashboard to upload or import BAR files for deployment to integration servers or integration runtimes, the BAR files are stored in a content server that is associated with the App Connect Dashboard instance. The content server is created as a container in the App Connect Dashboard deployment and can either store uploaded (or imported) BAR files in a volume in the container’s file system, or store them within a bucket in a simple storage service that provides object storage through a web interface.

The location of a BAR file in the content server is generated as a BAR URL when a BAR file is uploaded or imported to the Dashboard. This location is specified by using the Bar URL field or spec.barURL parameter.

While creating an integration server or integration runtime, you can choose only one BAR file to deploy from the content server, and you must reference its BAR URL. The integration server or runtime then uses this BAR URL to download the BAR file on startup, and processes the applications appropriately.

If you are creating an integration server or integration runtime from the Dashboard, and use the Integrations view to specify a single BAR file to deploy, the location of this file in the content server will be automatically set in the Bar URL field or spec.barURL parameter in the Properties (or Server) view. For more information, see Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources.

If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, and want to deploy a BAR file from the content server, you must obtain the BAR file location from the BAR files page (which presents a view of the content server) in the Dashboard. You can do so by using Display BAR URL in the BAR file's options menu to view and copy the supplied URL. You can then paste this value in spec.barURL in the integration server or integration runtime custom resource (CR). For more information, see Integration Server reference: Creating an instance and Integration Runtime reference: Creating an instance.

The location of a BAR file in the content server is typically generated in the following format:

  • Integration server:

    https://dashboardName-dash:3443/path?token

  • Integration runtime:

    https://dashboardName-dash.namespaceName:3443/path?token

Where:
  • dashboardName is the Dashboard name (that is, the metadata.name value).
  • path is a generated (and static) path.
  • token is a generated (and static) token. (This token is also stored in the content server.)
  • namespaceName is the namespace (or project) where the Dashboard is deployed.

For example:

https://mydashboardname-dash:3443/v1/directories/CustomerDbV1?0a892497-ea3b-4961-aefb-bc0c36479678

https://mydashboardname-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?9b7aa053-656d-4a30-a31c-123a45f8ebfd
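
For example, after you copy a BAR URL from the Dashboard, you can reference it in the custom resource as follows (a minimal sketch that reuses the first example URL above):

spec:
  barURL: 'https://mydashboardname-dash:3443/v1/directories/CustomerDbV1?0a892497-ea3b-4961-aefb-bc0c36479678'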

BAR files per integration server or integration runtime: 1

External repository

(Applicable only if spec.version resolves to 12.0.1.0-r1 or later)

While creating an integration server or integration runtime, you can choose to deploy multiple BAR files, which are stored in an external HTTPS repository system, to the integration server or integration runtime. You might find this option useful if you have set up continuous integration and continuous delivery (CI/CD) pipelines to automate and manage your DevOps processes, and are building and storing BAR files in a repository system such as JFrog Artifactory.

This option enables you to directly reference one or more BAR files in your integration server or integration runtime CR without the need to manually upload or import the BAR files to the content server in the App Connect Dashboard or build a custom image. You will need to provide basic (or alternative) authentication credentials for connecting to the external endpoint where the BAR files are stored, and can do so by creating a configuration object of type BarAuth. When you create your integration server or integration runtime, you must then reference this configuration object.

If you are creating an integration server or integration runtime from the Dashboard, you can use the Configuration view to create (and select) a configuration object of type BarAuth that defines the required credentials. You can then use the Properties (or Server) view to specify the endpoint locations of one or more BAR files in the Bar URL field or as the spec.barURL value. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set the following parameter:
  • Integration server:

    Ensure that spec.createDashboardUsers is set to true.

  • Integration runtime:

    Ensure that spec.dashboardUsers.bypassGenerate is set to false.

For more information, see BarAuth type, Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources.
If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, you must create a configuration object of type BarAuth that defines the required credentials, as described in Configuration reference and BarAuth type. When you create the integration server or integration runtime CR, you must specify the name of the configuration object in spec.configurations and then specify the endpoint locations of one or more BAR files in spec.barURL. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set the following parameter:
  • Integration server:

    Ensure that spec.createDashboardUsers is set to true.

  • Integration runtime:

    Ensure that spec.dashboardUsers.bypassGenerate is set to false.

For more information, see Integration Server reference: Creating an instance and Integration Runtime reference: Creating an instance.
You can specify multiple BAR files as follows:
  • Integration server:

    Specify the URLs in the Bar URL field or in spec.barURL by using a comma-separated list; for example:

    https://artifactory.com/myrepo/getHostAPI.bar,https://artifactory.com/myrepo/CustomerDbV1.bar

  • Integration runtime:
    Specify each URL in a separate Bar URL field by using the Add button, or specify the URLs in spec.barURL as shown in the following example:
    spec:
      barURL:
        - 'https://artifactory.com/myrepo/getHostAPI.bar'
        - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
Tip: If you are using GitHub as an external repository, you must specify the raw URL in the Bar URL field or in spec.barURL. For example:
https://raw.github.ibm.com/somedir/main/bars/getHostAPI.bar
https://github.com/johndoe/somedir/raw/main/getHostAPI.bar
https://raw.githubusercontent.com/myusername/myrepo/main/My%20API.bar

Some considerations apply if deploying multiple BAR files:

  • Ensure that all of the applications can coexist (with no names that clash).
  • Ensure that you provide all of the configurations that are needed for all of the BAR files.
  • All of the BAR files must be accessible by using the single set of credentials that are specified in the configuration object of type BarAuth.
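
As an illustration, the following integration server spec fragment deploys the two example BAR files shown earlier from an external repository (a sketch; the configuration name artifactory-credentials is hypothetical and must match an existing configuration object of type BarAuth):

spec:
  createDashboardUsers: true
  configurations:
    - artifactory-credentials
  barURL: 'https://artifactory.com/myrepo/getHostAPI.bar,https://artifactory.com/myrepo/CustomerDbV1.bar'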

BAR files per integration server or integration runtime: Multiple

Custom image

You can build a custom server runtime image that contains all the configuration for the integration server or integration runtime, including all the BAR files or applications that are required, and then use this image to deploy an integration server or integration runtime.

When you create the integration server or integration runtime CR, you must reference this image by using the following parameter:
  • Integration server:

    spec.pod.containers.runtime.image

  • Integration runtime:

    spec.template.spec.containers[].image

For example:

image-registry.openshift-image-registry.svc:5000/imageName

This image must be built from the version that is specified as the spec.version value in the CR. Channels are not supported when custom images are used.
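
For example, an integration server CR that deploys a custom image might include the following settings (a minimal sketch; imageName is a placeholder for your image, and the version shown is illustrative and must match the version that the image was built from):

spec:
  version: '12.0.1.0-r1'
  pod:
    containers:
      runtime:
        image: image-registry.openshift-image-registry.svc:5000/imageName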

BAR files per integration server or integration runtime: Multiple

Creating an instance

You can create an integration server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.

The supplied App Connect Enterprise base image includes an IBM MQ client for connecting to remote queue managers that are within the same Red Hat OpenShift cluster as your deployed integration servers, or in an external system such as an appliance.

Before you begin

  • Ensure that the Prerequisites are met.
  • Prepare the BAR files that you want to deploy to the integration server. For more information, see Mechanisms for providing BAR files to an integration server.
  • Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.
    Namespace restriction for an instance, server, configuration, or trace:

    The namespace in which you create an instance or object must be no more than 40 characters in length.

Creating an instance from the Red Hat OpenShift web console

When you create an integration server, you can define which configurations you want to apply to the integration server.

  • If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the Red Hat OpenShift web console to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift web console.
  • If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.

To create an integration server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Server tab. Any previously created integration servers are displayed in a table.
  6. Click Create IntegrationServer. Switch to the YAML view if necessary for a finer level of control over your installation settings. The minimum custom resource (CR) definition that is required to create an integration server is displayed.

    From the Details tab on the Operator details page, you can also locate the Integration Server tile and click Create instance to specify installation settings for the integration server.

  7. Update the content of the YAML editor with the parameters and values that you require for this integration server.
    • To view the full set of parameters and values available, see Custom resource values.
    • For licensing information, see Licensing reference for IBM App Connect Operator.
    • Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify a comma-separated list of one or more BAR files in an external endpoint. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration server.
    • You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
      spec:
        configurations:
          - odbc-ini-data

      or

      spec:
        configurations:
          - odbc-ini-data
          - accountsdata

      or

      spec:
        configurations: ["odbc-ini-data", "accountsdata"]

      For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.

      Note:

      If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:

      1. From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
        oc get switchserver switchName
      2. Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
      3. Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
        configurations:
          - mark-101-switch-agentx

      A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.

  8. To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form.
  9. Click Create to start the deployment. An entry for the integration server is shown in the IntegrationServers table, initially with a Pending status.
  10. Click the integration server name to view information about its definition and current status.

    On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator.

    When the deployment is complete, the status is shown as Ready in the IntegrationServers table.

Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI

When you create an integration server, you can define which configurations you want to apply to the integration server.

  • If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the CLI to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift CLI.
  • If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.

To create an integration server by using the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From your local computer, create a YAML file that contains the configuration for the integration server that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the integration server; this should be the same namespace where the other App Connect instances or resources are created.
    Example 1 (Red Hat OpenShift only):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationServer
    metadata:
      name: customerapi
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-KZVS-2G3FW4
        use: CloudPakForIntegrationNonProduction
      pod:
        containers:
          runtime:
            resources:
              limits:
                cpu: 300m
                memory: 368Mi
              requests:
                cpu: 300m
                memory: 368Mi
      adminServerSecure: true
      enableMetrics: true
      createDashboardUsers: true
      barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
      router:
        timeout: 120s
      designerFlowsOperationMode: local
      designerFlowsType: event-driven-or-api-flows
      service:
        endpointType: http
      version: '12.0'
      replicas: 3
      logFormat: basic
      configurations: ["my-odbc", "my-setdbp", "my-accounts"]
    
    Example 2 (Red Hat OpenShift or Kubernetes):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationServer
    metadata:
      name: customerapi
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-KZVS-2G3FW4
        use: AppConnectEnterpriseProduction
      pod:
        containers:
          runtime:
            resources:
              limits:
                cpu: 300m
                memory: 368Mi
              requests:
                cpu: 300m
                memory: 368Mi
      adminServerSecure: true
      enableMetrics: true
      createDashboardUsers: true
      barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
      router:
        timeout: 120s
      designerFlowsOperationMode: local
      designerFlowsType: event-driven-or-api-flows
      service:
        endpointType: http
      version: '12.0'
      replicas: 3
      logFormat: basic
      configurations: ["my-odbc", "my-setdbp", "my-accounts"]
    
    • To view the full set of parameters and values that you can specify, see Custom resource values.
    • For licensing information, see Licensing reference for IBM App Connect Operator.
    • Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify a comma-separated list of one or more BAR files in an external endpoint. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration server.
    • You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
      spec:
        configurations:
          - odbc-ini-data

      or

      spec:
        configurations:
          - odbc-ini-data
          - accountsdata

      or

      spec:
        configurations: ["odbc-ini-data", "accountsdata"]

      For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.

      Note:

      If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:

      1. From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
        oc get switchserver switchName
      2. Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
      3. Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
        configurations:
          - mark-101-switch-agentx

      A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.

    You can also choose to define the configurations that you want to apply to the integration server within the same YAML file that contains the integration server configuration.

    If preferred, you can define multiple configurations and integration servers within the same YAML file. Each definition can be separated with three hyphens (---) as shown in the following example. The configurations and integration servers will be created independently, but any configurations that you specify for an integration server will be applied during deployment. (In the following example, settings are defined for a new configuration and an integration server. The integration server's spec.configurations setting references the new configuration and an existing configuration that should be applied during deployment.)

    apiVersion: appconnect.ibm.com/v1beta1
    kind: Configuration
    metadata:
      name: setdbp-conf
      namespace: mynamespace
    spec:
      data: ABCDefghIJLOMNehorewirpewpTEV843BCDefghIJLOMNorewirIJLOMNeh842lkalkkrmwo4tkjlfgBCDefghIJLOMNehhIJLOM
      type: setdbparms
    ---
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationServer
    metadata:
      name: customerapi
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-KZVS-2G3FW4
        use: CloudPakForIntegrationNonProduction
      pod:
        containers:
          runtime:
            resources:
              limits:
                cpu: 300m
                memory: 368Mi
              requests:
                cpu: 300m
                memory: 368Mi
      adminServerSecure: true
      enableMetrics: true
      createDashboardUsers: true
      router:
        timeout: 120s
      barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
      designerFlowsOperationMode: local
      designerFlowsType: event-driven-or-api-flows
      service:
        endpointType: http
      version: '12.0'
      logFormat: json
      replicas: 3
      configurations: ["setdbp-conf", "my-accounts"]
    
  2. Save this file with a .yaml extension; for example, customerapi_cr.yaml.
  3. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  4. Run the following command to create the integration server and apply any defined configurations. (Use the name of the .yaml file that you created.)
    oc apply -f customerapi_cr.yaml
  5. Run the following command to check the status of the integration server and verify that it is running:
    oc get integrationservers -n namespace

    You should also be able to view this integration server in your App Connect Dashboard instance.

    Note: If you are using a Kubernetes environment, ensure that you create an ingress definition after you create this instance, to make its internal service publicly available. For more information, see Creating ingress definitions for external access to your IBM App Connect instances.

Updating the custom resource settings for an instance

If you want to change the settings of an existing integration server, you can edit its custom resource settings by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. For example, you might want to adjust CPU or memory requests or limits, or set custom environment variables for use within the containers in the deployment.

Restriction: You cannot update standard settings such as the resource type (kind), the name and namespace (metadata.name and metadata.namespace), some system-generated settings, or settings such as the storage type of certain components. An error message is displayed when you try to save.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).


Updating an instance from the Red Hat OpenShift web console

To update an integration server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Server tab.
  6. Locate and click the name of the integration server that you want to update.
  7. Click the YAML tab.
  8. Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  9. Click Save to save your changes.

Updating an instance from the Red Hat OpenShift CLI or Kubernetes CLI

To update an integration server from the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the integration server is deployed, run the oc edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
    oc edit integrationserver instanceName

    The integration server CR automatically opens in the default text editor for your operating system.

  3. Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  4. Save the YAML definition and close the text editor to apply the changes.
Tip:

If preferred, you can also use the oc patch command to apply a patch with some bash shell features, or use oc apply with the appropriate YAML settings.

For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch as follows to update the settings for an instance:

oc patch integrationserver instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
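
For example, updatesettings.yaml might contain only the settings that you want to change, such as new CPU values for the runtime container (a minimal sketch; adjust the values to suit your workload):

spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: 500m
          limits:
            cpu: 500m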

Deleting an instance

If no longer required, you can delete an integration server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Deleting an instance from the Red Hat OpenShift web console

To delete an integration server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Server tab.
  6. Locate the instance that you want to delete.
  7. Click the options icon to open the options menu, and then click the Delete option.
  8. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift CLI or Kubernetes CLI

To delete an integration server by using the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the integration server instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
    oc delete integrationserver instanceName

Custom resource values

The following table lists the configurable parameters and default values for the custom resource.

In the parameter names, the notation [] depicts an array. For example, the following notation indicates that an array of custom ports can be specified (for a service): spec.service.ports[].fieldName. When used together with spec.service.type, you can specify multiple port definitions as shown in the following example:
spec:
  service:
    ports:
      - name: config-abc
        nodePort: 32000
        port: 9910
        protocol: TCP
        targetPort: 9920
      - name: config-xyz
        nodePort: 31500
        port: 9376
        protocol: SCTP
        targetPort: 9999
    type: NodePort
The entries that follow list each parameter, its description, and its default value (if any).

apiVersion

The API version that identifies which schema is used for this integration server.

appconnect.ibm.com/v1beta1

kind

The resource type.

IntegrationServer

metadata.name

A unique short name by which the integration server can be identified.

metadata.namespace

The namespace (project) in which the integration server is installed.

The namespace in which you create an instance or object must be no more than 40 characters in length.

spec.adminServerSecure

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

An indication of whether to enable TLS on the admin server port for use by the integration server administration REST API and for secure communication between the App Connect Dashboard and the integration server. The administration REST API can be used to create or report security credentials for an integration server or integration runtime.

Valid values are true and false.

When set to true (the default), HTTPS interactions are enabled between the Dashboard and integration server by using self-signed TLS certificates that are provided by a configuration object of type REST Admin SSL files (adminssl). This configuration object is automatically created from a ZIP archive, which contains a set of PEM files named ca.crt.pem, tls.crt.pem, and tls.key.pem. A secret is also auto generated to store the Base64-encoded content of this ZIP file. When you deploy the integration server, the configuration name is stored in spec.configurations as integrationServerName-is-adminssl, where integrationServerName is the metadata.name value for the integration server. For more information about this configuration object, see REST Admin SSL files type.

If set to false, the admin server port uses HTTP.

true

spec.affinity

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.)

Custom settings are supported only for nodeAffinity. If you provide custom settings for nodeAntiAffinity, podAffinity, or podAntiAffinity, they will be ignored.

For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation.
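
For example, a custom spec.affinity.nodeAffinity setting that restricts scheduling to amd64 nodes might look like the following (a sketch that follows the standard Kubernetes nodeAffinity schema; the node label and value are illustrative):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64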

spec.annotations

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. For example:

spec:
  annotations:
    key1: value1
    key2: value2

For example, you can add a spec.annotations.restart value to trigger a rolling restart of your integration server pods, as described in Restarting integration server or integration runtime pods.

The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value.
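
For example, you might add a restart annotation as follows (a sketch; the value itself is arbitrary and is typically changed, for example to a new timestamp, each time you want to trigger a rolling restart, as described in Restarting integration server or integration runtime pods):

spec:
  annotations:
    restart: '2024-05-01T09:00:00Z'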

spec.barURL

Identifies the location of one or more BAR files that can be deployed to the integration server. Can be either of these values:

  • The URL of the location where a (single) BAR file is stored in the content server of the App Connect Dashboard; for example:

    https://mydashboardname-dash:3443/v1/directories/CustomerDatabaseV1?b1234dca-5678-4eff-91d3-4d9876543211

    This URL is generated when a BAR file is uploaded to the App Connect Dashboard while deploying an integration server or imported from the BAR files page. If you want to use a previously uploaded (or imported) BAR file to deploy an integration server, you can display the URL in the Dashboard as described in Managing BAR files, and then set this URL as the value of spec.barURL.

  • Applicable only if spec.version resolves to 12.0.1.0-r1 or later: A comma-separated list of one or more co-related BAR files in an external repository such as Artifactory. Specify the URL to each file including the file name; for example:

    https://artifactory.com/myrepo/getHostnameAPI.bar,https://artifactory.com/myrepo/CustomerDatabaseV1.bar

    Tip: If you are using GitHub as an external repository, you must specify the raw URL in spec.barURL.

    The BAR files that you specify must all be accessible by using the same authentication credentials; for example, basic authentication or no authentication. You must define these credentials by creating a configuration object of type BarAuth and then adding the configuration name to spec.configurations. For information about the supported authentication credentials and details of how to create this configuration object, see BarAuth type (or BarAuth type if using the Dashboard) and Configuration reference.

If you want to use a custom server runtime image to deploy an integration server, use spec.pod.containers.runtime.image to specify this image.

spec.configurations[]

An array of existing configurations that you want to apply to one or more BAR files being deployed. These configurations must be in the same namespace as the integration server. To specify a configuration, use the metadata.name value that was specified while creating that configuration.

For information about creating configurations, see Configuration reference. To see examples of how to specify one or more values for spec.configurations, see Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI.

spec.createDashboardUsers

Determines what type of web users (user IDs) are created on an integration server to allow the App Connect Dashboard to connect to it with the appropriate permissions.

Valid values are true and false.

  • Applicable if spec.version resolves to 12.0.1.0-r1 or later: Defaults to false, which automatically causes web users to run with admin permissions. This default is relevant if you intend to deploy BAR files from an external repository without using the Dashboard.

    Set this value to true if you want to use the Dashboard to deploy BAR files from the content server or an external repository and would like to ensure that web users are automatically created with admin permissions, read-only permissions, and no access.

  • Applicable if spec.version resolves to 11.0.0.12-r1 or earlier: Defaults to true, which automatically creates web users (with admin permissions, read-only permissions, and no access) when the deployed BAR file is stored in the content server in the App Connect Dashboard (where spec.barURL identifies the generated URL in the content server).

false (Applicable for 12.0.1.0-r1 or later)

true (Applicable for 11.0.0.12-r1 or earlier)

spec.defaultAppName

A name for the default application for the deployment of independent resources.

DefaultApplication

spec.designerFlowsOperationMode

Indicate whether to create a Toolkit integration or a Designer integration.
  • disabled: Specify this value if you are deploying a Toolkit integration.
  • local: Specify this value to enable the use of locally deployed connectors for running integrations in the integration server. Applicable for a Designer integration only.

disabled

spec.designerFlowsType

Indicate the type of flow if creating a Designer integration.
  • api-flows: Specify this value to enable support for API flows only.
  • event-driven-or-api-flows: Specify this value to enable support for both event-driven and API flows. You must select this option if you are deploying an event-driven flow, but it can also be used if deploying an API flow. Note that the resource requirements are higher for this option.

This parameter is applicable for a Designer integration only, so if spec.designerFlowsOperationMode is set to disabled, you must omit spec.designerFlowsType.

This parameter is supported only for certain versions. See spec.version for details.

 

spec.disableRoutes

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

(Not applicable in a Kubernetes environment; will be ignored)

Indicate whether to disable the automatic creation of routes, which externally expose a service that identifies the set of integration server pods.

Valid values are true and false. Set this value to true to disable the automatic creation of external HTTP and HTTPS routes.

false

spec.enableMetrics

(Only applicable if spec.version resolves to 11.0.0.12-r1 or later)

Indicate whether to enable the automatic emission of metrics.

Valid values are true and false. Set this value to false to stop metrics from being emitted by default.

true

spec.env

(Only applicable if spec.version resolves to 12.0.1.0-r1 or later)

Define custom environment variables that will be set and used within the App Connect containers in the deployment. For example, you can set a container's timezone by declaring a TZ environment variable with a value that is set to a valid TZ identifier (such as Africa/Abidjan).

This parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema. Example:

spec:
  env:
  - name: MY_CUSTOM_ENV_VAR
    value: "100"

For more information, see Define Environment Variables for a Container in the Kubernetes documentation.
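
For example, the timezone setting that is mentioned above could be applied as follows:

spec:
  env:
  - name: TZ
    value: Africa/Abidjan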

spec.forceFlowHTTPS.enabled

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

Indicate whether to force all HTTP Input nodes and SOAP Input nodes in all deployed flows (including their usage for inbound connections to applications, REST APIs, and integration services) in the integration server to use Transport Layer Security (TLS).

Valid values are true and false.

When spec.forceFlowHTTPS.enabled is set to true, you must also ensure that the protocol in spec.service.endpointType is set to https.

false

spec.forceFlowHTTPS.secretName

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

Specify the name of a secret that stores a user-supplied public certificate/private key pair to use for enforcing TLS. (You can use tools such as keytool or OpenSSL to generate the certificate and key if required, but do not need to apply password protection.)

This secret is required if spec.forceFlowHTTPS.enabled is set to true.

You must create the secret in the namespace where the integration server will be deployed, and can do so from the Red Hat OpenShift web console, or from the Red Hat OpenShift or Kubernetes CLI. Use your preferred method to create the secret. For example, you can use the following Secret (YAML) resource to create the secret from the web console (by using the Import YAML icon) or from the CLI (by running oc apply -f resourceFile.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: secretName
  namespace: namespaceName
data:
  tls.crt: "base64Encoded_crt_publicCertificate"
  tls.key: "base64Encoded_key_privateKey"
type: kubernetes.io/tls
Or you can create the secret by running the following command:
oc create secret tls secretName --key filename.key --cert filename.crt
Note:

When you create the integration server, the IBM App Connect Operator checks for the certificate and key in the secret and adds them to a generated keystore that is protected with a password. The endpoint of the deployed integration is then secured with this certificate and key. If the secret can't be found in the namespace, the integration server will fail after 10 minutes.

If you need to update the certificate and key that are stored in the secret, you can edit the Secret resource to update the tls.crt and tls.key values. When you save, the keystore is regenerated and used by the integration server without the need for a restart.
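
For example, an integration server spec fragment that enforces TLS on all flow input endpoints might look like the following (a minimal sketch; secretName is the name of the TLS secret that you created):

spec:
  forceFlowHTTPS:
    enabled: true
    secretName: secretName
  service:
    endpointType: https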

spec.labels

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format labelKey: labelValue. For example:

spec:
  labels:
    key1: value1
    key2: value2

The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value.

spec.license.accept

An indication of whether the license should be accepted.

Valid values are true and false. To install, this value must be set to true.

false

spec.license.license

See Licensing reference for IBM App Connect Operator for the valid values.

spec.license.use

See Licensing reference for IBM App Connect Operator for the valid values.

spec.logFormat

The format used for the container logs that are output to the container's console.

Valid values are basic and json.

basic

spec.pod.containers.connectors.livenessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

1

spec.pod.containers.connectors.livenessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.connectors.livenessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.connectors.livenessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.pod.containers.connectors.readinessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

1

spec.pod.containers.connectors.readinessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

10

spec.pod.containers.connectors.readinessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

5

spec.pod.containers.connectors.readinessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

3

spec.pod.containers.connectors.resources.limits.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.connectors.resources.limits.memory

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.connectors.resources.requests.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.connectors.resources.requests.memory

(Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier)

The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.containers.designereventflows.livenessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

1

spec.pod.containers.designereventflows.livenessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.designereventflows.livenessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.designereventflows.livenessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.pod.containers.designereventflows.readinessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

1

spec.pod.containers.designereventflows.readinessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

10

spec.pod.containers.designereventflows.readinessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

5

spec.pod.containers.designereventflows.readinessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

4

spec.pod.containers.designereventflows.resources.limits.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.designereventflows.resources.limits.memory

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

750Mi

spec.pod.containers.designereventflows.resources.requests.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.designereventflows.resources.requests.memory

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

400Mi

spec.pod.containers.designerflows.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

3

1 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.designerflows.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.designerflows.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

30

5 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

3

1 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

180

10 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

10

5 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

30

3 (11.0.0.10-r2 or earlier)

spec.pod.containers.designerflows.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.designerflows.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.designerflows.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.designerflows.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi
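
For example, you might override the designerflows container's liveness probe and resource settings as follows (a sketch; the values shown are arbitrary alternatives to the defaults listed above):

spec:
  pod:
    containers:
      designerflows:
        livenessProbe:
          initialDelaySeconds: 600
          timeoutSeconds: 60
        resources:
          limits:
            cpu: 2
            memory: 768Mi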

spec.pod.containers.proxy.livenessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The number of times the liveness probe (which checks whether the runtime container is still running) can fail before taking action.

1

spec.pod.containers.proxy.livenessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long to wait (in seconds) before starting the liveness probe, which checks whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period.

60

spec.pod.containers.proxy.livenessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How often (in seconds) to perform the liveness probe that checks whether the runtime container is still running.

5

spec.pod.containers.proxy.livenessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out.

3

spec.pod.containers.proxy.readinessProbe.failureThreshold

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The number of times the readiness probe (which checks whether the runtime container is ready) can fail before taking action.

1

spec.pod.containers.proxy.readinessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long to wait (in seconds) before starting the readiness probe, which checks whether the runtime container is ready.

5

spec.pod.containers.proxy.readinessProbe.periodSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How often (in seconds) to perform the readiness probe that checks whether the runtime container is ready.

5

spec.pod.containers.proxy.readinessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out.

3

spec.pod.containers.proxy.resources.limits.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.proxy.resources.limits.memory

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.proxy.resources.requests.cpu

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.proxy.resources.requests.memory

(Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later)

The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.containers.runtime.image

The name of the custom server runtime image to use; for example, image-registry.openshift-image-registry.svc:5000/imageName. This image must be built from the version that is specified as the apiVersion value. Channels are not supported when custom images are used.

spec.pod.containers.runtime.imagePullPolicy

Indicate whether you want images to be pulled every time, never, or only if they're not present.

Valid values are Always, Never, and IfNotPresent.

IfNotPresent
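
As an illustration, the following sketch shows how a custom runtime image and pull policy might be specified together. The registry path and image name are placeholders rather than values taken from this reference, and the image must be built from the matching product version as described above.

spec:
  pod:
    containers:
      runtime:
        # Placeholder image reference; replace with your own built image
        image: image-registry.openshift-image-registry.svc:5000/myproject/my-ace-runtime
        imagePullPolicy: Always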

spec.pod.containers.runtime.lifecycle.postStart.exec.command[]

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

An array of (one or more) commands to execute immediately after the runtime container is created (or started).

The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status.

For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation.
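
By way of example, the following sketch runs a simple shell command when the runtime container starts. The command shown is purely illustrative; substitute whatever initialization your integration requires.

spec:
  pod:
    containers:
      runtime:
        lifecycle:
          postStart:
            exec:
              command:
                # Call out to a shell explicitly so that shell syntax can be used
                - sh
                - -c
                - "echo runtime container started >> /tmp/poststart.log"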

spec.pod.containers.runtime.lifecycle.postStart.httpGet.host

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the host name to connect to when performing the HTTP request on the runtime container pod immediately after it starts. Defaults to the pod IP. You can alternatively set Host in spec.pod.containers.runtime.lifecycle.postStart.httpGet.httpHeaders.

spec.pod.containers.runtime.lifecycle.postStart.httpGet.httpHeaders

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify one or more custom headers to set in the HTTP request to be performed on the runtime container pod immediately after it starts. For each header, specify a header field name and header field value in the following format:

spec:
  pod:
    containers:
      runtime:
        lifecycle:
          postStart:
            httpGet:
              httpHeaders:
                - name: headerFieldName1
                  value: headerFieldValue1
                - name: headerFieldName2
                  value: headerFieldValue2

spec.pod.containers.runtime.lifecycle.postStart.httpGet.path

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the path to access on the HTTP server when performing the HTTP request on the runtime container pod immediately after it starts.

spec.pod.containers.runtime.lifecycle.postStart.httpGet.scheme

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the scheme to use for connecting to the host when performing the HTTP request on the runtime container pod immediately after it starts.

HTTP

spec.pod.containers.runtime.lifecycle.preStop.exec.command[]

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

An array of (one or more) commands to execute inside the runtime container before its pod is terminated.

Use the spec.pod.containers.runtime.lifecycle.preStop.* settings to configure the lifecycle of the runtime container to allow existing transactions to complete before the pod is terminated due to an API request or a management event (such as failure of a liveness or startup probe, or preemption). This allows rolling updates to occur without breaking transactions (unless they are long running). The countdown for the pod's termination grace period begins before the preStop hook is executed.

The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status.

For example, this default preStop setting ensures that rolling updates do not result in lost messages on the runtime:
spec:
  pod:
    containers:
      runtime:
        lifecycle:
          preStop:
            exec:
              command:
                - sh
                - -c
                - "sleep 5"

For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation.

spec.pod.containers.runtime.lifecycle.preStop.httpGet.host

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the host name to connect to when performing the HTTP request on the runtime container pod before its termination. Defaults to the pod IP. You can alternatively set Host in spec.pod.containers.runtime.lifecycle.preStop.httpGet.httpHeaders.

spec.pod.containers.runtime.lifecycle.preStop.httpGet.httpHeaders

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify one or more custom headers to set in the HTTP request to be performed on the runtime container pod before its termination. For each header, specify a header field name and header field value in the following format:

spec:
  pod:
    containers:
      runtime:
        lifecycle:
          preStop:
            httpGet:
              httpHeaders:
                - name: headerFieldName1
                  value: headerFieldValue1
                - name: headerFieldName2
                  value: headerFieldValue2

spec.pod.containers.runtime.lifecycle.preStop.httpGet.path

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the path to access on the HTTP server when performing the HTTP request on the runtime container pod before its termination.

spec.pod.containers.runtime.lifecycle.preStop.httpGet.scheme

(Only applicable if spec.version resolves to 12.0.4.0-r2 or later)

Specify the scheme to use for connecting to the host when performing the HTTP request on the runtime container pod before its termination.

HTTP

spec.pod.containers.runtime.livenessProbe.failureThreshold

The number of times the liveness probe can fail before taking an action to restart the container. (The liveness probe checks whether the runtime container is still running or needs to be restarted.)

1

spec.pod.containers.runtime.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before performing the first probe to check whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.runtime.livenessProbe.periodSeconds

How often (in seconds) to perform a liveness probe to check whether the runtime container is still running.

10

spec.pod.containers.runtime.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out.

5

spec.pod.containers.runtime.readinessProbe.failureThreshold

The number of times the readiness probe can fail before taking an action to mark the pod as Unready. (The readiness probe checks whether the runtime container is ready to accept traffic.)

1

spec.pod.containers.runtime.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before performing the first probe to check whether the runtime container is ready.

10

spec.pod.containers.runtime.readinessProbe.periodSeconds

How often (in seconds) to perform a readiness probe to check whether the runtime container is ready.

5

spec.pod.containers.runtime.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out.

3
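
As a sketch, the following settings show how the runtime liveness and readiness probes might be tuned together. The values match the defaults in this table except for the liveness initial delay, which is increased (illustratively, to 600 seconds) for a slow-starting integration server.

spec:
  pod:
    containers:
      runtime:
        livenessProbe:
          initialDelaySeconds: 600   # illustrative increase for slow startup
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 1
        readinessProbe:
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 1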

spec.pod.containers.runtime.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

1

spec.pod.containers.runtime.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the runtime container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.runtime.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.runtime.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the runtime container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.containers.runtime.startupProbe.failureThreshold

(Only applicable if spec.version resolves to 12.0.3.0-r2 or later)

The number of times the startup probe can fail before taking action. (The startup probe checks whether the application in the runtime container has started. Liveness and readiness checks are disabled until the startup probe has succeeded.)

Note: If you use startup probes, ensure that spec.pod.containers.runtime.livenessProbe.initialDelaySeconds and spec.pod.containers.runtime.readinessProbe.initialDelaySeconds are unset.

For more information about startup probes, see Protect slow starting containers with startup probes in the Kubernetes documentation.

120

spec.pod.containers.runtime.startupProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 12.0.3.0-r2 or later)

How long to wait (in seconds) before performing the first probe to check whether the runtime application has started. Increase this value if your system cannot start the application in the default time period.

0

spec.pod.containers.runtime.startupProbe.periodSeconds

(Only applicable if spec.version resolves to 12.0.3.0-r2 or later)

How often (in seconds) to perform a startup probe to check whether the runtime application has started.

5
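
For example, the following sketch configures a startup probe for the runtime container by using the default values from this table, and leaves the liveness and readiness initial delays unset as noted for spec.pod.containers.runtime.startupProbe.failureThreshold.

spec:
  pod:
    containers:
      runtime:
        startupProbe:
          failureThreshold: 120   # allow up to 120 failed checks before taking action
          periodSeconds: 5        # check every 5 seconds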

spec.pod.containers.runtime.volumeMounts

(Only applicable if spec.version resolves to 11.0.0.11-r2 or later)

Details of where to mount one or more named volumes in the runtime container. Use with spec.pod.volumes.

Follows the Volume Mount specification at https://pkg.go.dev/k8s.io/api@v0.20.3/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation.

The following volume mounts are blocked:
  • /home/aceuser/ excluding /home/aceuser/ace-server/log
  • /opt/ibm/ace-11/*   (Applicable to 11.0.0.11-r2 and 11.0.0.12-r1)
  • /opt/ibm/ace-12/*   (Applicable to 12.0.1.0-r1 or later)

Specify custom settings for your volume mounts. For an example of these settings, see the Example: Enabling custom volume mounts section that is shown after this table.

 

spec.pod.containers.tracingagent.livenessProbe.failureThreshold

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

1

spec.pod.containers.tracingagent.livenessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.tracingagent.livenessProbe.periodSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.tracingagent.livenessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.pod.containers.tracingagent.readinessProbe.failureThreshold

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

1

spec.pod.containers.tracingagent.readinessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

10

spec.pod.containers.tracingagent.readinessProbe.periodSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

5

spec.pod.containers.tracingagent.readinessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

3

spec.pod.containers.tracingcollector.livenessProbe.failureThreshold

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

1

spec.pod.containers.tracingcollector.livenessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.pod.containers.tracingcollector.livenessProbe.periodSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.tracingcollector.livenessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.pod.containers.tracingcollector.readinessProbe.failureThreshold

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

1

spec.pod.containers.tracingcollector.readinessProbe.initialDelaySeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

10

spec.pod.containers.tracingcollector.readinessProbe.periodSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

5

spec.pod.containers.tracingcollector.readinessProbe.timeoutSeconds

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

3

spec.pod.hostAliases.hostnames[]

(Only applicable if spec.version resolves to 11.0.0.12-r1 or later)

One or more hostnames that you want to map to an IP address (as a host alias), to facilitate host name resolution. Use with spec.pod.hostAliases.ip.

Each host alias is added as an entry to a pod's /etc/hosts file.

Example settings:

spec:
  pod:
    hostAliases:
      - hostnames:
          - somehostname.com
          - anotherhostname.com
        ip: 192.0.2.0

For more information about host aliases and the hosts file, see the Kubernetes documentation.

spec.pod.hostAliases.ip

(Only applicable if spec.version resolves to 11.0.0.12-r1 or later)

An IP address that you want to map to one or more hostnames (as a host alias), to facilitate host name resolution. Use with spec.pod.hostAliases.hostnames[].

Each host alias is added as an entry to a pod's /etc/hosts file.

spec.pod.imagePullSecrets.name

The secret used for pulling images.
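
For instance, a pull secret might be referenced as follows; the secret name is a placeholder and must match a secret that exists in the namespace.

spec:
  pod:
    imagePullSecrets:
      name: my-registry-pull-secret   # placeholder secret name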

spec.pod.priority

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running.

spec.pod.priority specifies an integer value, which various system components use to identify the priority of the integration server pod. The higher the value, the higher the priority.

If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.pod.priorityClassName setting to populate this field with a resolved value.

spec.pod.priorityClassName

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

A priority class name that maps to the integer value of a pod priority in spec.pod.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods.

Valid values are as follows:

  • Specify either of these built-in Kubernetes classes, which mark a pod as critical and indicate the highest priorities: system-node-critical or system-cluster-critical. system-node-critical is the highest available priority.
  • To specify any other priority class, create a PriorityClass object with that class name. For more information, see Pod Priority and Preemption in the Kubernetes documentation.

If you do not specify a class name, the priority is set as follows:

  • The class name of an identified PriorityClass object, which has its globalDefault field set to true, is assigned.
  • If there is no PriorityClass object with a globalDefault setting of true, the priority of the pod is set to 0 (zero).
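
As an example, the following sketch assigns the built-in system-cluster-critical class to the integration server pod. Any other class name that you reference must already exist as a PriorityClass object.

spec:
  pod:
    priorityClassName: system-cluster-critical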

spec.pod.tolerations[].effect

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation.

If you need to specify one or more tolerations for a pod, you can use the collection of spec.pod.tolerations[].* parameters to define an array.

For spec.pod.tolerations[].effect, specify the taint effect that the toleration should match. (The taint effect on a node determines how that node reacts to a pod that is not configured with appropriate tolerations.) Leave the effect empty to match all taint effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute.

spec.pod.tolerations[].key

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint key that the toleration applies to. Leave the key empty and set spec.pod.tolerations[].operator to Exists to match all taint keys, values, and effects.

spec.pod.tolerations[].operator

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify an operator that represents a key's relationship to the value in spec.pod.tolerations[].value. Valid operators are Exists and Equal. Exists is equivalent to a wildcard for the toleration value, and indicates that the pod can tolerate all taints of a particular category.

Equal

spec.pod.tolerations[].tolerationSeconds

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Optionally specify a period of time in seconds that determines how long the pod stays bound to a node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect (which indicates that a pod without the appropriate toleration should immediately be evicted from the node).

By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately).

spec.pod.tolerations[].value

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint value that the toleration matches. If the operator is Exists, leave this value empty.
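
For illustration, the following sketch defines one toleration that matches a hypothetical dedicated=integration taint with the NoSchedule effect, and a second toleration that tolerates any NoExecute taint for 60 seconds. The taint key and value are placeholders.

spec:
  pod:
    tolerations:
      - key: dedicated          # placeholder taint key
        operator: Equal
        value: integration      # placeholder taint value
        effect: NoSchedule
      - operator: Exists        # matches any taint key and value
        effect: NoExecute
        tolerationSeconds: 60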

spec.pod.volumes

(Only applicable if spec.version resolves to 11.0.0.11-r2 or later)

Details of one or more named volumes that can be provided to the pod, to use for persisting data. Each volume must be configured with the appropriate permissions to allow the integration server to read or write to it as required.

Follows the Volume specification at https://pkg.go.dev/k8s.io/api/core/v1#Volume. For more information, see Volumes in the Kubernetes documentation.

Specify custom settings for your volume types. For an example of these settings, see the Example: Enabling custom volume mounts section that is shown after this table.

Use with spec.pod.containers.runtime.volumeMounts.

 

spec.replicas

The number of replica pods to run for each deployment.

Increasing the number of replicas will proportionally increase the resource requirements.

1

spec.router.https.host

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

(OpenShift only) An alias or DNS name that can optionally be used to point to the endpoint for HTTPS flows in the integration server. This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the HTTPS route. This parameter can optionally be used with spec.router.https.labels.

After a route is created, the spec.router.https.host setting cannot be changed.

Tip: If required, you can define a second route (with labels) on the integration server by using the spec.router.https2.host and spec.router.https2.labels parameters.
Note:

On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts.

If spec.disableRoutes is set to true, you cannot specify values for spec.router.https or spec.router.https2.

 

spec.router.https.labels

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

(OpenShift only) Specify one or more custom labels (as arbitrary metadata) to apply to the integration server's HTTPS flows route. Specify each label as a key/value pair in the format labelKey: labelValue. This parameter can optionally be used with spec.router.https.host.

The labels are applied when a route is created. If the labels are removed, the standard set of labels will be used.

 

spec.router.https2.host

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

(OpenShift only) An alias or DNS name that can optionally be used to point to an endpoint for HTTPS flows in the integration server. When set, a second route is generated as a duplicate of the HTTPS route. The name of this new route is the same as the default HTTPS route with -2 appended.

This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the HTTPS route. This parameter can optionally be used with spec.router.https2.labels.

After a route is created, the spec.router.https2.host setting cannot be changed. Deleting spec.router.https2.host from the CR will cause the route to be deleted.

Note:

On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts.

If spec.disableRoutes is set to true, you cannot specify values for spec.router.https or spec.router.https2.

 

spec.router.https2.labels

(Only applicable if spec.version resolves to 12.0.1.0-r4 or later)

(OpenShift only) Specify one or more custom labels (as arbitrary metadata) to apply to a second route for the integration server's HTTPS flows. Specify each label as a key/value pair in the format labelKey: labelValue. This parameter can optionally be used with spec.router.https2.host.

The labels are applied when a route is created. If the labels are removed, the standard set of labels will be used.
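
To illustrate, the following sketch sets a host and labels for the default HTTPS route and for a second route. The host names and labels are placeholders.

spec:
  router:
    https:
      host: flows.apps.example.com            # placeholder DNS alias
      labels:
        environment: test
    https2:
      host: flows-partner.apps.example.com    # placeholder DNS alias for the second route
      labels:
        environment: test
        audience: partner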

 

spec.router.timeout

(OpenShift only) The timeout value (in seconds) on the OpenShift routes.

30s

spec.router.webhook.host

(Only applicable if spec.version resolves to 12.0.7.0-r1 or later)

(OpenShift only) Applicable for a Designer integration, which runs an event-driven flow in which a Webhook callback URL is supplied for the event that triggers the flow. Specify a route to expose the callback URL externally.

This parameter can optionally be used with spec.router.webhook.labels.

spec.router.webhook.labels

(Only applicable if spec.version resolves to 12.0.7.0-r1 or later)

(OpenShift only) Specify one or more custom labels (as arbitrary metadata) to apply to the integration server's Webhook callback route. Specify each label as a key/value pair in the format labelKey: labelValue. This parameter can optionally be used with spec.router.webhook.host.

The labels are applied when a route is created.

spec.service.endpointType

(Not applicable in a Kubernetes environment; will be ignored)

Specify a transport protocol that defines whether the endpoint of the deployed integration is secured.

Valid values are as follows:

  • http: Choose this option if you are not using HTTPS-based REST API flows. When set to http, the endpoint is configured as not secured with the http protocol. The http option uses port 7800 by default.
  • https: Choose this option to indicate that you are using HTTPS-based REST API flows (with TLS configured). To use this option, you must have configured all HTTP Input nodes and SOAP Input nodes in all flows in the integration to use TLS either by setting spec.forceFlowHTTPS.enabled to true or by using mechanisms such as the server.conf.yaml file while developing the flows in IBM App Connect Enterprise. When set to https (with the prerequisite TLS configuration), the endpoint for the deployed integration is configured as secured with the https protocol. The https option uses port 7843 by default.

http
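
For example, the following sketch switches the deployed endpoint to HTTPS, on the assumption that every HTTP Input node and SOAP Input node in the flows is configured to use TLS (in this case by setting spec.forceFlowHTTPS.enabled to true).

spec:
  forceFlowHTTPS:
    enabled: true
  service:
    endpointType: https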

spec.service.ports[].name

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The name of a port definition on the service (defined by spec.service.type), which is created for accessing the set of pods. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character.

If you need to expose more than one port for the service, you can use the collection of spec.service.ports[].fieldName parameters to configure multiple port definitions as an array.

spec.service.ports[].nodePort

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The port on which each node listens for incoming requests for the service. Applicable when spec.service.type is set to NodePort to expose the service externally.

The port number must be in the range 30000 to 32767.

Ensure that this port is not being used by another service. You can check which node ports are already in use by running the following command and then checking under the PORT(S) column in the output:

oc get svc -n namespaceName

spec.service.ports[].port

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The port that exposes the service to pods within the cluster.

spec.service.ports[].protocol

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The protocol of the port. Valid values are TCP, SCTP, and UDP.

spec.service.ports[].targetPort

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The port on which the pods will listen for connections from the service.

The port number must be in the range 1 to 65535.

spec.service.type

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

The type of service to create for accessing the set of pods:

  • ClusterIP: Specify this value to expose the service internally, for access by applications inside the cluster. This is the default.
  • NodePort: Specify this value to expose the service at a static port (specified by using spec.service.ports[].nodePort), for external traffic. If you want to use NodePort, you must work with your cluster administrator to configure external access to the cluster. For more information, see Configuring ingress cluster traffic using a NodePort in the OpenShift documentation.
    Example settings:
    spec:
      service:
        ports:
          - name: config-abc
            nodePort: 32000
            port: 9910
            protocol: TCP
            targetPort: 9920
        type: NodePort
    Note:
    When you set the service type to NodePort, the existing default ports of 7800, 7843, and 7600 (which are used for the http and https transports, and the administration REST API) are automatically assigned NodePort values on the service. If you try to manually specify node ports for these default ports by adding a spec.service.ports[].* array in the integration server CR, you will get an error. To set node ports for the default ports, set spec.service.type to NodePort and omit the spec.service.ports[].* section, as shown in the following example.
    spec:
      service:
        type: NodePort
    To identify the NodePort values that are automatically assigned to the default ports, you can run the oc describe service or oc get service command. In the following example, integrationServerName-is represents the service name, where integrationServerName is the metadata.name value for the integration server.
    oc describe service integrationServerName-is

ClusterIP

spec.tracing.enabled

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

An indication of whether to enable transaction tracing, which will push trace data to the IBM Cloud Pak for Integration Operations Dashboard to aid with problem investigation and troubleshooting. An Operations Dashboard (Integration tracing) instance must be available to process the required registration approval for tracing.

Valid values are true and false.

Note: Support for the Operations Dashboard is restricted to these versions, where it is also deprecated:
  • IBM Cloud Pak for Integration 2022.4.1 or earlier
  • Integration servers at version 12.0.8.0-r1 or earlier

If you want to implement tracing, you can enable user or service trace for the integration server as described in Trace reference. You can also configure OpenTelemetry tracing, although support is available only for integration runtimes. For more information, see Configuring OpenTelemetry tracing for integration runtimes.

true

spec.tracing.namespace

(Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier)

The namespace where the Operations Dashboard (Integration tracing) was deployed. Applicable only if spec.tracing.enabled is set to true.
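
As a sketch (applicable to 12.0.8.0-r1 or earlier only), the following settings enable transaction tracing and identify the namespace where the Operations Dashboard was deployed. The namespace value is a placeholder.

spec:
  tracing:
    enabled: true
    namespace: tracing-namespace   # placeholder namespace for the Operations Dashboard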

spec.version

The product version that the integration server is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel.

If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster.

To view the available values that you can choose from and the licensing requirements, see spec.version values and Licensing reference for IBM App Connect Operator.

If you specify a fully qualified version of 11.0.0.10-r2 or earlier, or specify a channel that resolves to 11.0.0.10-r2 or earlier, you must omit spec.designerFlowsType because it is not supported in those versions.

12.0

Default affinity settings

The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated.

You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity will be ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              <copy of the pod labels>
          topologyKey: kubernetes.io/hostname
        weight: 100
Example: Enabling custom volume mounts

The following example illustrates how to add an empty directory (as a volume) to the /cache folder in an integration server's pod:

spec:
  pod:
    containers:
      runtime:
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
    volumes:
    - name: cache-volume
      emptyDir: {}

Load balancing

When you deploy an integration server, routes are created by default in Red Hat OpenShift to externally expose a service that identifies the set of pods where the integration runs. Load balancing is applied when incoming traffic is forwarded to replica pods, and the routing algorithm used depends on the type of security you've configured for your flows:

  • http flows: These flows use a non-SSL route that defaults to a round-robin approach where each replica is sent a message in turn.
  • https flows (integration servers at version 12.0.2.0-r1 or earlier): These flows use an SSL-passthrough route that defaults to the source approach in which the source IP address is used. This means that a single source application will feed a specific replica.
  • https flows (integration servers at version 12.0.2.0-r2 or later): These flows use an SSL-passthrough route that has been modified to use the round-robin approach in which each replica is sent a message in turn.

To change the load balancing configuration that a route uses, you can add an appropriate annotation to the route resource. For example, the following CR setting will switch the route to use the round-robin algorithm:

spec:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin

For more information about the available annotation options, see Route-specific annotations in the Red Hat OpenShift documentation.