App Connect Integration Runtime reference

Introduction

The App Connect Integration Runtime API enables you to create integration runtimes, which run integrations that were created in App Connect Designer or the IBM® App Connect Enterprise Toolkit. Integration runtimes enable you to run an integration solution as an always-on deployment that is continuously available in your cluster. Integration runtimes also provide serverless support with Knative, which enables you to deploy API flows from App Connect Designer into containers that start on demand when requests are received. You can also configure OpenTelemetry tracing of Toolkit flows to make your integration runtime deployments observable.

Note:

Integration runtimes can also be created, updated, or deleted from an App Connect Dashboard instance. For more information, see the App Connect Dashboard documentation.

In the Dashboard, integration runtimes run as always-on deployments. Serverless Knative Service deployments, which run Designer API flows only, are not supported in the Dashboard and can be created only from the Red Hat OpenShift web console or CLI. Serverless integration runtimes are also not visible in the Dashboard.

Prerequisites

Red Hat OpenShift SecurityContextConstraints requirements

IBM App Connect runs under the default restricted SecurityContextConstraints.

Resources required

Minimum recommended requirements:

  • Toolkit integration for compiled BAR files:
    • CPU: 0.1 Cores
    • Memory: 0.35 GB
  • Toolkit integration for uncompiled BAR files:
    • CPU: 0.3 Cores
    • Memory: 0.35 GB
  • Designer-only integration or hybrid integration:
    • CPU: 1.7 Cores
    • Memory: 1.77 GB

For information about how to configure these values, see Custom resource values.
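For example, the following minimal sketch shows how the recommended requests for a Toolkit integration with uncompiled BAR files might be expressed in an integration runtime CR. The structure and values are taken from the full examples later in this topic and are illustrative only:

spec:
  template:
    spec:
      containers:
        - name: runtime
          resources:
            requests:
              cpu: 300m
              memory: 368Mi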

If you are building and running your own containers, you can allocate less than 0.1 Cores for Toolkit integrations if necessary. However, reducing the CPU for the integration runtime container might affect the startup time and performance of your flow. If you experience issues with performance, or with starting and running your integration runtime, increase the CPU to at least 0.1 Cores and check whether the problem persists before you contact IBM support.

Mechanisms for providing BAR files to an integration runtime

Integration servers and integration runtimes require two types of resources: BAR files that contain development resources, and configuration files (or objects) for setting up the integration servers or integration runtimes. When you create an integration server or integration runtime, you are required to specify one or more BAR files that contain the development resources of the App Connect Designer or IBM App Connect Enterprise Toolkit integrations that you want to deploy.

A number of mechanisms are available for providing these BAR files to integration servers and integration runtimes. Choose the mechanism that meets your requirements.

The following sections describe each mechanism and state how many BAR files it can provide to a single integration server or integration runtime.

Content server

When you use the App Connect Dashboard to upload or import BAR files for deployment to integration servers or integration runtimes, the BAR files are stored in a content server that is associated with the Dashboard instance. The content server is created as a container in the Dashboard deployment and can either store uploaded (or imported) BAR files in a volume in the container’s file system, or store them within a bucket in a simple storage service that provides object storage through a web interface.

The location of a BAR file in the content server is generated as a BAR URL when a BAR file is uploaded or imported to the Dashboard. This location is specified by using the Bar URL field or spec.barURL parameter in the integration server or integration runtime custom resource (CR).

While creating an integration server or integration runtime, you can choose only one BAR file to deploy from the content server and must reference its BAR URL in the content server. The integration server or runtime then uses this BAR URL to download the BAR file on startup, and processes the applications appropriately.

If you are creating an integration server or integration runtime from the Dashboard, and use the Integrations view to specify a single BAR file to deploy, the location of this file in the content server will be automatically set in the Bar URL field or spec.barURL parameter in the Properties (or Server) view. For more information, see Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources.

If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, and want to deploy a BAR file from the content server, you must obtain the BAR file location from the BAR files page (which presents a view of the content server) in the Dashboard. You can do so by using Display BAR URL in the BAR file's options menu to view and copy the supplied URL. You can then paste this value in spec.barURL in the integration server or integration runtime custom resource (CR). For more information, see Integration Server reference: Creating an instance and Integration Runtime reference: Creating an instance.

The location of a BAR file in the content server is typically generated in the following format:

  • Integration server:

    https://dashboardName-dash:3443/path?token

  • Integration runtime:

    https://dashboardName-dash.namespaceName:3443/path?token

Where:
  • dashboardName is the Dashboard name (that is, the metadata.name value).
  • path is a generated (and static) path.
  • token is a generated (and static) token. (This token is also stored in the content server.)
  • namespaceName is the namespace (or project) where the Dashboard is deployed.

For example:

https://mydashboardname-dash:3443/v1/directories/CustomerDbV1?0a892497-ea3b-4961-aefb-bc0c36479678

https://mydashboardname-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?9b7aa053-656d-4a30-a31c-123a45f8ebfd
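For example, to deploy the second of these BAR files, you would reference its generated BAR URL in the integration runtime CR as shown in the following sketch (the same structure appears in the full examples in Creating an instance):

spec:
  barURL:
    - >-
      https://mydashboardname-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?9b7aa053-656d-4a30-a31c-123a45f8ebfd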

BAR files per integration server or integration runtime: 1

External repository

(Applicable only if spec.version resolves to 12.0.1.0-r1 or later)

While creating an integration server or integration runtime, you can choose to deploy multiple BAR files, which are stored in an external HTTPS repository system, to the integration server or integration runtime. You might find this option useful if you have set up continuous integration and continuous delivery (CI/CD) pipelines to automate and manage your DevOps processes, and are building and storing BAR files in a repository system such as JFrog Artifactory.

This option enables you to directly reference one or more BAR files in your integration server or integration runtime CR without the need to manually upload or import the BAR files to the content server in the App Connect Dashboard or build a custom image. You will need to provide basic (or alternative) authentication credentials for connecting to the external endpoint where the BAR files are stored, and can do so by creating a configuration object of type BarAuth. When you create your integration server or integration runtime, you must then reference this configuration object.

If you are creating an integration server or integration runtime from the Dashboard, you can use the Configuration view to create (and select) a configuration object of type BarAuth that defines the required credentials. You can then use the Properties (or Server) view to specify the endpoint locations of one or more BAR files in the Bar URL field or as the spec.barURL value. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set the following parameter:
  • Integration server:

    Ensure that spec.createDashboardUsers is set to true.

  • Integration runtime:

    Ensure that spec.dashboardUsers.bypassGenerate is set to false.

For more information, see BarAuth type, Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources.
If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, you must create a configuration object of type BarAuth that defines the required credentials, as described in Configuration reference and BarAuth type. When you create the integration server or integration runtime CR, you must specify the name of the configuration object in spec.configurations and then specify the endpoint locations of one or more BAR files in spec.barURL. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set the following parameter:
  • Integration server:

    Ensure that spec.createDashboardUsers is set to true.

  • Integration runtime:

    Ensure that spec.dashboardUsers.bypassGenerate is set to false.

For more information, see Integration Server reference: Creating an instance and Integration Runtime reference: Creating an instance.
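For illustration, a minimal sketch of such a configuration object is shown below. The metadata.name value (barauth-config) is a placeholder that you then list in spec.configurations, the type string barauth is assumed to follow the same lowercase convention as other configuration types (such as setdbparms), and the credential payload that you Base64 encode into spec.data must be in the format that is described in BarAuth type:

apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: barauth-config
  namespace: ace-test
spec:
  type: barauth
  data: <Base64-encoded credentials in the format described in BarAuth type>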
You can specify multiple BAR files as follows:
  • Integration server:

    Specify the URLs in the Bar URL field or in spec.barURL by using a comma-separated list; for example:

    https://artifactory.com/myrepo/getHostAPI.bar,https://artifactory.com/myrepo/CustomerDbV1.bar

  • Integration runtime:
    Specify each URL in a separate Bar URL field by using the Add button, or specify the URLs in spec.barURL as shown in the following example:
    spec:
      barURL:
        - 'https://artifactory.com/myrepo/getHostAPI.bar'
        - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
Tip: If you are using GitHub as an external repository, you must specify the raw URL in the Bar URL field or in spec.barURL. For example:
https://raw.github.ibm.com/somedir/main/bars/getHostAPI.bar
https://github.com/johndoe/somedir/raw/main/getHostAPI.bar
https://raw.githubusercontent.com/myusername/myrepo/main/My%20API.bar

Some considerations apply if deploying multiple BAR files:

  • Ensure that all of the applications can coexist (with no names that clash).
  • Ensure that you provide all of the configurations that are needed for all of the BAR files.
  • All of the BAR files must be accessible by using the single set of credentials that are specified in the configuration object of type BarAuth.

BAR files per integration server or integration runtime: Multiple

Custom image

You can build a custom server runtime image that contains all the configuration for the integration server or integration runtime, including all the BAR files or applications that are required, and then use this image to deploy an integration server or integration runtime.

When you create the integration server or integration runtime CR, you must reference this image by using the following parameter:
  • Integration server:

    spec.pod.containers.runtime.image

  • Integration runtime:

    spec.template.spec.containers[].image

For example:

image-registry.openshift-image-registry.svc:5000/imageName

This image must be built from the version that is specified as the spec.version value in the CR. Channels are not supported when custom images are used.
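As an illustrative sketch only (the image path is a placeholder, and the version shown is simply an example of a fully qualified version rather than a channel), the relevant settings in an integration runtime CR might look like this:

spec:
  version: 13.0.2.0-r1
  template:
    spec:
      containers:
        - name: runtime
          image: image-registry.openshift-image-registry.svc:5000/ace-test/my-custom-ace-image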

BAR files per integration server or integration runtime: Multiple

Pulling BAR files from a repository that is located behind a proxy

If you configure an integration runtime at version 13.0.2.0-r1 or later to pull BAR files from a file repository that sits behind a proxy, additional configuration steps are required.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Complete the following steps:

  1. Update the Subscription resource for the IBM App Connect Operator with the following environment variables to enable the Operator to pull the BAR files and analyze them to identify what type of flows are configured. 

    1. From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
    2. Run the oc edit or kubectl edit command to partially update the subscription, where namespaceName is the namespace where the Operator is installed.
      Red Hat OpenShift:
      oc edit subscription ibm-appconnect -n namespaceName
      Kubernetes:
      kubectl edit subscription ibm-appconnect -n namespaceName

      The Subscription CR automatically opens in the default text editor for your operating system.

    3. Update the spec section of the file as follows.
      Note:
      • To obtain the values (valueX and valueY) for HTTP_PROXY and HTTPS_PROXY, contact the team that maintains the proxy.
      • If you are running services that need to be connected to directly, additional endpoints might be required for NO_PROXY.
      spec:
        config:
          env:
            - name: HTTP_PROXY
              value: valueX
            - name: HTTPS_PROXY
              value: valueY
            - name: NO_PROXY
              value: .cluster.local,.svc,10.0.0.0/8,127.0.0.1,172.0.0.0/8,192.168.0.0/16,localhost
    4. Save the YAML definition and close the text editor to apply the changes.
  2. For every integration runtime that references a BAR file which is located behind a proxy, add the proxy to the IntegrationRuntime CR definition by updating the spec section to declare a set of environment variables.
    Note:
    • To obtain the values (valueX and valueY) for HTTP_PROXY and HTTPS_PROXY, contact the team that maintains the proxy.
    • If you are running services that need to be connected to directly, additional endpoints might be required for NO_PROXY.
    spec:
      template:
        spec:
          containers:
            - name: runtime 
              env:
                - name: HTTP_PROXY
                  value: valueX
                - name: HTTPS_PROXY
                  value: valueY
                - name: NO_PROXY
                  value: .cluster.local,.svc,10.0.0.0/8,127.0.0.1,172.0.0.0/8,192.168.0.0/16,localhost

    You can declare these environment variables while creating the integration runtime or you can update the integration runtime by editing its CR. For example, you can run the oc edit or kubectl edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance and namespaceName is the namespace where the instance is deployed.

    Red Hat OpenShift:
    oc edit integrationruntime instanceName -n namespaceName
    Kubernetes:
    kubectl edit integrationruntime instanceName -n namespaceName

Configuring OpenTelemetry tracing for integration runtimes

If you are deploying an integration runtime for a Toolkit integration, you can configure OpenTelemetry tracing for all message flows in the integration runtime, and export span data to an OpenTelemetry collector.

When OpenTelemetry is enabled for an integration runtime, spans are created for all message flow nodes that support OpenTelemetry, including callable flow nodes.

For supported transport input nodes, spans are created and then run until the message flow transaction for the flow is completed or rolled back. For supported transport request nodes, spans are created and then run for the duration of the node interaction with the transport (for example, sending and receiving an HTTP message). For information about the transport message flow nodes that support OpenTelemetry trace, see Configuring OpenTelemetry trace for an integration server in the IBM App Connect Enterprise documentation.

An Observability agent or backend system such as Instana, Jaeger, or Zipkin must be available and configured to receive OpenTelemetry data. When you deploy an integration runtime, configuration options are provided for you to enable OpenTelemetry tracing.
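In the integration runtime CR, tracing is switched on by setting spec.telemetry.tracing.openTelemetry.enabled to true, as in the following minimal sketch; depending on your observability backend, you might also need to set the protocol, endpoint, and TLS parameters that are described in Creating an instance:

spec:
  telemetry:
    tracing:
      openTelemetry:
        enabled: true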

Instana-specific configuration

If you are using an Instana agent for observability, you must activate OpenTelemetry support in the host agent to enable the reception of OpenTelemetry data in that host agent. You can do so by updating your host agent's configuration.yaml file as follows:
com.instana.plugin.opentelemetry:
  enabled: true

For more information, see Activating OpenTelemetry Support in the Instana documentation.

Configuring Knative serverless support on Red Hat OpenShift

IBM App Connect provides support for serverless deployments of your integration runtimes by using Knative. Knative is an open source solution that provides tools and utilities to enable containers to run as serverless workloads on Kubernetes clusters. This model enables resources to be provisioned on demand, and to be scaled based on requests.

Restriction:
  • Serverless support is available only on Red Hat OpenShift.
  • Serverless support is available only with the AppConnectEnterpriseNonProductionFREE and CloudPakForIntegrationNonProductionFREE licenses.
  • Serverless support is restricted to BAR files that contain API flows only, which are exported from App Connect Designer or App Connect on IBM Cloud. As an additional restriction, BAR files that were created before May 2020 are not supported in their initial state and need to be updated to make them suitable for serverless deployment. For more information, see Configuration requirement for BAR files that were created before May 2020.

As a one-time requirement for creating serverless deployments of integration runtimes, you must install the Knative components, which include Knative Serving, in your cluster, and enable required Knative flags. You must also ensure that a pull secret is available for accessing the required App Connect images. For more information, see Installing and configuring Knative Serving in your Red Hat OpenShift cluster.

For each serverless deployment that you want to create for an integration runtime, you must also enable (and optionally configure) serverless Knative service settings in the custom resource, as described in Creating an instance. The installed Knative Serving component will deploy and run containers as Knative services, and manage the state, revision tracking, routing, and scaling of these services.

To see an example that describes how to configure, run, and test a serverless deployment, see the Introduction to Serverless with the App Connect Operator blog.

Configuration requirement for BAR files that were created before May 2020

Older Designer API BAR files that were created before May 2020 need to be updated to make them valid for serverless deployment. Before attempting a serverless deployment of such a BAR file, you must update the BAR file configuration by running the ibmint apply overrides command to override a set of values in the broker archive deployment descriptor:

  1. Create a text file with a preferred name (filename.txt), to define the overrides to apply.
  2. To view the properties that you need to override in the BAR file (for example, flow_name.bar), complete the following steps:
    1. Extract the contents of the BAR file to a preferred directory. Then, locate the compressed file named flow_name.appzip and extract its contents, which include a META-INF/broker.xml file.
    2. Open the META-INF/broker.xml file to view the configurable properties, which are specified in the following format:

      <ConfigurableProperty uri="xxxx"/>

      <ConfigurableProperty override="someValue" uri="xxxx"/>

    3. Locate each entry with a uri value that is specified as follows, where subflowName identifies an operation from the original API flow:
      {{subflowName}}#HTTP Request.URLSpecifier
      {{subflowName}}#HTTP Request (no auth).URLSpecifier

      Typically, each operation or subflowName contains a pair of HTTP Request.URLSpecifier and HTTP Request (no auth).URLSpecifier entries. For example, if the original API flow defines two operations to create a customer and retrieve a customer by ID, the relevant entries in the META-INF/broker.xml file might look like this:

      ...
      <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_create#HTTP Request.URLSpecifier"/>
      <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_create#HTTP Request (no auth).URLSpecifier"/>
      ...
      <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_findById#HTTP Request.URLSpecifier"/>
      <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_findById#HTTP Request (no auth).URLSpecifier"/>
      ...
  3. In the filename.txt file, add lines in the following format to override these properties:
    subflowName#HTTP Request.URLSpecifier=http://localhost:3002
    subflowName#HTTP Request (no auth).URLSpecifier=http://localhost:3002

    For example:

    Customer_create#HTTP Request.URLSpecifier=http://localhost:3002
    Customer_create#HTTP Request (no auth).URLSpecifier=http://localhost:3002
    Customer_findById#HTTP Request.URLSpecifier=http://localhost:3002
    Customer_findById#HTTP Request (no auth).URLSpecifier=http://localhost:3002

  4. Save and close the filename.txt override file.
  5. Run the following command to apply the overrides to the BAR file:
    ibmint apply overrides overridesFilePath --input-bar-file barfilePath --output-bar-file newbarfilePath
    Where:
    • overridesFilePath is the name and path of the filename.txt override file.
    • barfilePath is the name and path of the existing flow_name.bar file that you want to update.
    • newbarfilePath is the name and path of a new BAR file that will be generated with the updated configuration.

    For example:

    ibmint apply overrides /some/path/filename.txt --input-bar-file /some/path/flow_name.bar --output-bar-file /some/path/newflow_name.bar
  6. Use the updated BAR file to create your integration runtime.

Creating an instance

You can create an integration runtime by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.

The supplied App Connect Enterprise base image includes an IBM MQ client for connecting to remote queue managers that are within the same Red Hat OpenShift cluster as your deployed integration runtimes, or in an external system such as an appliance.

Before you begin

  • Ensure that the Prerequisites are met.
  • Prepare the BAR files that you want to deploy to the integration runtime. For more information, see Mechanisms for providing BAR files to an integration runtime.
  • Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.
    Namespace restriction for an instance, server, configuration, or trace:

    The namespace in which you create an instance or object must be no more than 40 characters in length.

Creating an instance from the Red Hat OpenShift web console

When you create an integration runtime, you can define which configurations you want to apply to the runtime.

  • If required, you can create configuration objects before you create an integration runtime and then add references to those objects while creating the runtime. For information about how to use the Red Hat OpenShift web console to create a configuration object before you create an integration runtime, see Creating a configuration object from the Red Hat OpenShift web console.
  • If you have existing configuration objects that you want the integration runtime to reference, you can add those references while creating the runtime, as described in the steps that follow.

To create an integration runtime by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Runtime tab. Any previously created integration runtimes are displayed in a table.
  6. Click Create IntegrationRuntime.

    From the Details tab on the Operator details page, you can also locate the Integration Runtime tile and click Create instance to specify installation settings for the integration runtime.

  7. As a starting point, click YAML view to switch to the YAML editor. Then, either copy and paste one of the YAML samples from Creating an instance from the Red Hat OpenShift or Kubernetes CLI into the YAML editor, or try one of the YAML samples from the Samples tab.

    Update the content of the YAML editor with the parameters and values that you require for this integration runtime. The YAML editor offers a finer level of control over your installation settings, but you can switch between this view and the Form view.

    • To view the full set of parameters and values available, see Custom resource values.
    • For licensing information, see Licensing reference for IBM App Connect Operator.
    • Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify one or more BAR files in an external endpoint, as shown in the following examples. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration runtime.
      spec:
        barURL:
          - >-
            https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?873fe600-9ac6-4096-c00f-55e361fec2e5
      spec:
        barURL:
          - 'https://artifactory.com/myrepo/getHostAPI.bar'
          - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
    • You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
      spec:
        configurations:
          - odbc-ini-data

      or

      spec:
        configurations:
          - odbc-ini-data
          - accountsdata

      or

      spec:
        configurations: ["odbc-ini-data", "accountsdata"]

      For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.

      Note:

      If this integration runtime contains a callable flow, you must configure the integration runtime to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration runtime as follows:

      1. From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
        oc get switchserver switchName
      2. Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
        NAME      RESOLVEDVERSION   CUSTOMIMAGES   STATUS   AGENTCONFIGURATIONNAME   AGE
        default   13.0.6.0-r1       false          Ready    default-agentx           1h
      3. Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
        configurations:
          - default-agentx

      A configuration object of type REST Admin SSL files (or adminssl) is created and applied by default to the integration runtime to provide self-signed TLS certificates for secure communication between the App Connect Dashboard and the runtime. This configuration object is created from a predefined ZIP archive, which contains a set of PEM files named ca.crt.pem, tls.crt.pem, and tls.key.pem. A secret is also auto generated to store the Base64-encoded content of this ZIP file. When you deploy the integration runtime, the configuration name is stored in spec.configurations as integrationRuntimeName-ir-adminssl, where integrationRuntimeName is the metadata.name value for the integration runtime. For more information about this configuration type, see REST Admin SSL files type.

    • If you are deploying an integration runtime for a Toolkit integration and want to configure OpenTelemetry tracing for all message flows, you can use the spec.telemetry.tracing.openTelemetry.* parameters to enable OpenTelemetry tracing and configure your preferred settings. An example of the standard settings is as follows:
      spec:
        telemetry:
          tracing:
            openTelemetry:
              tls:
                secretName: mycert-secret
                caCertificate: ca.crt
              enabled: true
              protocol: grpc
              endpoint: 'status.hostIP:4317'
      Note:
      • If you are using Instana as your Observability agent, setting spec.telemetry.tracing.openTelemetry.enabled to true is typically the only configuration needed for OpenTelemetry tracing, and you do not need to configure any other settings.
      • In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can use spec.telemetry.tracing.openTelemetry.endpoint to specify a different IP address and port for the agent (on a different physical worker node).
      • You can configure additional OpenTelemetry properties by using a server.conf.yaml file, which contains an additional set of OpenTelemetry properties. To configure these additional properties, use the server.conf.yaml file to create a configuration object of type server.conf.yaml that can be applied to the integration runtime. For more information, see Configuring additional OpenTelemetry properties by using the server.conf.yaml file.
    • If you want to add one or more topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster, use the Template/spec/Advanced configuration/Topology Spread Constraints fields or the spec.template.spec.topologySpreadConstraint[].* parameters. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains. These settings are applicable only for a channel or version that resolves to 13.0.4.1-r1 or later. For information about the values that you can specify, see the spec.template.spec.topologySpreadConstraint[].* parameters in Custom resource values.
    • If you want to create a Knative serverless deployment for an integration runtime that contains a Designer API flow on Red Hat OpenShift, you can enable Knative Serving by setting the spec.serverless.knativeService.enabled parameter to true. For example:
      spec:
        serverless:
          knativeService:
            enabled: true

      Knative Serving must be installed and configured in your cluster. For more information, see Configuring Knative serverless support on Red Hat OpenShift and Installing and configuring Knative Serving in your Red Hat OpenShift cluster.

    • Desired run state (default: running): Use this field to indicate that you want to stop or start the integration runtime:
      • stopped: Stops the integration runtime if running, scales down its replica pods to zero (0), and changes the state to Stopped. (The original number of replicas is retained in the spec.replicas setting in the integration runtime CR.)

        You can choose to create your integration runtime in a Stopped state if preferred, or you can edit the settings of your running integration runtime to stop it.

      • running: Starts the integration runtime when in a Stopped state, starts the original number of replica pods (as defined in spec.replicas), and changes the state to Ready. This is the default option so if you leave the field blank, a value of running is assumed.

      The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods.

      This field is applicable only for a channel or version that resolves to 13.0.2.2-r1 or later.

    • Ingress: Use these fields to automatically create ingress resources for your deployed integration runtime. In Kubernetes environments, ingress resources are required to expose your integration runtimes to external traffic.

      These fields are applicable only for an IBM Cloud Kubernetes Service environment, and for a channel or version that resolves to 13.0.2.1-r1 or later.

      • Ingress/Enabled: Indicate whether to automatically create ingress resources for your deployed integration runtime. The creation of ingress resources for an integration runtime is disabled by default.
      • Ingress/Domain: If you do not want the ingress routes to be constructed with the IBM-provided ingress subdomain of your IBM Cloud Kubernetes Service cluster, specify a preferred custom subdomain that is created in the cluster.

        This field is displayed only when Ingress/Enabled is set to true.

    • NodePort Service/Advanced configuration/List of custom ports to expose: Use these fields to specify an array of custom ports for a dedicated NodePort service that is created for accessing the set of pods. Click Add List of custom ports to expose to display a group of fields for the first port definition on the service. If you need to expose more than one port for the NodePort service, use Add List of custom ports to expose to configure additional port definitions.

      When you complete the NodePort Service fields, the IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for access. The NodePort service is created in the same namespace as the integration runtime and is named in the format integrationRuntimeName-np-ir.

      For more information about completing these fields, see the spec.nodePortService.ports[].* parameters in Custom resource values.

    • Java/Java version: Use this field to specify which Java Runtime Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime. This field is applicable only for a channel or version that resolves to 13.0.5.0-r1 or later.

      Limitations apply to some message flow nodes and capabilities, and to some types of security configuration when certain JRE versions are used with IBM App Connect Enterprise. Use this field to specify a Java version that is compatible with the content of the Toolkit integration that is deployed as an integration runtime. Applications that are deployed to an integration runtime might fail to start if the Java version is incompatible.

      Valid values are 8 or 17 (the default). If you leave this field blank, the default value is used.

  8. To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form.
  9. Click Create to start the deployment. An entry for the integration runtime is shown in the IntegrationRuntimes table, initially with a Pending status.
  10. Click the integration runtime name to view information about its definition and current status.

    On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator.

    When the deployment is complete, the status is shown as Ready in the IntegrationRuntimes table.

Creating an instance from the Red Hat OpenShift or Kubernetes CLI

When you create an integration runtime, you can define which configurations you want to apply to the integration runtime.

  • If required, you can create configuration objects before you create an integration runtime and then add references to those objects while creating the integration runtime. For information about how to use the CLI to create a configuration object before you create an integration runtime, see Creating a configuration object from the Red Hat OpenShift CLI.
  • If you have existing configuration objects that you want the integration runtime to reference, you can add those references while creating the integration runtime, as described in the steps that follow.

To create an integration runtime by using the Red Hat OpenShift or Kubernetes CLI, complete the following steps.

  1. From your local computer, create a YAML file that contains the configuration for the integration runtime that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the integration runtime; this should be the same namespace where the other App Connect instances or resources are created.

    The following examples (Example 1 and Example 2) show an integration runtime CR for a standard Designer or Toolkit integration, with no requirement for OpenTelemetry tracing or serverless support.

    Example 1 (Red Hat OpenShift):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: customer-api
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: CloudPakForIntegrationNonProduction
      template:
        spec:
          containers:
            - name: runtime
              resources:
                requests:
                  cpu: 300m
                  memory: 368Mi
      logFormat: basic
      barURL:
        - >-
          https://db-01-quickstart-dash.ace-test:3443/v1/directories/Customer_API?fbcea793-8eab-435f-8ba7-b97ee92cc0e4
      configurations:
        - customer-api-salesforce-acct
      version: '13.0'
      replicas: 1
    Example 2 (Red Hat OpenShift or Kubernetes):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: customer-api
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: AppConnectEnterpriseProduction
      template:
        spec:
          containers:
            - name: runtime
              resources:
                requests:
                  cpu: 300m
                  memory: 368Mi
      logFormat: basic
      barURL:
        - >-
          https://db-01-quickstart-dash.ace-test:3443/v1/directories/Customer_API?fbcea793-8eab-435f-8ba7-b97ee92cc0e4
      configurations:
        - customer-api-salesforce-acct
      version: '13.0'
      replicas: 1

    The following examples (Example 3 and Example 4) show an integration runtime CR with settings to enable OpenTelemetry tracing on a Toolkit integration, in an environment where Instana is configured as the Observability agent.

    Example 3 (Red Hat OpenShift):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: ir-01-quickstart-otel
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: CloudPakForIntegrationNonProduction
      telemetry:
        tracing:
          openTelemetry:
            enabled: true
      template:
        spec:
          containers:
            - resources:
                requests:
                  cpu: 300m
                  memory: 368Mi
              name: runtime
      logFormat: basic
      barURL:
        - >-
          https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?52370379-f412-463e-89bb-03bb56cd03b5
      configurations:
        - my-odbc
        - my-setdbp
      version: '13.0'
      replicas: 1
    Example 4 (Red Hat OpenShift or Kubernetes):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: ir-01-quickstart-otel
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: AppConnectEnterpriseProduction
      telemetry:
        tracing:
          openTelemetry:
            enabled: true
      template:
        spec:
          containers:
            - resources:
                requests:
                  cpu: 300m
                  memory: 368Mi
              name: runtime
      logFormat: basic
      barURL:
        - >-
          https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?52370379-f412-463e-89bb-03bb56cd03b5
      configurations:
        - my-odbc
        - my-setdbp
      version: '13.0'
      replicas: 1

    The following examples (Example 5 and Example 6) show an integration runtime CR with settings to enable the serverless deployment of a BAR file that contains an API flow that was exported from a Designer instance. The BAR file is stored in a GitHub repository, so a configuration object of type BarAuth, which contains credentials for connecting to GitHub, is required. Another configuration object of type Accounts, which contains account details for connecting to the applications that are referenced in the Designer flow, is required.

    Example 5 (Red Hat OpenShift):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: test-apiflow-serverless
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: CloudPakForIntegrationNonProductionFREE
      logFormat: json
      barURL:
        - >-
          https://raw.github.ibm.com/somedir/main/bars/Customer_API.bar
      configurations:
        - customer-api-salesforce-acct
        - barauth-config
      serverless:
        knativeService:
          enabled: true
      version: '13.0'
      replicas: 1
    Example 6 (Red Hat OpenShift or Kubernetes):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: test-apiflow-serverless
      namespace: ace-test
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: AppConnectEnterpriseNonProductionFREE
      logFormat: json
      barURL:
        - >-
          https://raw.github.ibm.com/somedir/main/bars/Customer_API.bar
      configurations:
        - customer-api-salesforce-acct
        - barauth-config
      serverless:
        knativeService:
          enabled: true
      version: '13.0'
      replicas: 1

    To see an example of an integration runtime CR with settings to enable ingress for an integration runtime in an IBM Cloud Kubernetes Service cluster, see Automatically creating ingress definitions for IBM App Connect instances in an IBM Cloud Kubernetes Service cluster.

    • To view the full set of parameters and values that you can specify, see Custom resource values.
    • For licensing information, see Licensing reference for IBM App Connect Operator.
    • Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify one or more BAR files in an external endpoint, as shown in the following examples. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration runtime.
      spec:
        barURL:
          - >-
            https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?873fe600-9ac6-4096-c00f-55e361fec2e5
      spec:
        barURL:
          - 'https://artifactory.com/myrepo/getHostAPI.bar'
          - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
    • You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
      spec:
        configurations:
          - odbc-ini-data

      or

      spec:
        configurations:
          - odbc-ini-data
          - accountsdata

      or

      spec:
        configurations: ["odbc-ini-data", "accountsdata"]

      For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.

      Note:

      If this integration runtime contains a callable flow, you must configure the integration runtime to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration runtime as follows:

      1. From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
        Red Hat OpenShift:
        oc get switchserver switchName
        Kubernetes:
        kubectl get switchserver switchName
      2. Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
        NAME      RESOLVEDVERSION   CUSTOMIMAGES   STATUS   AGENTCONFIGURATIONNAME   AGE
        default   13.0.6.0-r1       false          Ready    default-agentx           1h
      3. Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
        configurations:
          - default-agentx

      A configuration object of type REST Admin SSL files (or adminssl) is created and applied by default to the integration runtime to provide self-signed TLS certificates for secure communication between the App Connect Dashboard and the runtime. This configuration object is created from a predefined ZIP archive, which contains a set of PEM files named ca.crt.pem, tls.crt.pem, and tls.key.pem. A secret is also auto generated to store the Base64-encoded content of this ZIP file. When you deploy the integration runtime, the configuration name is stored in spec.configurations as integrationRuntimeName-ir-adminssl, where integrationRuntimeName is the metadata.name value for the integration runtime. For more information about this configuration type, see REST Admin SSL files type.

    • If you are deploying an integration runtime for a Toolkit integration and want to configure OpenTelemetry tracing for all message flows, you can use the spec.telemetry.tracing.openTelemetry.* parameters to enable OpenTelemetry tracing and configure your preferred settings. An example of the standard settings is as follows:
      spec:
        telemetry:
          tracing:
            openTelemetry:
              tls:
                secretName: mycert-secret
                caCertificate: ca.crt
              enabled: true
              protocol: grpc
              endpoint: 'status.hostIP:4317'
      Note:
      • If you are using Instana as your Observability agent, setting spec.telemetry.tracing.openTelemetry.enabled to true is typically the only configuration needed for OpenTelemetry tracing, and you do not need to configure any other settings.
      • In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can use spec.telemetry.tracing.openTelemetry.endpoint to specify a different IP address and port for the agent (on a different physical worker node).
      • You can configure additional OpenTelemetry properties by using a server.conf.yaml file, which contains an additional set of OpenTelemetry properties. To configure these additional properties, use the server.conf.yaml file to create a configuration object of type server.conf.yaml that can be applied to the integration runtime. For more information, see Configuring additional OpenTelemetry properties by using the server.conf.yaml file.
    • If you want to create a Knative serverless deployment for an integration runtime that contains a Designer API flow on Red Hat OpenShift, you can enable Knative Serving by setting the spec.serverless.knativeService.enabled parameter to true. For example:
      spec:
        serverless:
          knativeService:
            enabled: true

      Knative Serving must be installed and configured in your cluster. For more information, see Configuring Knative serverless support on Red Hat OpenShift and Installing and configuring Knative Serving in your Red Hat OpenShift cluster.

    • Use the spec.desiredRunState parameter to specify whether to stop or start the integration runtime. The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods.
    • Use the spec.template.spec.topologySpreadConstraint[].* parameters to add one or more topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains.
    • Use the spec.ingress.enabled parameter to indicate whether to automatically create ingress resources for your deployed integration runtime. Use the spec.ingress.domain parameter to specify a custom subdomain to include in the ingress routes that are generated to expose your IBM App Connect instances to external traffic. These parameters are applicable only for an IBM Cloud Kubernetes Service environment.
    • Use the spec.nodePortService.ports[].* parameters to specify an array of custom ports for a dedicated NodePort service that is created for accessing the set of pods. If you need to expose more than one port for the NodePort service, you can configure multiple port definitions as an array.

      When you complete the spec.nodePortService.ports[].* parameters, the IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for access. The NodePort service is created in the same namespace as the integration runtime and is named in the format integrationRuntimeName-np-ir.

      For more information about completing these parameters, see Custom resource values.

    • Use the spec.java.version parameter to specify which Java Runtime Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime.

    You can also choose to define the configurations that you want to apply to the integration runtime within the same YAML file that contains the integration runtime configuration.

    If preferred, you can define multiple configurations and integration runtimes within the same YAML file. Each definition can be separated with three hyphens (---) as shown in the following example. The configurations and integration runtimes will be created independently, but any configurations that you specify for an integration runtime will be applied during deployment. (In the following example, settings are defined for a new configuration and an integration runtime. The integration runtime's spec.configurations setting references the new configuration and an existing configuration that should be applied during deployment.)

    apiVersion: appconnect.ibm.com/v1beta1
    kind: Configuration
    metadata:
      name: setdbp-conf
      namespace: mynamespace
    spec:
      data: ABCDefghIJLOMNehorewirpewpTEV843BCDefghIJLOMNorewirIJLOMNeh842lkalkkrmwo4tkjlfgBCDefghIJLOMNehhIJLOM
      type: setdbparms
    ---
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: customerapi
      namespace: mynamespace
    spec:
      license:
        accept: true
        license: L-CKFT-S6CHZW
        use: CloudPakForIntegrationNonProduction
      template:
        spec:
          containers:
            - name: runtime
              resources:
                requests:
                  cpu: 300m
                  memory: 368Mi
      logFormat: basic
      barURL:
        - >-
          https://db-01-quickstart-dash.mynamespace:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
      configurations: ["setdbp-conf", "my-accounts"]
      version: '13.0'
      replicas: 2
  2. Save this file with a .yaml extension; for example, customerapi_cr.yaml.
  3. From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
  4. Run the following command to create the integration runtime and apply any defined configurations. (Use the name of the .yaml file that you created.)
    Red Hat OpenShift:
    oc apply -f customerapi_cr.yaml
    Kubernetes:
    kubectl apply -f customerapi_cr.yaml
  5. Run the following command to check the status of the integration runtime and verify that it is running:
    Red Hat OpenShift:
    oc get integrationruntimes -n namespace
    Kubernetes:
    kubectl get integrationruntimes -n namespace

    You should also be able to view this integration runtime in your App Connect Dashboard instance.

    Note: If you are working in a Kubernetes environment other than IBM Cloud Kubernetes Service, ensure that you create an ingress definition after you create this instance, to make its internal service publicly available. For more information, see Manually creating ingress definitions for external access to your IBM App Connect instances in Kubernetes environments.

    If you are using IBM Cloud Kubernetes Service, you can use the spec.ingress.enabled parameter to enable ingress for this instance to automatically create the required ingress resources. For more information, see Automatically creating ingress definitions for external access to your IBM App Connect instances on IBM Cloud Kubernetes Service.

Updating the custom resource settings for an instance

If you want to change the settings of an existing integration runtime, you can edit its custom resource settings by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. For example, you might want to adjust CPU or memory requests or limits for use within the containers in the deployment.

Restriction: You cannot update standard settings such as the resource kind (kind) or the name and namespace (metadata.name and metadata.namespace), some system-generated settings, or settings such as the storage type of certain components. An error message is displayed when you try to save.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).


Updating an instance from the Red Hat OpenShift web console

To update an integration runtime by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Runtime tab.
  6. Locate and click the name of the integration runtime that you want to update.
  7. Click the YAML tab.
  8. Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  9. Click Save to save your changes.

Updating an instance from the Red Hat OpenShift or Kubernetes CLI

To update an integration runtime from the Red Hat OpenShift or Kubernetes CLI, complete the following steps.

  1. From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
  2. From the namespace where the integration runtime is deployed, run the oc edit or kubectl edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
    OpenShift-only content
    oc edit integrationruntime instanceName
    Kubernetes-only content
    kubectl edit integrationruntime instanceName

    The integration runtime CR automatically opens in the default text editor for your operating system.

  3. Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  4. Save the YAML definition and close the text editor to apply the changes.
Tip:

If preferred, you can also use the oc patch or kubectl patch command to apply a patch with some bash shell features, or use oc apply or kubectl apply with the appropriate YAML settings.

For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch or kubectl patch as follows to update the settings for an instance:

OpenShift-only content
oc patch integrationruntime instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
Kubernetes-only content
kubectl patch integrationruntime instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
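
For example, the following illustrative updatesettings.yaml increases the replica count and adjusts the CPU limit for the runtime container. (The values shown are examples only, not recommendations.)

spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: runtime
          resources:
            limits:
              cpu: '1'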

Deleting an instance

If no longer required, you can delete an integration runtime by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Deleting an instance from the Red Hat OpenShift web console

To delete an integration runtime by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Integration Runtime tab.
  6. Locate the instance that you want to delete.
  7. Click the options icon to open the options menu, and then click Delete.
  8. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift or Kubernetes CLI

To delete an integration runtime by using the Red Hat OpenShift or Kubernetes CLI, complete the following steps.

  1. From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
  2. From the namespace where the integration runtime instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
    OpenShift-only content
    oc delete integrationruntime instanceName
    Kubernetes-only content
    kubectl delete integrationruntime instanceName

Custom resource values

The following table lists the configurable parameters and default values for the custom resource.

In the parameter names, the notation [] denotes an array. For example, the notation spec.service.ports[].fieldName indicates that an array of custom ports can be specified for a service. When these parameters are used together with spec.service.type, you can specify multiple port definitions as shown in the following example:
spec:
  service:
    ports:
      - protocol: TCP
        name: config-abc
        nodePort: 32000
        port: 9910
        targetPort: 9920
      - protocol: SCTP
        name: config-xyz
        nodePort: 31500
        port: 9376
        targetPort: 9999
    type: NodePort
Parameter Description Default

apiVersion

The API version that identifies which schema is used for this integration runtime.

appconnect.ibm.com/v1beta1

kind

The resource type.

IntegrationRuntime

metadata.name

A unique short name by which the integration runtime can be identified.

metadata.namespace

The namespace (project) in which the integration runtime is deployed.

The namespace in which you create an instance or object must be no more than 40 characters in length.

spec.barURL

Identifies the location of one or more BAR files that can be deployed to the integration runtime. Can be either of these values:

  • The URL of the location where a (single) BAR file is stored in the content server of the App Connect Dashboard; for example:

    https://mydashboardname-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?9b7aa053-656d-4a30-a31c-123a45f8ebfd

    This URL is generated when a BAR file is uploaded to the App Connect Dashboard while deploying an integration runtime or imported from the BAR files page. If you want to use a previously uploaded (or imported) BAR file to deploy an integration runtime, you can display the URL in the Dashboard as described in Managing BAR files, and then set this URL as the value of spec.barURL.

  • One or more co-related BAR files in an external repository such as Artifactory. Specify the URL to each file including the file name; for example:

    https://artifactory.com/myrepo/getHostnameAPI.bar

    https://artifactory.com/myrepo/CustomerDatabaseV1.bar

    Tip: If you are using GitHub as an external repository, you must specify the raw URL.

    The BAR files that you specify must all be accessible by using the same authentication credentials; for example, basic authentication or no authentication. You must define these credentials by creating a configuration of type BarAuth and then adding the configuration name to spec.configurations. For information about the supported authentication credentials and details of how to create this configuration object, see BarAuth type (or BarAuth type if using the Dashboard) and Configuration reference.

If you want to use a custom server runtime image to deploy an integration runtime, use spec.template.spec.containers[].image to specify this image.
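
For example, a minimal sketch that deploys a BAR file from an external repository and references a BarAuth configuration for the repository credentials. (The configuration name barauth-credentials is hypothetical.)

spec:
  barURL: https://artifactory.com/myrepo/CustomerDatabaseV1.bar
  configurations:
    - barauth-credentials   # hypothetical BarAuth configuration name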

spec.configurations[]

An array of existing configurations that you want to apply to one or more BAR files being deployed. These configurations must be in the same namespace as the integration runtime. To specify a configuration, use the metadata.name value that was specified while creating that configuration.

For information about creating configurations, see Configuration reference. To see examples of how to specify one or more values for spec.configurations, see Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift or Kubernetes CLI.
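
For example, a minimal sketch that applies the two configurations shown in the earlier CLI example:

spec:
  configurations:
    - setdbp-conf
    - my-accounts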

spec.dashboardUsers.bypassGenerate

Indicates whether to bypass the generation of users when not using the App Connect Dashboard.

Valid values are true and false.

  • Set this value to false if you want to use the Dashboard to deploy BAR files from the content server or an external repository, and want to ensure that web users (user IDs) are created on the integration runtime to allow the Dashboard to connect to it with the appropriate permissions. (Automatically creates web users with admin permissions, read-only permissions, and no access, and generates a secret named integrationRuntimeName-ir to store data about these web users.)
  • Set this value to true if you want to bypass the generation of users because you intend to deploy BAR files from an external repository without using the Dashboard.
false

spec.defaultAppName

A name for the default application for the deployment of independent resources.

DefaultApplication

spec.defaultNetworkPolicy.enabled

(Only applicable if spec.version resolves to 12.0.7.0-r2 or later)

Indicate whether to enable the creation of a default network policy that restricts traffic to the integration runtime pods.

Valid values are true and false.

  • Defaults to true, which automatically creates a default network policy.
  • Set this value to false to disable the creation of a default network policy so that you can create a custom network policy that defines which connections are allowed.

For more information, see About network policy in the Red Hat OpenShift documentation, or Network Policies in the Kubernetes documentation.

true

spec.desiredRunState

(Only applicable if spec.version resolves to 13.0.2.2-r1 or later)

Specify whether to stop or start the integration runtime. The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods.

Valid values are as follows:
  • stopped: Stops the integration runtime if running, scales down its replica pods to zero (0), and changes the state to Stopped. (The original number of replicas is retained in spec.replicas.)

    You can choose to create your integration runtime in a Stopped state if preferred, or you can edit the settings of your running integration runtime to stop it.

  • running: Starts the integration runtime when in a Stopped state, starts the original number of replica pods (as defined in spec.replicas), and changes the state to Ready.

If you omit this parameter or leave its value blank, a default value of running is assumed.

You can also stop or start the integration runtime from its tile on the Runtimes page in your App Connect Dashboard instance. For more information, see Stopping and starting a deployed integration runtime.
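
For example, a minimal sketch of the CR setting that stops a running integration runtime:

spec:
  desiredRunState: stopped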

running

spec.flowType.designerAPIFlow

Indicate whether to enable the runtime for API flows that are authored in App Connect Designer.

Valid values are true and false.

Note:
  • If you are deploying one or more BAR files that are stored in the content server or in an external repository, you do not need to enable this setting because the BAR files will be automatically inspected to identify the flow types so that the appropriate containers can be created during deployment.
  • If you want to deploy a custom server runtime image that contains all the configuration for the integration runtime, including all the BAR files or applications that are required, you must enable one or more of the spec.flowType.* settings based on the type of flows that are built in the image. For example, if the BAR files in the image contain event-driven flows and Toolkit flows, you must enable both spec.flowType.designerEventFlow and spec.flowType.toolkitFlow. Or if the BAR files contain only API flows, enable only spec.flowType.designerAPIFlow.
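
For example, a minimal sketch for a custom server runtime image whose BAR files contain only Designer API flows:

spec:
  flowType:
    designerAPIFlow: true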

false

spec.flowType.designerEventFlow

Indicate whether to enable the runtime for event-driven flows that are authored in App Connect Designer.

Valid values are true and false.

Note:
  • If you are deploying one or more BAR files that are stored in the content server or in an external repository, you do not need to enable this setting because the BAR files will be automatically inspected to identify the flow types so that the appropriate containers can be created during deployment.
  • If you want to deploy a custom server runtime image that contains all the configuration for the integration runtime, including all the BAR files or applications that are required, you must enable one or more of the spec.flowType.* settings based on the type of flows that are built in the image. For example, if the BAR files in the image contain event-driven flows and Toolkit flows, you must enable both spec.flowType.designerEventFlow and spec.flowType.toolkitFlow. Or if the BAR files contain only API flows, enable only spec.flowType.designerAPIFlow.

false

spec.flowType.toolkitFlow

Indicate whether to enable the runtime for flows that are built in the IBM App Connect Enterprise Toolkit.

Valid values are true and false.

Note:
  • If you are deploying one or more BAR files that are stored in the content server or in an external repository, you do not need to enable this setting because the BAR files will be automatically inspected to identify the flow types so that the appropriate containers can be created during deployment.
  • If you want to deploy a custom server runtime image that contains all the configuration for the integration runtime, including all the BAR files or applications that are required, you must enable one or more of the spec.flowType.* settings based on the type of flows that are built in the image. For example, if the BAR files in the image contain event-driven flows and Toolkit flows, you must enable both spec.flowType.designerEventFlow and spec.flowType.toolkitFlow. Or if the BAR files contain only API flows, enable only spec.flowType.designerAPIFlow.

false

spec.forceFlowBasicAuth.enabled

Indicate whether to enable basic authentication on all HTTP input flows.

Valid values are true and false.

Tip: Applicable to 12.0.8.0-r2 or later: After you deploy the integration runtime, you can alternatively enable or disable basic authentication as follows:
  1. From the Runtimes page in the App Connect Dashboard, click the integration runtime tile to view additional details for the deployed integration.
  2. Click the Security tab.
  3. To enable basic authentication, set the Basic authentication switch to on. (This action is equivalent to using the spec.forceFlowBasicAuth.enabled field.)

    Generated credentials for a username, password, and HTTPS basic authentication header are displayed. (This automated action is equivalent to using the spec.forceFlowBasicAuth.secretName field.) You can also regenerate the credentials.

  4. To disable basic authentication, set the Basic authentication switch to off.

For more information, see Creating an integration runtime to run your BAR file resources.

false

spec.forceFlowBasicAuth.secretName

Specify the name of a manually created secret that stores the basic authentication credentials (that is, a username and password) to use on all HTTP input flows. Alternatively, omit this setting to use an automatically generated secret. This secret is required if spec.forceFlowBasicAuth.enabled is set to true.

If you do not specify a value, a secret named integrationRuntimeName-ingress-basic-auth is automatically generated, where integrationRuntimeName is the metadata.name value of the integration runtime.

If you want to provide your own secret, you must create it in the namespace where the integration runtime will be deployed. You can do so from the Red Hat OpenShift web console, or from the Red Hat OpenShift or Kubernetes CLI.

  • You can use the following Secret (YAML) resource to create the secret from the web console (by using the Import YAML icon) or from the CLI (by running oc apply -f resourceBAuthFile.yaml or kubectl apply -f resourceBAuthFile.yaml).
    apiVersion: v1
    kind: Secret
    metadata:
      name: secretName
      namespace: namespaceName
    data:
      configuration: Base64-encoded_basicAuthCredentials
    type: Opaque

    To obtain the value of Base64-encoded_basicAuthCredentials, you can use (operating system-specific) commands such as base64, echo, or certutil to Base64-encode the username and password in a file or string. For example, run the following command on Linux to directly Base64-encode the username and password from the command line:

    echo -n 'local::basicAuthOverride basicAuth_username basicAuth_password' | base64
  • Alternatively, create the secret by running the following command from the required namespace:
    OpenShift-only content
    oc create secret generic secretName --from-literal=configuration=Base64-encoded_basicAuthCredentials
    Kubernetes-only content
    kubectl create secret generic secretName --from-literal=configuration=Base64-encoded_basicAuthCredentials
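
After the secret exists, a minimal sketch of the CR settings that enable basic authentication and reference the secret. (The secret name my-basic-auth-secret is hypothetical; omit spec.forceFlowBasicAuth.secretName to use the automatically generated secret.)

spec:
  forceFlowBasicAuth:
    enabled: true
    secretName: my-basic-auth-secret   # hypothetical secret name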

spec.forceFlowsHTTPS.enabled

Indicate whether to force all HTTP Input nodes and SOAP Input nodes in all deployed flows (including their usage for inbound connections to applications, REST APIs, and integration services) in the integration runtime to use Transport Layer Security (TLS).

Valid values are true and false.

When spec.forceFlowsHTTPS.enabled is set to true, you must also ensure that spec.restApiHTTPS.enabled is set to true.

false

spec.forceFlowsHTTPS.secretName

Specify the name of a secret that stores a user-supplied public certificate/private key pair to use for enforcing TLS. (You can use tools such as keytool or OpenSSL to generate the certificate and key if required, but do not need to apply password protection.)

This secret is required if spec.forceFlowsHTTPS.enabled is set to true.

You must create the secret in the namespace where the integration runtime will be deployed, and can do so from the Red Hat OpenShift web console, or from the Red Hat OpenShift or Kubernetes CLI. Use your preferred method to create the secret. For example, you can use the following Secret (YAML) resource to create the secret from the web console (by using the Import YAML icon) or from the CLI (by running oc apply -f resourceFile.yaml or kubectl apply -f resourceFile.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: secretName
  namespace: namespaceName
data:
  tls.crt: "base64Encoded_crt_publicCertificate"
  tls.key: "base64Encoded_key_privateKey"
type: kubernetes.io/tls
Or you can create the secret by running the following command from the required namespace:
OpenShift-only content
oc create secret tls secretName --key filename.key --cert filename.crt
Kubernetes-only content
kubectl create secret tls secretName --key filename.key --cert filename.crt
Note:

When you create the integration runtime, the IBM App Connect Operator checks for the certificate and key in the secret and adds them to a generated keystore that is protected with a password. The endpoint of the deployed integration is then secured with this certificate and key. If the secret can't be found in the namespace, the integration runtime will fail after 10 minutes.

If you need to update the certificate and key that are stored in the secret, you can edit the Secret resource to update the tls.crt and tls.key values. When you save, the keystore is regenerated and used by the integration runtime without the need for a restart.
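
For example, a minimal sketch of the CR settings that enforce TLS by using a previously created secret, with spec.restApiHTTPS.enabled also set to true as required. (The secret name my-tls-secret is hypothetical.)

spec:
  forceFlowsHTTPS:
    enabled: true
    secretName: my-tls-secret   # hypothetical secret name
  restApiHTTPS:
    enabled: true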

spec.ingress.enabled

(Only applicable in an IBM Cloud Kubernetes Service environment and if spec.version resolves to 13.0.2.1-r1 or later)

Indicate whether to automatically create ingress resources for your deployed integration runtime. In Kubernetes environments, ingress resources are required to expose your integration runtimes to external traffic.

Valid values are true and false.

false

spec.ingress.domain

(Only applicable in an IBM Cloud Kubernetes Service environment and if spec.version resolves to 13.0.2.1-r1 or later)

If you do not want the ingress routes for the integration runtime to be constructed with the IBM-provided ingress subdomain of your IBM Cloud Kubernetes Service cluster, specify a preferred custom subdomain that is created in the cluster.

This parameter is applicable only if spec.ingress.enabled is set to true.
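
For example, a minimal sketch that enables ingress and specifies a custom subdomain. (The domain value is hypothetical.)

spec:
  ingress:
    enabled: true
    domain: mysubdomain.example.com   # hypothetical custom subdomain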

spec.java.version

(Only applicable if spec.version resolves to 13.0.5.0-r1 or later)

Specify which Java Runtime Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime.

Limitations apply to some message flow nodes and capabilities, and to some types of security configuration when certain JRE versions are used with IBM App Connect Enterprise. Use this parameter to specify a Java version that is compatible with the content of the Toolkit integration that is deployed as an integration runtime. Applications that are deployed to an integration runtime might fail to start if the Java version is incompatible.

Valid values are 8 or 17. If you do not specify a value, the default value is used.

Example:

spec:
  java:
    version: '8'

17

spec.license.accept

An indication of whether the license should be accepted.

Valid values are true and false. To install, this value must be set to true.

false

spec.license.license

See Licensing reference for IBM App Connect Operator for the valid values.

spec.license.use

See Licensing reference for IBM App Connect Operator for the valid values.

For more information about specifying this value on Kubernetes, see Using Cloud Pak for Integration licenses with integration runtimes in a Kubernetes environment.

spec.logFormat

The format used for the container logs that are output to the container's console.

Valid values are basic and json.

basic

spec.metrics.disabled

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Indicate whether to disable the generation of message flow statistics, accounting data, and resource statistics for the deployed integration.

Valid values are true and false.

  • Set this value to true to stop the metrics from being generated by default for the deployed integration.
  • Set this value to false to enable the generation of these metrics for the deployed integration in the Dashboard UI. When enabled, you can view the metrics at a pod-specific level as described in Viewing detailed information about a deployed integration runtime.

If you are working in a Kubernetes environment, ensure that spec.metrics.disabled is set to true because metrics are not provided out of the box in Kubernetes environments.

false

spec.nodePortService.ports[].name

(Only applicable if spec.version resolves to 13.0.3.0-r1 or later)

The name of a port definition on a dedicated NodePort service that is created to expose the integration runtime pods to external (as well as internal) traffic. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character.

If you need to expose more than one port for the NodePort service, you can use the collection of spec.nodePortService.ports[].* parameters to configure multiple port definitions as an array.

When you specify the set of spec.nodePortService.ports[].* parameters, the IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for access. The NodePort service is created in the same namespace as the integration runtime and is named in the format integrationRuntimeName-np-ir. For example, if the metadata.name value for the integration runtime is ir-toolkit, the NodePort service is named ir-toolkit-np-ir.

The following example shows the set of fields in a port definition:

spec:
  nodePortService:
    ports:
      - name: port-tcp
        protocol: TCP
        port: 9910
        nodePort: 32000
        targetPort: 9910
Note: The spec.nodePortService.ports[].* parameters are an enhancement to using these options to configure port definitions:
  • Using the spec.service.ports[].* parameters to modify the default integrationRuntimeName-ir service to expose the pods to external traffic
  • Manually creating and managing a separate Kubernetes NodePort service

This enhancement simplifies the process of managing external access to TCP/IP Server nodes in Toolkit flows because only the specified TCP/IP ports are exposed externally. Internal ports are not affected and no services need to be manually managed.

Applicable for integration runtimes at version 13.0.3.1-r1 or later, running on App Connect Operator 12.12.0 or later:
  • When you use the spec.nodePortService.ports[].* parameters to specify port definitions and automatically create a dedicated NodePort service, the Operator also automatically updates the relevant network policies to allow traffic to the newly exposed ports.
  • If you remove your existing spec.nodePortService.ports[].* parameters from the integration runtime CR, the Operator automatically deletes the dedicated NodePort service that it created when you first added these parameters to the CR. This eliminates the need for any manual cleanup.

spec.nodePortService.ports[].nodePort

(Only applicable if spec.version resolves to 13.0.3.0-r1 or later)

The port on which each node in the cluster listens for incoming requests for the NodePort service. The incoming traffic is routed to the corresponding pods.

The port number must be in the range 30000 to 32767.

Ensure that this port is not being used by another service. You can check which node ports are already in use by running the following command and then checking under the PORT(S) column in the output:

OpenShift-only content
oc get svc -n namespaceName
Kubernetes-only content
kubectl get svc -n namespaceName

You can either specify a specific port number, or omit spec.nodePortService.ports[].nodePort to automatically assign an available port in the 30000-32767 range. To find the port that is automatically assigned, run the oc describe service or kubectl describe service command (as shown in the following example), or run the oc get service or kubectl get service command, where integrationRuntimeName-np-ir is the name of the NodePort service.

OpenShift-only content
oc describe service integrationRuntimeName-np-ir
Kubernetes-only content
kubectl describe service integrationRuntimeName-np-ir
Applicable for integration runtimes at version 13.0.3.1-r1 or later, running on App Connect Operator 12.12.0 or later:

If you choose to automatically assign a port number (by omitting the spec.nodePortService.ports[].nodePort setting), the Operator automatically updates the integration runtime CR with a status.nodePortService.ports.nodePort setting that exposes the assigned port number. For example:

status:
  nodePortService:
    ports:
      - name: tcpip-server
        protocol: TCP
        port: 9910
        nodePort: 30001

spec.nodePortService.ports[].port

(Only applicable if spec.version resolves to 13.0.3.0-r1 or later)

The port that exposes the NodePort service to pods within the cluster.

spec.nodePortService.ports[].protocol

(Only applicable if spec.version resolves to 13.0.3.0-r1 or later)

The protocol of the port. Valid values are TCP, SCTP, and UDP.

spec.nodePortService.ports[].targetPort

(Only applicable if spec.version resolves to 13.0.3.0-r1 or later)

The port on which the pods listen for connections from the NodePort service.

The port number must be in the range 1 to 65535.

spec.replicas

The number of replica pods to run for each deployment.

Increasing the number of replicas will proportionally increase the resource requirements.

1

spec.restApiHTTPS.enabled

Indicate whether to enable HTTPS-based REST API flows, which in turn ensures that the correct endpoint is configured for the deployed integration.

Valid values are true and false.

  • Set the value to true to indicate that you are using HTTPS-based REST API flows (with TLS configured). You must have configured all HTTP Input nodes and SOAP Input nodes in all flows in the integration to use TLS either by setting spec.forceFlowsHTTPS.enabled to true or by using mechanisms such as the server.conf.yaml file while developing the flows in IBM App Connect Enterprise. When set to true (with the prerequisite configuration), the endpoint for the deployed integration is secured with https. The true setting uses port 7843 by default.
  • Set the value to false if you are not using HTTPS-based REST API flows. When set to false, the endpoint is configured as http. The false setting uses port 7800 by default.

If you are using a Kubernetes environment other than IBM Cloud Kubernetes Service, this setting is ignored. Instead, the protocol that is defined in the ingress definition for this integration runtime, which you will need to create later, will be used. For more information, see Manually creating ingress definitions for external access to your IBM App Connect instances in Kubernetes environments. In an IBM Cloud Kubernetes Service cluster, this setting is applicable only for a 13.0.2.1-r1 or later integration runtime if spec.ingress.enabled is set to true to automatically create ingress resources for the deployed integration runtime. For more information, see Automatically creating ingress definitions for external access to your IBM App Connect instances on IBM Cloud Kubernetes Service.

false

spec.routes.disabled

OpenShift-only content Indicate whether to disable the automatic creation of HTTP and HTTPS OpenShift routes, which externally expose a service that identifies the set of integration runtime pods.

Valid values are true and false.

  • Set the value to false (the default) to enable the automatic creation of routes.
  • Set the value to true to disable the automatic creation of routes.

false

spec.routes.domain

OpenShift-only content An alias or DNS that can optionally be used to point to the endpoint for HTTPS flows in the integration runtime. This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the HTTPS route.

The path is constructed as follows, where CRname is the metadata.name value, CRnamespace is the metadata.namespace value, and specRoutesDomainValue is the spec.routes.domain value:

CRname-https-CRnamespace.specRoutesDomainValue
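
For example, assuming hypothetical values of customerapi for metadata.name, mynamespace for metadata.namespace, and apps.example.com for spec.routes.domain, the host for the HTTPS route would be customerapi-https-mynamespace.apps.example.com.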

After a route is created, the spec.routes.domain setting cannot be changed.

Note:

On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts.

spec.routes.metadata.annotations

OpenShift-only content Specify one or more custom annotations (as arbitrary metadata) to apply to the routes that are created during deployment. Specify each annotation as a name/value pair in the format key: value. For example:

spec:
  routes:
    metadata:
      annotations:
        key1: value1
        key2: value2

spec.routes.timeout

OpenShift-only content The timeout value (in seconds) on the OpenShift routes.

spec.serverless.knativeService.enabled

Indicate whether to enable Knative service to provide support for serverless deployments.

Valid values are true and false. Set this value to true to enable a serverless Knative service deployment.

Restriction: When set to true, the following restrictions apply for the integration runtime:
  • Only the following licenses (set in spec.license.use) are valid: AppConnectEnterpriseNonProductionFREE and CloudPakForIntegrationNonProductionFREE.
  • Support for BAR files is restricted to those that contain only API flows from App Connect Designer. (BAR files that contain event-driven flows or Toolkit integrations are not supported.)
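
For example, a minimal sketch that enables a serverless Knative service deployment with one of the permitted licenses:

spec:
  license:
    use: AppConnectEnterpriseNonProductionFREE
  serverless:
    knativeService:
      enabled: true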

false

spec.serverless.knativeService.imagePullSecrets.name

The name of a secret that contains the credentials for pulling images from the registry where they are stored.

For more information, see Deploying images from a private container registry.

spec.serverless.knativeService.template.spec.containerConcurrency

The maximum number of concurrent requests that are allowed per container of the revision. The default of 0 (zero) indicates that concurrency to the application is not limited, and the system decides the target concurrency for the autoscaler.

0

spec.serverless.knativeService.template.spec.containers.image

The path for the image to run in this container, including the tag.

spec.serverless.knativeService.template.spec.containers.imagePullPolicy

Indicate whether you want images to be pulled every time, never, or only if they're not present.

Valid values are Always, Never, and IfNotPresent.

IfNotPresent

spec.serverless.knativeService.template.spec.containers.name

The name of the container to be configured.

spec.serverless.knativeService.template.spec.priority

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running.

spec.serverless.knativeService.template.spec.priority specifies an integer value, which various system components use to identify the priority of the (serverless) integration runtime pod. The higher the value, the higher the priority.

If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.serverless.knativeService.template.spec.priorityClassName setting to populate this field with a resolved value.

spec.serverless.knativeService.template.spec.priorityClassName

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

A priority class name that maps to the integer value of a pod priority in spec.serverless.knativeService.template.spec.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods.

Valid values are as follows:

  • Specify either of these built-in Kubernetes classes, which mark a pod as critical and indicate the highest priorities: system-node-critical or system-cluster-critical. system-node-critical is the highest available priority.
  • To specify any other priority class, create a PriorityClass object with that class name. For more information, see Pod Priority and Preemption in the Kubernetes documentation.

If you do not specify a class name, the priority is set as follows:

  • The class name of an identified PriorityClass object, which has its globalDefault field set to true, is assigned.
  • If there is no PriorityClass object with a globalDefault setting of true, the priority of the pod is set to 0 (zero).

spec.serverless.knativeService.template.spec.timeoutSeconds

The maximum duration in seconds that the request routing layer will wait for a request delivered to a container to begin replying (send network traffic). If unspecified, a system default will be provided.

spec.serverless.knativeService.template.spec.tolerations[].effect

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint or kubectl taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation.

If you need to specify one or more tolerations for a serverless integration runtime pod, you can use the collection of spec.serverless.knativeService.template.spec.tolerations[].* parameters to define an array.

For spec.serverless.knativeService.template.spec.tolerations[].effect, specify the taint effect that the toleration should match. (The taint effect on a node determines how that node reacts to a pod that is not configured with appropriate tolerations.) Leave the effect empty to match all taint effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute.
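
For example, a minimal sketch of a tolerations array for a serverless integration runtime pod. (The taint key and value are hypothetical.)

spec:
  serverless:
    knativeService:
      template:
        spec:
          tolerations:
            - key: dedicated        # hypothetical taint key
              operator: Equal
              value: appconnect     # hypothetical taint value
              effect: NoSchedule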

spec.serverless.knativeService.template.spec.tolerations[].key

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint key that the toleration applies to. Leave the key empty and set spec.serverless.knativeService.template.spec.tolerations[].operator to Exists to match all taint keys, values, and effects.

spec.serverless.knativeService.template.spec.tolerations[].operator

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify an operator that represents a key's relationship to the value in spec.serverless.knativeService.template.spec.tolerations[].value. Valid operators are Exists and Equal. Exists is equivalent to a wildcard for the toleration value, and indicates that the pod can tolerate all taints of a particular category.

Equal

spec.serverless.knativeService.template.spec.tolerations[].tolerationSeconds

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Optionally specify a period of time in seconds that determines how long the pod stays bound to a node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect (which indicates that a pod without the appropriate toleration should immediately be evicted from the node).

By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately).

spec.serverless.knativeService.template.spec.tolerations[].value

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint value that the toleration matches to. If the operator is Exists, leave this value empty.

spec.service.ports[].name

The name of a port definition on the service (defined by spec.service.type), which is created for accessing the set of pods. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character.

If you need to expose more than one port for the service, you can use the collection of spec.service.ports[].* parameters to configure multiple port definitions as an array.

spec.service.ports[].nodePort

The port on which each node listens for incoming requests for the service. Applicable when spec.service.type is set to NodePort to expose the service externally.

The port number must be in the range 30000 to 32767.

Ensure that this port is not being used by another service. You can check which node ports are already in use by running the following command and then checking under the PORT(S) column in the output:

OpenShift-only content
oc get svc -n namespaceName
Kubernetes-only content
kubectl get svc -n namespaceName

spec.service.ports[].port

The port that exposes the service to pods within the cluster.

spec.service.ports[].protocol

The protocol of the port. Valid values are TCP, SCTP, and UDP.

spec.service.ports[].targetPort

The port on which the pods will listen for connections from the service.

The port number must be in the range 1 to 65535.

spec.service.type

The type of service to create for accessing the set of pods:

  • ClusterIP: Specify this value to expose the service internally, for access by applications inside the cluster. This value is the default.
  • NodePort: Specify this value to expose the service at a static port (specified by using spec.service.ports[].nodePort), for external traffic. If you want to use NodePort, you must work with your cluster administrator to configure external access to the cluster. For more information, see Configuring ingress cluster traffic using a NodePort in the OpenShift documentation.

    The following example shows a custom port definition for external traffic:

    spec:
      service:
        ports:
          - name: config-abc
            nodePort: 32000
            port: 9910
            protocol: TCP
            targetPort: 9920
        type: NodePort
    Note:
    When you set the service type to NodePort, the existing default ports of 7800, 7843, and 7600 (which are used for the http and https transports, and the administration REST API) are automatically assigned NodePort values on the service. If you try to manually specify node ports for these default ports by adding a spec.service.ports[].* array in the integration runtime CR, an error is displayed. To set node ports for the default ports, set spec.service.type to NodePort and omit the spec.service.ports[].* section, as shown in the following example.
    spec:
      service:
        type: NodePort

    To identify the NodePort values that are automatically assigned to the default ports, you can run the oc describe service or kubectl describe service command, or the oc get service or kubectl get service command. In the following example, integrationRuntimeName-ir represents the service name, where integrationRuntimeName is the metadata.name value for the integration runtime.

    OpenShift-only content
    oc describe service integrationRuntimeName-ir
    Kubernetes-only content
    kubectl describe service integrationRuntimeName-ir
Tip: Instead of using the spec.service.ports[].* parameters to modify the default integrationRuntimeName-ir service to expose ports externally, you can use the spec.nodePortService.ports[].* parameters to set up port definitions for accessing the pods. When you specify spec.nodePortService.ports[].* settings, the IBM App Connect Operator automatically creates a dedicated NodePort service with your defined ports to expose the integration runtime pods to external traffic.

ClusterIP

spec.startupResources.*

(Only applicable if spec.version resolves to 13.0.5.1-r1 or later)

Specify CPU resources for the startup phase of the integration runtime to optimize container startup performance without increasing licensing costs. This feature allows you to allocate higher CPU resources during the container startup phase and then dynamically reduce them to lower values for the running (licensed) phase. For example, you might want to improve the initialization time for an integration runtime that contains a large number of flows. During startup, the IBM License Service does not count the container as running, so no VPC usage occurs until the pod is fully ready.

You can specify the following parameters:

  • spec.startupResources.limits.cpu: The upper limit of CPU cores that are allocated for the startup phase. Specify an integer, a fractional (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core).
  • spec.startupResources.requests.cpu: The minimum number of CPU cores that are allocated for the startup phase. Specify an integer, a fractional (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core).
Restriction:
  • This capability is available only in the following environments:
    • Kubernetes 1.33.0 or later

      The capability uses the Kubernetes in-place pod resize feature, which is enabled by default on most Kubernetes distributions, including the managed Kubernetes services of major cloud providers such as IBM Cloud, Microsoft Azure, and Amazon Web Services (AWS).

    • Red Hat OpenShift 4.20 or later

      The Kubernetes in-place pod resize feature is also enabled by default on these Red Hat OpenShift versions.

  • To resize CPU requests, you must also set CPU limits.
  • The requests and limits that you specify in spec.startupResources must follow the same pattern as the normal running values in spec.template.spec.containers[].resources.*: you cannot set different request and limit values in one and identical values in the other. Either keep the requests and limits the same in both, or keep them different in both.

The following example allocates 2000m CPU during container creation. After the App Connect processes are initialized, the IBM App Connect Operator automatically scales down the pod to the default or custom CPU values that are specified in spec.template.spec.containers[].resources.limits.cpu and spec.template.spec.containers[].resources.requests.cpu for the runtime container.

spec:
  startupResources:
    limits:
      cpu: 2000m
    requests:
      cpu: 2000m

For more information, see Dynamic CPU Allocation for Faster Startup in IntegrationRuntimes.

spec.strategy.type

The update strategy for replacing old pods on always-on deployments:

  • Recreate: Specify this value to terminate all of the existing pods before creating new ones.
  • RollingUpdate: Specify this value to allow your deployment's pod template to be updated with no downtime by incrementally replacing the pods with new ones. This value is the default.
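
For example, a minimal sketch that switches the update strategy so that all existing pods are terminated before new ones are created:

spec:
  strategy:
    type: Recreate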

RollingUpdate

spec.telemetry.tracing.openTelemetry.enabled

Indicate whether to enable OpenTelemetry tracing for all message flows in a Toolkit integration that you want to deploy to the integration runtime.

Valid values are true and false.

Note: If you are using Instana as your Observability agent or backend system, setting this value to true is typically the only configuration needed for OpenTelemetry tracing.

false

spec.telemetry.tracing.openTelemetry.endpoint

An endpoint of the OpenTelemetry agent that will receive the OpenTelemetry span data. The default is status.hostIP:4317, where status.hostIP represents the primary IP address of the node that the pod is assigned to.

  • For grpc, you can specify a preferred endpoint to export the OpenTelemetry span data to by using the format host_or_IPaddress:port; for example, 192.0.2.0:4317 or telemetryagent.host.com:4317.
  • For http/json, you can specify an HTTP/JSON URL to export the OpenTelemetry span data to; for example, http://hostname:port/v1/traces or http://localhost:4318.
Note:

In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This blank setting results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can specify a different IP address and port for the agent (on a different physical worker node) in spec.telemetry.tracing.openTelemetry.endpoint.

status.hostIP:4317

spec.telemetry.tracing.openTelemetry.protocol

An OpenTelemetry Protocol (OTLP) to use to send data to your Observability system:

  • grpc: Choose this option for protobuf-encoded data using gRPC Remote Procedure Call (gRPC) wire format over an HTTP/2 connection.
  • http/json: Choose this option for JSON-encoded data over an HTTP connection.

grpc

spec.telemetry.tracing.openTelemetry.timeout

Specify a connection timeout period in seconds.

10s

spec.telemetry.tracing.openTelemetry.tls.caCertificate

For secure communication, specify the name of the secret key that contains the certificate authority (CA) public certificate.

ca.crt

spec.telemetry.tracing.openTelemetry.tls.secretName

For secure communication, specify the name of the secret that contains the certificate authority (CA) public certificate.
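
For example, a minimal sketch that enables OpenTelemetry tracing and exports span data over gRPC to a specific agent endpoint with TLS. (The endpoint and secret name are hypothetical.)

spec:
  telemetry:
    tracing:
      openTelemetry:
        enabled: true
        protocol: grpc
        endpoint: 'otel-agent.example.com:4317'   # hypothetical agent endpoint
        tls:
          secretName: otel-ca-secret              # hypothetical secret name
          caCertificate: ca.crt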

spec.template.spec.affinity

Specify custom affinity settings that control the placement of pods on nodes.

The default settings allow the pod to be placed on a supported platform and, where possible, spread the replicas across hosts. The default settings are as follows. Note that the labelSelector entries are automatically generated.

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - s390x
                      - ppc64le
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    <copy of the pod labels>
                topologyKey: kubernetes.io/hostname

You can overwrite the default settings for spec.template.spec.affinity.nodeAffinity or spec.template.spec.affinity.podAntiAffinity with custom settings.

Note: The default affinity settings are available for integration runtimes at version 12.0.12.5-r1 or later.

Custom settings are supported for nodeAffinity, podAffinity, and podAntiAffinity.

For more information about spec.template.spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules in the OpenShift documentation, and Assigning Pods to Nodes in the Kubernetes documentation.

spec.template.spec.containers[].env

Define custom environment variables that you want to set for a specific App Connect container in the deployment by directly specifying a name and value for each environment variable. For example, you can set a container's timezone by declaring a TZ environment variable with a value that is set to a valid TZ identifier (such as Africa/Abidjan).

The spec.template.spec.containers[].env parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema. For any named container, you can set one or more environment variables in either of the following ways:

  • Add a name/value pair for each environment variable.
  • Specify an environment variable name and then inject the environment variable's value into the running container from one of these sources: a ConfigMap key, a pod-level field, a container resource (supports limits and requests), or the key of a secret in the pod's namespace. You can optionally indicate whether the ConfigMap or its key, or the secret, must be defined.

The following example specifies name/value pairs for two environment variables for the runtime container (and also includes default resource settings).

spec:
  template:
    spec:
      containers:
        - name: runtime
          env:
            - name: MY_CUSTOM_ENV_VAR
              value: 'true'
            - name: ANOTHER_ENV_VAR
              value: '100'
          resources:
            requests:
              cpu: 300m
              memory: 368Mi
          ...

The following example shows the fields that you can complete to specify a custom environment variable name for the runtime container and to inject an environment variable value from a ConfigMap key, a pod field, a container resource, or the key of a secret in the pod's namespace.

spec:
  template:
    spec:
      containers:
        - name: runtime
          env:
            - name: ENVIRONMENT_VARIABLE_NAME_01
              valueFrom:
                configMapKeyRef:
                  name: CONFIGMAP_NAME
                  key: CONFIGMAP_KEY
                  optional: true_OR_false
                fieldRef:
                  fieldPath: PATH_OF_FIELD_TO_SELECT_IN_THE_SPECIFIED_API_VERSION
                  apiVersion: VERSION_OF_SCHEMA_FOR_THE_fieldPath
                resourceFieldRef:
                  containerName: CONTAINER_NAME
                  resource: CONTAINER_RESOURCE_NAME
                secretKeyRef:
                  name: SECRET_NAME
                  key: VALID_SECRET_KEY
                  optional: true_OR_false

You can also set environment variables by using the spec.template.spec.containers[].envFrom parameter.

spec.template.spec.containers[].envFrom

(Only applicable if spec.version resolves to 12.0.12.4-r1 or later)

Define custom environment variables that you want to set for a specific App Connect container in the deployment by referencing one or more ConfigMaps or secrets. All of the key/value pairs in a referenced ConfigMap or secret are set as environment variables for the named container.

The spec.template.spec.containers[].envFrom parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema. For any named container, you can set one or more environment variables by specifying an array of ConfigMaps or secrets whose environment variables you want to inject into the container. You can optionally specify a string value to prepend to each key in a ConfigMap and also indicate whether the ConfigMap or secret must be defined.

The following example shows the fields that you can complete to set environment variables for the runtime container.

spec:
  template:
    spec:
      containers:
        - name: runtime
          envFrom:
            - configMapRef:
                name: CONFIGMAP_NAME
                optional: true_OR_false
              secretRef:
                name: SECRET_NAME
                optional: true_OR_false
              prefix: CONFIGMAP_KEY_PREFIX

You can also set environment variables by using the spec.template.spec.containers[].env parameter.

spec.template.spec.containers[].name

The name of a container that is created in the pod when the integration runtime is deployed, and which you want to configure. The name that you specify must be valid for the type of integration being deployed:

  • runtime

    The runtime container is deployed to provide runtime support for Toolkit integrations or Designer integrations.

  • designerflows

    The designerflows container is deployed to provide support for connectors in Designer integrations and for API flows in Designer integrations.

  • designereventflows
  • proxy

    The designereventflows container and accompanying proxy container are deployed to provide support for event-driven flows in Designer integrations.

For each container in the array, you can configure your preferred custom settings for the spec.template.spec.containers[].image, spec.template.spec.containers[].imagePullPolicy, spec.template.spec.containers[].livenessProbe.*, spec.template.spec.containers[].readinessProbe.*, spec.template.spec.containers[].resources.*, and spec.template.spec.containers[].startupProbe.* parameters.

spec.template.spec.containers[].image

The name of the custom server image to use; for example, image-registry.openshift-image-registry.svc:5000/imageName. This image must be built from the version that is specified as the apiVersion value. Channels are not supported when custom images are used.

spec.template.spec.containers[].imagePullPolicy

Indicate whether you want images to be pulled every time, never, or only if they're not present.

Valid values are Always, Never, and IfNotPresent.

IfNotPresent

spec.template.spec.containers[].lifecycle.postStart.exec.command[]

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

An array of (one or more) commands to execute immediately after a named container is created (or started).

The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status.

For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation.

spec.template.spec.containers[].lifecycle.postStart.httpGet.host

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the host name to connect to, to perform the HTTP request on the container pod immediately after it starts. Defaults to the pod IP. You can alternatively set Host in spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders.

spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify one or more custom headers to set in the HTTP request to be performed on the container pod immediately after it starts. For each header, specify a header field name and header field value in the following format:

spec:
  template:
    spec:
      containers:
        - name: runtime
          lifecycle:
            postStart:
              httpGet:
                httpHeaders:
                  - name: headerFieldName1
                    value: headerFieldValue1
                  - name: headerFieldName2
                    value: headerFieldValue2

spec.template.spec.containers[].lifecycle.postStart.httpGet.path

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the path to access on the HTTP server when performing the HTTP request on the container pod immediately after it starts.

spec.template.spec.containers[].lifecycle.postStart.httpGet.scheme

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the scheme to use for connecting to the host when performing the HTTP request on the runtime container pod immediately after it starts.

HTTP

spec.template.spec.containers[].lifecycle.preStop.exec.command[]

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

An array of (one or more) commands to execute inside a named container before its pod is terminated.

Use the spec.template.spec.containers[].lifecycle.preStop.* settings to configure the lifecycle of the container to allow existing transactions to complete before the pod is terminated due to an API request or a management event (such as failure of a liveness or startup probe, or preemption). This allows rolling updates to occur without breaking transactions (unless they are long running). The countdown for the pod's termination grace period begins before the preStop hook is executed.

The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell constructs ('|', and so on) will not work; to use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates that the command succeeded, and a non-zero value indicates failure.

For example, this default preStop setting ensures that rolling updates do not result in lost messages on the runtime:

spec:
  template:
    spec:
      containers:
        - lifecycle:
            preStop:
              exec:
                command:
                  - sh
                  - -c
                  - "sleep 5"
          name: runtime

For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation.

spec.template.spec.containers[].lifecycle.preStop.httpGet.host

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the host name to connect to when performing the HTTP request on the container pod before its termination. Defaults to the pod IP. You can alternatively set a Host header in spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders.

spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify one or more custom headers to set in the HTTP request to be performed on the container pod before its termination. For each header, specify a header field name and header field value in the following format:

spec:
  template:
    spec:
      containers:
        - lifecycle:
            preStop:
              httpGet:
                httpHeaders:
                  - name: headerFieldName1
                    value: headerFieldValue1
                  - name: headerFieldName2
                    value: headerFieldValue2
          name: runtime

spec.template.spec.containers[].lifecycle.preStop.httpGet.path

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the path to access on the HTTP server when performing the HTTP request on the container pod before its termination.

spec.template.spec.containers[].lifecycle.preStop.httpGet.scheme

(Only applicable if spec.version resolves to 12.0.7.0-r5 or later)

Specify the scheme to use for connecting to the host when performing the HTTP request on the container pod before its termination.

HTTP

spec.template.spec.containers[].livenessProbe.failureThreshold

The number of times the liveness probe can fail before taking an action to restart the container. (The liveness probe checks whether the container is still running or needs to be restarted.)

1

spec.template.spec.containers[].livenessProbe.initialDelaySeconds

How long to wait (in seconds) before performing the first probe to check whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360

spec.template.spec.containers[].livenessProbe.periodSeconds

How often (in seconds) to perform a liveness probe to check whether the container is still running.

10

spec.template.spec.containers[].livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.template.spec.containers[].readinessProbe.failureThreshold

The number of times the readiness probe can fail before taking an action to mark the pod as Unready. (The readiness probe checks whether the container is ready to accept traffic.)

1

spec.template.spec.containers[].readinessProbe.initialDelaySeconds

How long to wait (in seconds) before performing the first probe to check whether the container is ready.

10

spec.template.spec.containers[].readinessProbe.periodSeconds

How often (in seconds) to perform a readiness probe to check whether the container is ready.

5

spec.template.spec.containers[].readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

3
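
For example, the following sketch increases the liveness and readiness probe settings for a slow-starting integration. The values shown are illustrative only, not recommended settings:

spec:
  template:
    spec:
      containers:
        - livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 600
            periodSeconds: 10
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 5
            timeoutSeconds: 3
          name: runtime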

spec.template.spec.containers[].resources.limits.cpu

The upper limit of CPU cores that are allocated for running the container. Specify an integer, a fractional value (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core). For more information, see Resource Management for Pods and Containers in the Kubernetes documentation.

When you create an integration runtime, no CPU limits are set on the resource if spec.license.use is set to AppConnectEnterpriseNonProductionFREE or CloudPakForIntegrationNonProductionFREE. If required, either use spec.template.spec.containers[].resources.limits.cpu to set the CPU limits, or configure CPU limits for the namespace as described in Configure Memory and CPU Quotas for a Namespace in the Kubernetes documentation.

1

spec.template.spec.containers[].resources.limits.ephemeral-storage

The upper limit of local ephemeral storage (in bytes) that the container can consume in a pod. If a node fails, the data in ephemeral storage can be lost. Specify an integer or a fixed-point number with a suffix of E, P, T, G, M, k, or use the power-of-two equivalent Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Local ephemeral storage in the Kubernetes documentation.

100Mi

spec.template.spec.containers[].resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the container. Specify an integer with a suffix of E, P, T, G, M, k, or a power-of-two equivalent of Ei, Pi, Ti, Gi, Mi, Ki.

Applicable for the runtime container only:
  • For integration runtimes at version 13.0.5.1-r1 or earlier, if you set spec.template.spec.containers[].resources.requests.memory for the runtime container to a value that is higher than 1Gi, but set spec.template.spec.containers[].resources.limits.memory to a value that is lower than 1Gi, the integration runtime deployment fails. Ensure that the spec.template.spec.containers[].resources.limits.memory value is equal to or greater than the spec.template.spec.containers[].resources.requests.memory value.
  • For integration runtimes at version 13.0.5.2-r1 or later, if you set spec.template.spec.containers[].resources.requests.memory for the runtime container to a value that is higher than 1Gi, but set spec.template.spec.containers[].resources.limits.memory to a lower value, spec.template.spec.containers[].resources.limits.memory is automatically updated to the same value that you specified for spec.template.spec.containers[].resources.requests.memory.

1Gi

spec.template.spec.containers[].resources.requests.cpu

The minimum number of CPU cores that are allocated for running the container. Specify an integer, a fractional value (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core). For more information, see Resource Management for Pods and Containers in the Kubernetes documentation.

300m

spec.template.spec.containers[].resources.requests.ephemeral-storage

The minimum amount of local ephemeral storage (in bytes) that the container can consume in a pod. If a node fails, the data in ephemeral storage can be lost. Specify an integer or a fixed-point number with a suffix of E, P, T, G, M, k, or use the power-of-two equivalent Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Local ephemeral storage in the Kubernetes documentation.

50Mi

spec.template.spec.containers[].resources.requests.memory

The minimum memory (in bytes) that is allocated for running the container. Specify an integer with a suffix of E, P, T, G, M, k, or a power-of-two equivalent of Ei, Pi, Ti, Gi, Mi, Ki.

256Mi
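
For example, the following sketch sets both requests and limits for the runtime container. The values shown are illustrative and should be sized for your own workload; remember that the memory limit must be equal to or greater than the memory request:

spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              cpu: 300m
              memory: 512Mi
              ephemeral-storage: 50Mi
            limits:
              cpu: 1
              memory: 1Gi
              ephemeral-storage: 100Mi
          name: runtime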

spec.template.spec.containers[].startupProbe.failureThreshold

The number of times the startup probe can fail before taking action. (The startup probe checks whether the application in the container has started. Liveness and readiness checks are disabled until the startup probe has succeeded.)

Note: If using startup probes, ensure that spec.template.spec.containers[].livenessProbe.initialDelaySeconds and spec.template.spec.containers[].readinessProbe.initialDelaySeconds are unset.

For more information about startup probes, see Protect slow starting containers with startup probes in the Kubernetes documentation.

120

spec.template.spec.containers[].startupProbe.initialDelaySeconds

How long to wait (in seconds) before performing the first probe to check whether the runtime application has started. Increase this value if your system cannot start the application in the default time period.

0

spec.template.spec.containers[].startupProbe.periodSeconds

How often (in seconds) to perform a startup probe to check whether the runtime application has started.

5
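
For example, the following sketch allows the runtime application up to 10 minutes to start (60 failures multiplied by a 10-second period) before liveness and readiness checks begin. As noted earlier, leave the liveness and readiness initialDelaySeconds values unset when you use a startup probe. The values shown are illustrative:

spec:
  template:
    spec:
      containers:
        - startupProbe:
            failureThreshold: 60
            periodSeconds: 10
          name: runtime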

spec.template.spec.containers[].volumeMounts

Details of where to mount one or more named volumes into a container.

Follows the Volume Mount specification at https://pkg.go.dev/k8s.io/api@v0.20.3/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation.

The following volume mounts are blocked:
  • /home/aceuser/ excluding /home/aceuser/ace-server/log
  • /opt/ibm/ace-12/*   (Applicable to 12.0.1.0-r1 or later)

Specify custom settings for your volume mounts as an array.

Use spec.template.spec.containers[].volumeMounts with spec.template.spec.volumes. The spec.template.spec.containers[].volumeMounts.name value must match the name of a volume that is specified in spec.template.spec.volumes.name, as shown in the following example.
spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              cpu: 300m
              memory: 368Mi
          volumeMounts:
            - name: volume-name1
              mountPath: /mypath1
              readOnly: true
              subPath: mysubdir1
              mountPropagation: HostToContainer
          name: runtime
        - volumeMounts:
            - name: volume-name2
              mountPath: /mypath2
              readOnly: true
              subPath: mysubdir2
              mountPropagation: HostToContainer
          name: designerflows
      volumes:
        - name: volume-name1
          ...
        - name: volume-name2
          ...

The following example illustrates how to add an empty directory (as a volume) to the /cache folder in an integration runtime's pod. The mount into the runtime container is read-write by default.

spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              cpu: 300m
              memory: 368Mi
          volumeMounts:
            - name: cache-volume
              mountPath: /cache
          name: runtime
      volumes:
        - name: cache-volume
          emptyDir: {}
 

spec.template.spec.hostAliases.hostnames[]

One or more hostnames that you want to map to an IP address (as a host alias), to facilitate host name resolution. Use with spec.template.spec.hostAliases.ip.

Specify the hostname without an http:// or https:// prefix. The hostname can include only lowercase alphanumeric characters, hyphen (-) or period (.) symbols, and must start and end with an alphanumeric character; for example:

xyz.example.com

Each host alias is added as an entry to a pod's /etc/hosts file.

Example settings:

spec:
  template:
    spec:
      hostAliases:
        - hostnames:
            - somehostname.com
            - anotherhostname.com
          ip: 192.0.2.0

For more information about host aliases and the hosts file, see the Kubernetes documentation.

spec.template.spec.hostAliases.ip

An IP address that you want to map to one or more hostnames (as a host alias), to facilitate host name resolution. Use with spec.template.spec.hostAliases.hostnames[].

Each host alias is added as an entry to a pod's /etc/hosts file.

spec.template.spec.imagePullSecrets.name

The secret used for pulling images.

spec.template.spec.metadata.annotations

Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. For example:

spec:
  template:
    spec:
      metadata:
        annotations:
          key1: value1
          key2: value2

For example, you can add a spec.template.spec.metadata.annotations.restart value to trigger a rolling restart of your integration runtime pods, as described in Restarting integration server or integration runtime pods.

The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value.

spec.template.spec.metadata.labels

Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format labelKey: labelValue. For example:

spec:
  template:
    spec:
      metadata:
        labels:
          key1: value1
          key2: value2

The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value.

spec.template.spec.nodeSelector

Specify a set of key/value pairs that must be matched against the node labels to decide whether App Connect pods can be scheduled on that node. Only nodes matching all of these key/value pairs in their labels will be selected for scheduling App Connect pods. For more information, see nodeSelector and Assign Pods to Nodes in the Kubernetes documentation.

Example:
spec:
  template:
    spec:
      nodeSelector:
        Name_1: value1
        Name_2: value2
Restriction: Only applicable to always-on deployments; not supported for Knative.

spec.template.spec.podSecurityContext.fsGroup

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The spec.template.spec.podSecurityContext settings define privilege and access control settings for an integration runtime pod's containers and volumes when applicable. When set, this additional SecurityContext is copied into the runtime pod spec.

Specifies a special supplemental group that applies to all container processes in the integration runtime pod. Specify an integer value; for example, 2000.

spec.template.spec.podSecurityContext.fsGroupChangePolicy

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Defines the behavior for changing the ownership and permission of a volume before being exposed inside the integration runtime pod. This parameter applies only to volume types that support fsGroup-based ownership (and permissions), and has no effect on ephemeral volume types such as secret, configmaps, and emptydir.

Valid values are OnRootMismatch and Always. If not specified, Always is used.

For more information, see Configure volume permission and ownership change policy for Pods.

spec.template.spec.podSecurityContext.runAsGroup

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Specifies the primary group ID (GID) with which all processes in any containers of the integration runtime pod run. If this parameter is omitted, the primary GID of the containers is set to root (0). Specify the GID as an integer; for example, 3000.

spec.template.spec.podSecurityContext.runAsNonRoot

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Indicates whether the container must run as a non-root user.

Valid values are true and false.
  • If true, the image is validated at runtime to ensure that the container does not run as UID 0 (root).
  • If unset or false, no validation occurs.

spec.template.spec.podSecurityContext.runAsUser

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Specifies the user ID (UID) with which all processes run in any containers of the integration runtime pod. If unspecified, the value defaults to the user that is specified in the image metadata. Specify the UID as an integer; for example, 1000.
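
For illustration, the following sketch combines these user and group settings. The numeric IDs shown are the example values quoted in this reference, not required values:

spec:
  template:
    spec:
      podSecurityContext:
        fsGroup: 2000
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 3000
        runAsNonRoot: true
        runAsUser: 1000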

spec.template.spec.podSecurityContext.seccompProfile.localhostProfile

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The spec.template.spec.podSecurityContext.seccompProfile parameters specify the Seccomp options that are used by the containers in the integration runtime pod.

A preconfigured profile that is defined in a file on the node. The profile must be a descending path, relative to the kubelet's configured Seccomp profile location.

Required only if spec.template.spec.podSecurityContext.seccompProfile.type is set to Localhost.

For more information, see Set the Seccomp Profile for a Container.

spec.template.spec.podSecurityContext.seccompProfile.type

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The type of Seccomp profile to apply. Valid options are:

  • Localhost: Use a profile that is defined in a file on the node.
  • RuntimeDefault: Use the container runtime default profile.
  • Unconfined: No profile should be applied.

For more information, see Set the Seccomp Profile for a Container.
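
For example, the following sketch applies the container runtime's default Seccomp profile to all containers in the integration runtime pod:

spec:
  template:
    spec:
      podSecurityContext:
        seccompProfile:
          type: RuntimeDefault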

spec.template.spec.podSecurityContext.seLinuxOptions.level

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The spec.template.spec.podSecurityContext.seLinuxOptions parameters define the SELinux context to be applied to all containers. If unspecified, the container runtime allocates a random SELinux context for each container. For more information, see Assign SELinux labels to a Container.

An SELinux level label that applies to the container.

spec.template.spec.podSecurityContext.seLinuxOptions.role

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

An SELinux role label that applies to the container.

spec.template.spec.podSecurityContext.seLinuxOptions.type

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

An SELinux type label that applies to the container.

spec.template.spec.podSecurityContext.seLinuxOptions.user

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

An SELinux user label that applies to the container.
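
For illustration, the following sketch assigns an SELinux level label to the pod's containers. The label value is a hypothetical example:

spec:
  template:
    spec:
      podSecurityContext:
        seLinuxOptions:
          level: 's0:c123,c456'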

spec.template.spec.podSecurityContext.supplementalGroups

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Sets the supplementalGroups on the pod to enable a filesystem to be mounted with the correct permissions so that it can be processed by the runtime.

Specify an array of groups to apply to the first process that is run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the UID of the container process. If unspecified, no additional groups are added to any container.

Group memberships that are defined in the container image for the UID of the container process are still effective, even if they are not included in this array.

Example:
spec:
  template:
    spec:
      podSecurityContext:
        supplementalGroups:
          - 5555

spec.template.spec.podSecurityContext.sysctls

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

An array of namespaced sysctl kernel parameters that can be applied to all containers in the integration runtime pod. (sysctl is a Linux-specific command-line tool that is used to configure kernel parameters.)

Specify the properties as name/value pairs. Pods with unsupported sysctl values (by the container runtime) might fail to launch.

Example:
spec:
  template:
    spec:
      podSecurityContext:
        sysctls:
          - name: kernel.shm_rmid_forced
            value: '0'

For more information, see Using sysctls in a Kubernetes Cluster.

spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpec

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The spec.template.spec.podSecurityContext.windowsOptions parameters define Windows-specific settings to apply to all containers.

Defines where the GMSA admission webhook inlines the contents of the GMSA credential spec that is specified in spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpecName.

spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpecName

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The name of the GMSA credential spec to use.

spec.template.spec.podSecurityContext.windowsOptions.hostProcess

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

Indicates whether a container should be run as a HostProcess container.

Valid values are true and false.

All of a pod's containers must have the same effective HostProcess value; a mix of HostProcess containers and non-HostProcess containers is not allowed.

spec.template.spec.podSecurityContext.windowsOptions.runAsUserName

(Only applicable if spec.version resolves to 12.0.10.0-r2 or later)

The username in Windows to run the entry point of the container process. If unspecified, defaults to the user that is specified in the image metadata. Specify the username as a string.

spec.template.spec.priority

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running.

spec.template.spec.priority specifies an integer value, which various system components use to identify the priority of the (always-on) integration runtime pod. The higher the value, the higher the priority.

If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.template.spec.priorityClassName setting to populate this field with a resolved value.

spec.template.spec.priorityClassName

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

A priority class name that maps to the integer value of a pod priority in spec.template.spec.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods.

Valid values are as follows:

  • Specify either of these built-in Kubernetes classes, which mark a pod as critical and indicate the highest priorities: system-node-critical or system-cluster-critical. system-node-critical is the highest available priority.
  • To specify any other priority class, create a PriorityClass object with that class name. For more information, see Pod Priority and Preemption in the Kubernetes documentation.

If you do not specify a class name, the priority is set as follows:

  • If a PriorityClass object exists with its globalDefault field set to true, that object's class name is assigned.
  • If there is no PriorityClass object with a globalDefault setting of true, the priority of the pod is set to 0 (zero).
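
For example, the following sketch references a custom priority class. The class name high-priority is a hypothetical PriorityClass object that you must create separately:

spec:
  template:
    spec:
      priorityClassName: high-priority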

spec.template.spec.tolerations[].effect

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint or kubectl taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation.

If you need to specify one or more tolerations for an always-on integration runtime pod, you can use the collection of spec.template.spec.tolerations[].* parameters to define an array.

For spec.template.spec.tolerations[].effect, specify the taint effect that the toleration should match. (The taint effect on a node determines how that node reacts to a pod that is not configured with appropriate tolerations.) Leave the effect empty to match all taint effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute.

spec.template.spec.tolerations[].key

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint key that the toleration applies to. Leave the key empty and set spec.template.spec.tolerations[].operator to Exists to match all taint keys, values, and effects.

spec.template.spec.tolerations[].operator

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify an operator that represents a key's relationship to the value in spec.template.spec.tolerations[].value. Valid operators are Exists and Equal. Exists is equivalent to a wildcard for the toleration value, and indicates that the pod can tolerate all taints of a particular category.

Equal

spec.template.spec.tolerations[].tolerationSeconds

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Optionally specify a period of time in seconds that determines how long the pod stays bound to a node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect (which indicates that a pod without the appropriate toleration should immediately be evicted from the node).

By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately).

spec.template.spec.tolerations[].value

(Only applicable if spec.version resolves to 12.0.9.0-r2 or later)

Specify the taint value that the toleration matches to. If the operator is Exists, leave this value empty.
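
For illustration, the following sketch tolerates a hypothetical taint (key1=value1:NoExecute) that you might have applied to a node by using oc taint or kubectl taint, and allows the pod to remain on that node for up to an hour after the taint is applied:

spec:
  template:
    spec:
      tolerations:
        - effect: NoExecute
          key: key1
          operator: Equal
          value: value1
          tolerationSeconds: 3600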

spec.template.spec.topologySpreadConstraint[].*

(Only applicable if spec.version resolves to 13.0.4.1-r1 or later)

Specify custom topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains.

You can specify an array of topology spread constraints that allow you to define the following settings:

  • A maximum skew value (required) that describes the degree to which the integration runtime pods might be unevenly distributed.
  • A topology key (required) of node labels, which can be used to identify nodes that are in the same topology. Valid values include:
    • The well-known label keys such as kubernetes.io/hostname, which indicates that each node is a domain of that topology, and topology.kubernetes.io/zone, which indicates that each zone is a domain of that topology
    • Private (unqualified) label keys
  • A whenUnsatisfiable value (required) of DoNotSchedule (the default) or ScheduleAnyway, which indicates whether the scheduler should schedule a pod that doesn't satisfy the spread constraint.
  • A label selector that is used to find matching pods and calculate the number of pods in their corresponding topology domain.
  • One or more matchLabelKeys values that define pod label keys which are used to select the pods over which spreading is calculated.
  • A minimum number of eligible domains, each of which defines a particular instance of a topology whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy.
  • A nodeAffinityPolicy value that defines how the pod's nodeAffinity/nodeSelector are treated when calculating the pod topology spread skew. Valid values are Honor, which includes only nodes that match nodeAffinity/nodeSelector in the calculations, and Ignore, which ignores nodeAffinity/nodeSelector and includes all nodes in the calculations.
  • A nodeTaintsPolicy value that defines how node taints are treated when calculating the pod topology spread skew. Valid values are Honor, which includes nodes without taints as well as tainted nodes for which the incoming pod has a toleration, and Ignore, which ignores node taints and includes all nodes.

The following example shows the structure of the supported topology spread constraints parameters:

spec:
  template:
    spec:
      topologySpreadConstraint:
        - labelSelector:
            matchExpressions:
              - values:
                  - InOrNotInValue1
                  - InOrNotInValue2
                key: labelKey
                operator: In
            matchLabels:
              customkey: value
          matchLabelKeys:
            - podLabelKey1
            - podLabelKey2
          maxSkew: 2
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          minDomains: 5
          nodeAffinityPolicy: Ignore
          nodeTaintsPolicy: Honor

For more information about these settings and how they work together, see Pod Topology Spread Constraints in the Kubernetes documentation and Controlling pod placement by using pod topology spread constraints in the Red Hat OpenShift documentation.

For a tutorial on how to configure topology spread constraints for Dashboard and integration runtime pods, see Configuring topology spread constraints for App Connect Dashboard and integration runtime pods.

spec.template.spec.volumes

Details of one or more named volumes that can be provided to the pod, to use for persisting data. Each volume must be configured with the appropriate permissions to allow the integration runtime to read or write to it as required.

Follows the Volume specification at https://pkg.go.dev/k8s.io/api/core/v1#Volume. For more information, see Volumes in the Kubernetes documentation.

Specify custom settings for your volume types as an array. Use spec.template.spec.volumes with spec.template.spec.containers[].volumeMounts.

The following example illustrates how to add an empty directory (as a volume) to the /cache folder in an integration runtime's pod. The mount into the runtime container is read-write by default.

spec:
  template:
    spec:
      containers:
        - resources:
            requests:
              cpu: 300m
              memory: 368Mi
          volumeMounts:
            - name: cache-volume
              mountPath: /cache
          name: runtime
      volumes:
        - name: cache-volume
          emptyDir: {}
 

spec.version

The product version that the integration runtime is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel.

If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster.

To view the available values that you can choose from and the licensing requirements, see spec.version values and Licensing reference for IBM App Connect Operator.

13.0

Load balancing

When you deploy an integration runtime, routes are created by default in Red Hat OpenShift to externally expose a service that identifies the set of pods where the integration runs. Load balancing is applied when incoming traffic is forwarded to replica pods, and the routing algorithm used depends on the type of security you've configured for your flows:

  • http flows: These flows use a non-SSL route that has been modified to use the round-robin approach in which each replica is sent a message in turn.
  • https flows: These flows use an SSL-passthrough route, which by default uses a source-based load-balancing approach in which requests from the same client IP address are forwarded to the same replica.

To change the load balancing configuration that a route uses, you can add an appropriate annotation to the route resource. For example, the following CR setting will switch the route to use the round-robin algorithm:

spec:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin

For more information about the available annotation options, see Route-specific annotations in the Red Hat OpenShift documentation.

Flow analysis

For integration runtimes at version 12.0.12.5-r1 or earlier, the IBM App Connect Operator loads the BAR files that are referenced in the spec.barURL custom resource setting and then analyzes them to identify which types of flow they contain. The Operator analyzes only BAR files that are stored in an App Connect Dashboard instance or in remote repositories; it does not analyze BAR files that are baked into a container image. When the analysis is complete, the runtime pod is configured with the appropriate number of containers that are needed to process the flows, as follows:

  • Toolkit flow: one container (runtime)
  • Designer API flow: two containers (runtime and designerflows)
  • Designer Event flow: four containers (runtime, designerflows, designereventflows, and proxy)
  • Flows that include a Batch node (applicable only for integration runtimes at version 13.0.2.2-r1 or later): four containers (runtime, designerflows, designereventflows, and proxy)

IBM App Connect Enterprise is being updated to support the App Connect Designer connectors, which are referred to as Discovery connectors in IBM App Connect Enterprise. As a result, the way in which the BAR file analysis works has been updated for integration runtimes at version 13.0.1.0-r1 or later. Instead of just analyzing the type of flows, the Operator also analyzes all the individual nodes in all the flows across all BAR files (both IBM App Connect Enterprise and Designer).

  • If all the nodes found are supported by IBM App Connect Enterprise, the IBM App Connect Operator creates only the single runtime container.
  • If any nodes are unsupported, the Operator reverts to the previous behavior and creates the appropriate containers.
  • Batch technology isn't currently supported in IBM App Connect Enterprise, so if the analysis detects a Designer flow with a Batch node, the Operator reverts to the previous behavior (even if all other nodes are supported by IBM App Connect Enterprise).

The following Designer connectors are now supported in IBM App Connect Enterprise:

  • Amazon CloudWatch request node
  • Amazon DynamoDB request node
  • Amazon EC2 request node
  • Amazon EventBridge request node
  • Amazon EventBridge event node
  • Amazon Kinesis request node
  • Amazon Redshift request node
  • Amazon Redshift event node
  • Amazon RDS request node
  • Amazon S3 request node
  • Amazon SES request node
  • Amazon SNS request node
  • Amazon SQS request node
  • Anaplan request node
  • Asana event node
  • Asana request node
  • AWS Lambda request node
  • BambooHR request node
  • Box request node
  • Businessmap request node
  • Businessmap event node
  • Calendly request node
  • ClickSend request node
  • ClickSend event node
  • CMIS request node
  • CMIS event node
  • Confluence request node
  • Connector Development Kit input nodes
  • Couchbase request node
  • Coupa request node
  • Coupa event node
  • Crystal Ball request node
  • CSV parse node
  • DocuSign request node
  • Dropbox request node
  • Email request node
  • Email event node
  • Eventbrite request node
  • Eventbrite event node
  • Expensify request node
  • Factorial HR request node
  • For Each Node
  • For Each NodeInput
  • For Each NodeOutput
  • Front request node
  • Front event node
  • GitHub request node
  • GitHub event node
  • GitLab request node
  • GitLab event node
  • Gmail request node
  • Gmail event node
  • Google Analytics request node
  • Google Calendar request node
  • Google Calendar event node
  • Google Chat request node
  • Google Cloud BigQuery request node
  • Google Cloud Pub/Sub request node
  • Google Cloud Pub/Sub event node
  • Google Cloud Storage request node
  • Google Contacts request node
  • Google Drive request node
  • Google Gemini request node
  • Google Groups request node
  • Google Sheets request node
  • Google Sheets event node
  • Google Tasks request node
  • Google Translate request node
  • Greenhouse event node
  • Greenhouse request node
  • HubSpot CRM request node
  • HubSpot Marketing request node
  • Hunter request node
  • IBM Aspera request node
  • IBM Cloudant® request node
  • IBM Cloud Object Storage S3 request node
  • IBM Db2® request node
  • IBM Db2 event node
  • IBM Engineering Workflow Management request node
  • IBM Engineering Workflow Management event node
  • IBM FileNet® Content Manager request node
  • IBM Food Trust request node
  • IBM Maximo® request node
  • IBM Maximo event node
  • IBM OpenPages® with Watson request node
  • IBM OpenPages with Watson event node
  • IBM Planning Analytics request node
  • IBM Sterling™ Intelligent Promising request node
  • IBM Sterling Order Management System request node
  • IBM Targetprocess request node
  • IBM Targetprocess event node
  • IBM Watson Discovery request node
  • IBM watsonx.ai request node
  • IBM z/OS Connect request node
  • If node
  • Infobip request node
  • Insightly request node
  • Insightly event node
  • JDBC request node
  • Jenkins request node
  • Jira request node
  • Jira event node
  • JSON parse node
  • JSON set variable node
  • LDAP request node
  • LDAP event node
  • Magento request node
  • Magento event node
  • MailChimp request node
  • MailChimp event node
  • Marketo request node
  • Marketo event node
  • Microsoft Active Directory request node
  • Microsoft Active Directory event node
  • Microsoft Azure Blob Storage request node
  • Microsoft Azure Cosmos DB request node
  • Microsoft Azure Event Hubs request node
  • Microsoft Azure Event Hubs event node
  • Microsoft Azure Service Bus request node
  • Microsoft Azure Service Bus event node
  • Microsoft Dynamics 365 for Finance and Operations request node
  • Microsoft Dynamics 365 for Sales request node
  • Microsoft Dynamics 365 for Sales event node
  • Microsoft Entra ID request node
  • Microsoft Entra ID event node
  • Microsoft Excel Online request node
  • Microsoft Excel Online event node
  • Microsoft Exchange request node
  • Microsoft Exchange event node
  • Microsoft OneDrive for Business request node
  • Microsoft OneNote request node
  • Microsoft Power® BI request node
  • Microsoft SharePoint request node
  • Microsoft SharePoint event node
  • Microsoft SQL Server request node
  • Microsoft SQL Server event node
  • Microsoft Teams request node
  • Microsoft Teams event node
  • Microsoft To Do request node
  • Microsoft Viva Engage request node
  • Microsoft Viva Engage event node
  • Milvus request node
  • monday.com request node
  • monday.com event node
  • MySQL request node
  • MySQL event node
  • Oracle Database request node
  • Oracle Database event node
  • Oracle E-Business Suite request node
  • Oracle Human Capital Management request node
  • Oracle Human Capital Management event node
  • Pinecone Vector Database request node
  • PostgreSQL request node
  • PostgreSQL event node
  • Redis request node
  • REST request node
  • Salesforce request node
  • Salesforce Account Engagement request node
  • Salesforce Account Engagement event node
  • Salesforce Commerce Cloud Digital Data request node
  • Salesforce Commerce Cloud Digital Data event node
  • Salesforce Marketing Cloud request node
  • SAP Ariba request node
  • SAP (via OData) request node
  • SAP SuccessFactors request node
  • ServiceNow request node
  • ServiceNow event node
  • Shopify request node
  • Shopify event node
  • Slack request node
  • Slack event node
  • Snowflake request node
  • Splunk request node
  • Square request node
  • SurveyMonkey request node
  • SurveyMonkey event node
  • The Weather Company request node
  • Toggl Track request node
  • Toggl Track event node
  • Trello request node
  • Twilio request node
  • UKG request node
  • Vespa request node
  • WordPress request node
  • Workday request node
  • Wrike request node
  • Wrike event node
  • Wufoo request node
  • Wufoo event node
  • XML parse node
  • Yapily request node
  • Zendesk Service request node
  • Zendesk Service event node
  • Zoho Books request node
  • Zoho Books event node
  • Zoho CRM request node
  • Zoho CRM event node
  • Zoho Inventory request node
  • Zoho Recruit request node
  • Zoho Recruit event node