App Connect Integration Runtime reference
Use this reference to create, update, or delete integration runtimes by using the Red Hat® OpenShift® web console or CLI, or the CLI for a Kubernetes environment.
- Introduction
- Prerequisites
- Red Hat OpenShift SecurityContextConstraints requirements
- Resources required
- Mechanisms for providing BAR files to an integration runtime
- Pulling BAR files from a repository that is located behind a proxy
- Configuring OpenTelemetry tracing for integration runtimes
- Configuring Knative serverless support on Red Hat OpenShift
- Creating an instance
- Updating the custom resource settings for an instance
- Deleting an instance
- Custom resource values
- Load balancing
- Flow analysis
- Supported platforms
Introduction
The App Connect Integration Runtime API enables you to create integration runtimes, which run integrations that were created in App Connect Designer or IBM® App Connect Enterprise Toolkit. Integration runtimes enable you to run an integration solution as an always-on deployment that is continuously running and available in your cluster. Integration runtimes also provide serverless support with Knative, which enables you to deploy API flows from App Connect Designer into containers that start on demand when requests are received. You can also configure OpenTelemetry tracing of Toolkit flows to make your integration runtime deployments observable.
Integration runtimes can also be created, updated, or deleted from an App Connect Dashboard instance.
In the Dashboard, integration runtimes run as always-on deployments. Serverless Knative Service deployments, which run Designer API flows only, are not supported in the Dashboard and can be created only from the Red Hat OpenShift web console or CLI. Serverless integration runtimes are also not visible in the Dashboard.
Prerequisites
- If using Red Hat OpenShift, Red Hat OpenShift Container Platform 4.12, 4.14, 4.15, 4.16, 4.17, 4.18, 4.19, or 4.20 is required.
Note: For information about the custom resource (or operand) versions that are supported for each Red Hat OpenShift version, see spec.version values.
- If using a Kubernetes environment, Kubernetes 1.27.x, 1.28.x, 1.29.x, 1.30.x, 1.31.x, 1.32.x, or 1.33.x is required. For more information, see Minimum requirements.
- The IBM App Connect Operator must be installed in your cluster either through an independent deployment or an installation of IBM Cloud Pak for Integration. For further details, see the following information:
- Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
- If you want to use the command-line interface (CLI) to log in to your cluster and run commands to create and manage your IBM App Connect instances and other resources, ensure that the required CLI tools are installed on your computer. For more information, see Installing tools for managing the cluster, containers, and other resources (on Red Hat OpenShift), or Installing tools for managing the cluster, containers, and other resources (on Kubernetes).
- You must have a Kubernetes pull secret called ibm-entitlement-key in the namespace before creating the instance. For more information, see Obtaining and applying your IBM Entitled Registry entitlement key.
Red Hat OpenShift SecurityContextConstraints requirements
IBM App Connect runs under the default restricted SecurityContextConstraints.
Resources required
Minimum recommended requirements:
- Toolkit integration for compiled BAR files:
- CPU: 0.1 Cores
- Memory: 0.35 GB
- Toolkit integration for uncompiled BAR files:
- CPU: 0.3 Cores
- Memory: 0.35 GB
- Designer-only integration or hybrid integration:
- CPU: 1.7 Cores
- Memory: 1.77 GB
For information about how to configure these values, see Custom resource values.
If you are building and running your own containers, you can choose to allocate less than 0.1 Cores for Toolkit integrations if necessary. However, this decrease in CPU for the integration runtime container might impact the startup times and performance of your flow. If you begin to experience issues that are related to performance, or with starting and running your integration runtime, check whether the problem persists by first increasing the CPU to at least 0.1 Cores before contacting IBM support.
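For orientation, the CR samples later in this reference set these values as container resource requests under spec.template.spec.containers. The following minimal sketch applies the compiled Toolkit BAR minimums listed above; the cpu and memory values shown are the only assumptions here:

```yaml
spec:
  template:
    spec:
      containers:
        - name: runtime
          resources:
            requests:
              cpu: 100m       # 0.1 cores, the minimum for a Toolkit integration with compiled BAR files
              memory: 368Mi   # approximately 0.35 GB; the CR samples in this reference use 368Mi
```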
Mechanisms for providing BAR files to an integration runtime
Integration servers and integration runtimes require two types of resources: BAR files that contain development resources, and configuration files (or objects) for setting up the integration servers or integration runtimes. When you create an integration server or integration runtime, you are required to specify one or more BAR files that contain the development resources of the App Connect Designer or IBM App Connect Enterprise Toolkit integrations that you want to deploy.
A number of mechanisms are available for providing these BAR files to integration servers and integration runtimes. Choose the mechanism that meets your requirements.
- Content server (BAR files per integration server or integration runtime: 1)
  When you use the App Connect Dashboard to upload or import BAR files for deployment to integration servers or integration runtimes, the BAR files are stored in a content server that is associated with the Dashboard instance. The content server is created as a container in the Dashboard deployment and can either store uploaded (or imported) BAR files in a volume in the container's file system, or store them within a bucket in a simple storage service that provides object storage through a web interface.
  The location of a BAR file in the content server is generated as a BAR URL when a BAR file is uploaded or imported to the Dashboard. This location is specified by using the Bar URL field or spec.barURL parameter in the integration server or integration runtime custom resource (CR). While creating an integration server or integration runtime, you can choose only one BAR file to deploy from the content server and must reference its BAR URL in the content server. The integration server or runtime then uses this BAR URL to download the BAR file on startup, and processes the applications appropriately.
  If you are creating an integration server or integration runtime from the Dashboard, and use the Integrations view to specify a single BAR file to deploy, the location of this file in the content server is automatically set in the Bar URL field or spec.barURL parameter in the Properties (or Server) view. For more information, see Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources.
  If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, and want to deploy a BAR file from the content server, you must obtain the generated BAR file location and specify it as the spec.barURL value in the CR. For an example of a content server BAR URL, see the spec.barURL samples in Creating an instance.
- External repository (Applicable only if spec.version resolves to 12.0.1.0-r1 or later) (BAR files per integration server or integration runtime: Multiple)
  While creating an integration server or integration runtime, you can choose to deploy multiple BAR files, which are stored in an external HTTPS repository system, to the integration server or integration runtime. You might find this option useful if you have set up continuous integration and continuous delivery (CI/CD) pipelines to automate and manage your DevOps processes, and are building and storing BAR files in a repository system such as JFrog Artifactory. This option enables you to directly reference one or more BAR files in your integration server or integration runtime CR without the need to manually upload or import the BAR files to the content server in the App Connect Dashboard or build a custom image. You will need to provide basic (or alternative) authentication credentials for connecting to the external endpoint where the BAR files are stored, and can do so by creating a configuration object of type BarAuth.
  If you are creating an integration server or integration runtime from the Dashboard, you can use the Configuration view to create (and select) a configuration object of type BarAuth that defines the required credentials. You can then use the Properties (or Server) view to specify the endpoint locations of one or more BAR files in the Bar URL field or as the spec.barURL value. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, an additional parameter must also be set.
  If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, you must create a configuration object of type BarAuth that defines the required credentials, as described in Configuration reference and BarAuth type. When you create the integration server or integration runtime CR, you must specify the name of the configuration object in spec.configurations and then specify the endpoint locations of one or more BAR files in spec.barURL. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, an additional parameter must also be set.
  You can specify multiple BAR files as a list of URLs in spec.barURL; for examples, see Creating an instance.
  Tip: If you are using GitHub as an external repository, you must specify the raw URL in the Bar URL field or in spec.barURL. For example:
  https://raw.github.ibm.com/somedir/main/bars/getHostAPI.bar
  https://github.com/johndoe/somedir/raw/main/getHostAPI.bar
  https://raw.githubusercontent.com/myusername/myrepo/main/My%20API.bar
  Some considerations apply if you deploy multiple BAR files.
- Custom image (BAR files per integration server or integration runtime: Multiple)
  You can build a custom server runtime image that contains all the configuration for the integration server or integration runtime, including all the BAR files or applications that are required, and then use this image to deploy an integration server or integration runtime. When you create the integration server or integration runtime CR, you must reference this image by using the spec.template.spec.containers.image parameter. This image must be built from the version that is specified as the spec.version value in the CR. Channels are not supported when custom images are used.
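For example, pulling BAR files from an external repository combines a BarAuth configuration object with a list of URLs in spec.barURL. In the following sketch, the configuration object name github-barauth is an assumption and the repository URLs are illustrative:

```yaml
spec:
  configurations:
    - github-barauth      # BarAuth-type configuration object that holds the repository credentials
  barURL:
    - 'https://raw.githubusercontent.com/myusername/myrepo/main/My%20API.bar'
    - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
```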
Pulling BAR files from a repository that is located behind a proxy
If you configure an integration runtime at version 13.0.2.0-r1 or later to pull BAR files from a file repository that sits behind a proxy, additional configuration steps are required.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Complete the following steps:
- Update the Subscription resource for the IBM App Connect Operator with the following environment variables to enable the Operator to pull the BAR files and analyze them to identify what type of flows are configured.
  - From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
  - Run the oc edit or kubectl edit command to partially update the subscription, where namespaceName is the namespace where the Operator is installed.
    oc edit subscription ibm-appconnect -n namespaceName
    kubectl edit subscription ibm-appconnect -n namespaceName
    The Subscription CR automatically opens in the default text editor for your operating system.
  - Update the spec section of the file as follows.
    Note:
    - To obtain the values (valueX and valueY) for HTTP_PROXY and HTTPS_PROXY, contact your application team that maintains the proxy.
    - If you are running services that need to be connected to directly, additional endpoints might be required for NO_PROXY.
    ```yaml
    spec:
      config:
        env:
          - name: HTTP_PROXY
            value: valueX
          - name: HTTPS_PROXY
            value: valueY
          - name: NO_PROXY
            value: .cluster.local,.svc,10.0.0.0/8,127.0.0.1,172.0.0.0/8,192.168.0.0/16,localhost
    ```
  - Save the YAML definition and close the text editor to apply the changes.
- For every integration runtime that references a BAR file which is located behind a proxy, add the proxy to the IntegrationRuntime CR definition by updating the spec section to declare a set of environment variables.
  Note:
  - To obtain the values (valueX and valueY) for HTTP_PROXY and HTTPS_PROXY, contact your application team that maintains the proxy.
  - If you are running services that need to be connected to directly, additional endpoints might be required for NO_PROXY.
  ```yaml
  spec:
    template:
      spec:
        containers:
          - name: runtime
            env:
              - name: HTTP_PROXY
                value: valueX
              - name: HTTPS_PROXY
                value: valueY
              - name: NO_PROXY
                value: .cluster.local,.svc,10.0.0.0/8,127.0.0.1,172.0.0.0/8,192.168.0.0/16,localhost
  ```
  You can declare these environment variables while creating the integration runtime or you can update the integration runtime by editing its CR. For example, you can run the oc edit or kubectl edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance and namespaceName is the namespace where the instance is deployed.
oc edit integrationruntime instanceName -n namespaceName
kubectl edit integrationruntime instanceName -n namespaceName
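To confirm that the environment variables were applied to the Subscription, you can display it and inspect the spec.config.env section; this is generic oc/kubectl inspection rather than an App Connect-specific command:

```
oc get subscription ibm-appconnect -n namespaceName -o yaml
kubectl get subscription ibm-appconnect -n namespaceName -o yaml
```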
Configuring OpenTelemetry tracing for integration runtimes
If you are deploying an integration runtime for a Toolkit integration, you can configure OpenTelemetry tracing for all message flows in the integration runtime, and export span data to an OpenTelemetry collector.
When OpenTelemetry is enabled for an integration runtime, spans are created for all message flow nodes that support OpenTelemetry, including callable flow nodes.
For supported transport input nodes, spans are created and then run until the message flow transaction for the flow is completed or rolled back. For supported transport request nodes, spans are created and then run for the duration of the node interaction with the transport (for example, sending and receiving an HTTP message). For information about the transport message flow nodes that support OpenTelemetry trace, see Configuring OpenTelemetry trace for an integration server in the IBM App Connect Enterprise documentation.
An Observability agent or backend system such as Instana, Jaeger, or Zipkin must be available and configured to receive OpenTelemetry data. When you deploy an integration runtime, configuration options are provided for you to enable OpenTelemetry tracing.
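In the integration runtime CR, tracing is enabled under spec.telemetry.tracing.openTelemetry. A minimal sketch, based on the standard settings shown later in Creating an instance, is:

```yaml
spec:
  telemetry:
    tracing:
      openTelemetry:
        enabled: true
        protocol: grpc
        endpoint: 'status.hostIP:4317'   # omit the endpoint if a local Instana agent should be used
```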
Instana-specific configuration
```yaml
com.instana.plugin.opentelemetry:
  enabled: true
```
For more information, see Activating OpenTelemetry Support in the Instana documentation.
Configuring Knative serverless support on Red Hat OpenShift
IBM App Connect provides support for serverless deployments of your integration runtimes by using Knative. Knative is an open source solution that provides tools and utilities to enable containers to run as serverless workloads on Kubernetes clusters. This model enables resources to be provisioned on demand, and to be scaled based on requests.
- Serverless support is available only on Red Hat OpenShift.
- Serverless support is available only with the AppConnectEnterpriseNonProductionFREE and CloudPakForIntegrationNonProductionFREE licenses.
- Serverless support is restricted to BAR files that contain API flows only, which are exported from App Connect Designer or App Connect on IBM Cloud. As an additional restriction, BAR files that were created before May 2020 are not supported in their initial state and need to be updated to make them suitable for serverless deployment. For more information, see Configuration requirement for BAR files that were created before May 2020.
As a one-time requirement for creating serverless deployments of integration runtimes, you must install the Knative components, which include Knative Serving, in your cluster, and enable required Knative flags. You must also ensure that a pull secret is available for accessing the required App Connect images. For more information, see Installing and configuring Knative Serving in your Red Hat OpenShift cluster.
For each serverless deployment that you want to create for an integration runtime, you must also enable (and optionally configure) serverless Knative service settings in the custom resource, as described in Creating an instance. The installed Knative Serving component will deploy and run containers as Knative services, and manage the state, revision tracking, routing, and scaling of these services.
To see an example that describes how to configure, run, and test a serverless deployment, see the Introduction to Serverless with the App Connect Operator blog.
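In the integration runtime CR, the serverless deployment is requested with a single setting; a minimal sketch (also shown in the samples later in this reference) is:

```yaml
spec:
  serverless:
    knativeService:
      enabled: true
```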
Configuration requirement for BAR files that were created before May 2020
Older Designer API BAR files that were created before May 2020 need to be updated to make them valid for serverless deployment. Before attempting a serverless deployment of such a BAR file, you must update the BAR file configuration by running the ibmint apply overrides command to override a set of values in the broker archive deployment descriptor:
- Create a text file with a preferred name (filename.txt), to define the overrides to apply.
- To view the properties that you need to override in the BAR file (for example, flow_name.bar), complete the following steps:
  - Extract the contents of the BAR file to a preferred directory. Then, locate the compressed file named flow_name.appzip and extract its contents, which include a META-INF/broker.xml file.
  - Open the META-INF/broker.xml file to view the configurable properties, which are specified in the following format:
    ```xml
    <ConfigurableProperty uri="xxxx"/>
    <ConfigurableProperty override="someValue" uri="xxxx"/>
    ```
  - Locate each entry with a uri value that is specified as follows, where subflowName identifies an operation from the original API flow:
    ```
    {{subflowName}}#HTTP Request.URLSpecifier
    {{subflowName}}#HTTP Request (no auth).URLSpecifier
    ```
    Typically, each operation or subflowName contains a pair of HTTP Request.URLSpecifier and HTTP Request (no auth).URLSpecifier entries. For example, if the original API flow defines two operations to create a customer and retrieve a customer by ID, the relevant entries in the META-INF/broker.xml file might look like this:
    ```xml
    ...
    <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_create#HTTP Request.URLSpecifier"/>
    <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_create#HTTP Request (no auth).URLSpecifier"/>
    ...
    <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_findById#HTTP Request.URLSpecifier"/>
    <ConfigurableProperty override="http+unix://%2Ftmp%2Flmap.socket" uri="Customer_findById#HTTP Request (no auth).URLSpecifier"/>
    ...
    ```
- In the filename.txt file, add lines in the following format to override these properties:
  ```
  subflowName#HTTP Request.URLSpecifier=http://localhost:3002
  subflowName#HTTP Request (no auth).URLSpecifier=http://localhost:3002
  ```
  For example:
  ```
  Customer_create#HTTP Request.URLSpecifier=http://localhost:3002
  Customer_create#HTTP Request (no auth).URLSpecifier=http://localhost:3002
  Customer_findById#HTTP Request.URLSpecifier=http://localhost:3002
  Customer_findById#HTTP Request (no auth).URLSpecifier=http://localhost:3002
  ```
- Save and close the filename.txt override file.
- Run the following command to apply the overrides to the BAR file:
  ```
  ibmint apply overrides overridesFilePath --input-bar-file barfilePath --output-bar-file newbarfilePath
  ```
  Where:
  - overridesFilePath is the name and path of the filename.txt override file.
  - barfilePath is the name and path of the existing flow_name.bar file that you want to update.
  - newbarfilePath is the name and path of a new BAR file that will be generated with the updated configuration.
  For example:
  ```
  ibmint apply overrides /some/path/filename.txt --input-bar-file /some/path/flow_name.bar --output-bar-file /some/path/newflow_name.bar
  ```
- Use the updated BAR file to create your integration runtime.
Creating an instance
You can create an integration runtime by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.
The supplied App Connect Enterprise base image includes an IBM MQ client for connecting to remote queue managers that are within the same Red Hat OpenShift cluster as your deployed integration runtimes, or in an external system such as an appliance.
Before you begin
- Ensure that the Prerequisites are met.
- Prepare the BAR files that you want to deploy to the integration runtime. For more information, see Mechanisms for providing BAR files to an integration runtime.
- Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.

Namespace restriction for an instance, server, configuration, or trace:
The namespace in which you create an instance or object must be no more than 40 characters in length.
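As a sketch of the two spec.version approaches, a channel subscription and a pinned version differ only in the value that you specify; the version strings below are illustrative, and the values that are actually available are listed in spec.version values:

```yaml
spec:
  version: '13.0'          # channel: the instance is upgraded as new versions are delivered to the channel
  # version: 13.0.4.1-r1   # specific version: the instance stays at this version until you change it
```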
Creating an instance from the Red Hat OpenShift web console
When you create an integration runtime, you can define which configurations you want to apply to the runtime.
- If required, you can create configuration objects before you create an integration runtime and then add references to those objects while creating the runtime. For information about how to use the Red Hat OpenShift web console to create a configuration object before you create an integration runtime, see Creating a configuration object from the Red Hat OpenShift web console.
- If you have existing configuration objects that you want the integration runtime to reference, you can add those references while creating the runtime, as described in the steps that follow.
To create an integration runtime by using the Red Hat OpenShift web console, complete the following steps:
- From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Runtime tab. Any previously created integration runtimes are displayed in a table.
- Click Create IntegrationRuntime.
  From the Details tab on the Operator details page, you can also locate the Integration Runtime tile and click Create instance to specify installation settings for the integration runtime.
- As a starting point, click YAML view to switch to the YAML editor. Then, either copy and paste one of the YAML samples from Creating an instance from the Red Hat OpenShift or Kubernetes CLI into the YAML editor, or try one of the YAML samples from the Samples tab.
Update the content of the YAML editor with the parameters and values that you require for this integration runtime. The YAML editor offers a finer level of control over your installation settings, but you can switch between this view and the Form view.
- To view the full set of parameters and values available, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify one or more BAR files in an external endpoint, as shown in the following examples. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration runtime.
  ```yaml
  spec:
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?873fe600-9ac6-4096-c00f-55e361fec2e5
  ```
  ```yaml
  spec:
    barURL:
      - 'https://artifactory.com/myrepo/getHostAPI.bar'
      - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
  ```
- You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
  ```yaml
  spec:
    configurations:
      - odbc-ini-data
  ```
  or
  ```yaml
  spec:
    configurations:
      - odbc-ini-data
      - accountsdata
  ```
  or
  ```yaml
  spec:
    configurations: ["odbc-ini-data", "accountsdata"]
  ```
  For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
  Note: If this integration runtime contains a callable flow, you must configure the integration runtime to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration runtime as follows:
  - From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
    oc get switchserver switchName
  - Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
    NAME      RESOLVEDVERSION   CUSTOMIMAGES   STATUS   AGENTCONFIGURATIONNAME   AGE
    default   13.0.6.0-r1       false          Ready    default-agentx           1h
  - Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
    ```yaml
    configurations:
      - default-agentx
    ```
  A configuration object of type REST Admin SSL files (or adminssl) is created and applied by default to the integration runtime to provide self-signed TLS certificates for secure communication between the App Connect Dashboard and the runtime. This configuration object is created from a predefined ZIP archive, which contains a set of PEM files named ca.crt.pem, tls.crt.pem, and tls.key.pem. A secret is also auto-generated to store the Base64-encoded content of this ZIP file. When you deploy the integration runtime, the configuration name is stored in spec.configurations as integrationRuntimeName-ir-adminssl, where integrationRuntimeName is the metadata.name value for the integration runtime. For more information about this configuration type, see REST Admin SSL files type.
- If you are deploying an integration runtime for a Toolkit integration and want to configure OpenTelemetry tracing for all message flows, you can use the spec.telemetry.tracing.openTelemetry.* parameters to enable OpenTelemetry tracing and configure your preferred settings. An example of the standard settings is as follows:
  ```yaml
  spec:
    telemetry:
      tracing:
        openTelemetry:
          tls:
            secretName: mycert-secret
            caCertificate: ca.crt
          enabled: true
          protocol: grpc
          endpoint: 'status.hostIP:4317'
  ```
  Note:
  - If you are using Instana as your Observability agent, setting spec.telemetry.tracing.openTelemetry.enabled to true is typically the only configuration needed for OpenTelemetry tracing, and you do not need to configure any other settings.
  - In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can use spec.telemetry.tracing.openTelemetry.endpoint to specify a different IP address and port for the agent (on a different physical worker node).
  - You can configure additional OpenTelemetry properties by using a server.conf.yaml file. To configure these additional properties, use the server.conf.yaml file to create a configuration object of type server.conf.yaml that can be applied to the integration runtime. For more information, see Configuring additional OpenTelemetry properties by using the server.conf.yaml file.
- If you want to add one or more topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster, use the Template/spec/Advanced configuration/Topology Spread Constraints fields or the spec.template.spec.topologySpreadConstraint[].* parameters. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains. These settings are applicable only for a channel or version that resolves to 13.0.4.1-r1 or later. For information about the values that you can specify, see the spec.template.spec.topologySpreadConstraint[].* parameters in Custom resource values.
- If you want to create a Knative serverless deployment for an integration runtime that contains a Designer API flow on Red Hat OpenShift, you can enable Knative Serving by setting the spec.serverless.knativeService.enabled parameter to true. For example:
  ```yaml
  spec:
    serverless:
      knativeService:
        enabled: true
  ```
  Knative Serving must be installed and configured in your cluster. For more information, see Configuring Knative serverless support on Red Hat OpenShift and Installing and configuring Knative Serving in your Red Hat OpenShift cluster.
- Desired run state (default: running): Use this field to indicate that you want to stop or start the integration runtime:
  - stopped: Stops the integration runtime if running, scales down its replica pods to zero (0), and changes the state to Stopped. (The original number of replicas is retained in the spec.replicas setting in the integration runtime CR.) You can choose to create your integration runtime in a Stopped state if preferred, or you can edit the settings of your running integration runtime to stop it.
  - running: Starts the integration runtime when in a Stopped state, starts the original number of replica pods (as defined in spec.replicas), and changes the state to Ready. This is the default option, so if you leave the field blank, a value of running is assumed.
  The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods.
  This field is applicable only for a channel or version that resolves to 13.0.2.2-r1 or later.
- Ingress: Use these fields to automatically create ingress resources for
your deployed integration runtime. In Kubernetes environments, ingress
resources are required to expose your integration runtimes to external traffic.
These fields are applicable only for an IBM Cloud Kubernetes Service environment, and for a channel or version that resolves to 13.0.2.1-r1 or later.
- Ingress/Enabled: Indicate whether to automatically create ingress resources for your deployed integration runtime. The creation of ingress resources for an integration runtime is disabled by default.
- Ingress/Domain: If you do not want the ingress routes to be constructed
with the IBM-provided ingress subdomain of your IBM Cloud Kubernetes Service cluster, specify a preferred custom subdomain that is created in
the cluster.
This field is displayed only when Ingress/Enabled is set to true.
- NodePort Service/Advanced configuration/List of custom ports to expose:
Use these fields to specify an array of custom ports for a dedicated NodePort service that is
created for accessing the set of pods. Click Add List of custom ports to
expose to display a group of fields for the first port definition on the service. If you
need to expose more than one port for the NodePort service, use Add List of custom ports
to expose to configure additional port definitions.
  When you complete the NodePort Service fields, the IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for access. The NodePort service is created in the same namespace as the integration runtime and is named in the format integrationRuntimeName-np-ir. For more information about completing these fields, see the spec.nodePortService.ports[].* parameters in Custom resource values.
- Java/Java version: Use this field to specify which Java Runtime
Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime.
This field is applicable only for a channel or version that resolves to 13.0.5.0-r1 or later.
Limitations apply to some message flow nodes and capabilities, and to some types of security configuration when certain JRE versions are used with IBM App Connect Enterprise. Use this field to specify a Java version that is compatible with the content of the Toolkit integration that is deployed as an integration runtime. Applications that are deployed to an integration runtime might fail to start if the Java version is incompatible.
  Valid values are 8 or 17 (the default). If you leave this field blank, the default value is used. For more information, see the following topics in the IBM App Connect Enterprise documentation:
- To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form.
- Click Create to start the deployment. An entry for the integration runtime is shown in the IntegrationRuntimes table, initially with a Pending status.
- Click the integration runtime name to view information about its definition and current status.
  On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator. When the deployment is complete, the status is shown as Ready in the IntegrationRuntimes table.
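Taken together, the optional settings described in the preceding steps map to a handful of spec entries in the CR. The following consolidated sketch is illustrative only: the values are placeholders, the nodePortService port field names are assumed to mirror the spec.service.ports[] example in Custom resource values, and the topology spread fields shown are the standard Kubernetes fields rather than a schema confirmed by this reference:

```yaml
spec:
  desiredRunState: running            # or stopped (13.0.2.2-r1 or later)
  java:
    version: '17'                     # valid values are 8 or 17 (13.0.5.0-r1 or later)
  ingress:
    enabled: true                     # IBM Cloud Kubernetes Service only (13.0.2.1-r1 or later)
    domain: example.containers.appdomain.cloud   # illustrative custom subdomain
  nodePortService:
    ports:                            # field names assumed to mirror spec.service.ports[]
      - name: config-abc
        protocol: TCP
        port: 9910
        targetPort: 9920
        nodePort: 32000
  template:
    spec:
      topologySpreadConstraint:       # 13.0.4.1-r1 or later
        - maxSkew: 1                  # standard Kubernetes topology spread fields (assumed)
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
```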
Creating an instance from the Red Hat OpenShift or Kubernetes CLI
When you create an integration runtime, you can define which configurations you want to apply to the integration runtime.
- If required, you can create configuration objects before you create an integration runtime and then add references to those objects while creating the runtime. For information about how to use the CLI to create a configuration object before you create an integration runtime, see Creating a configuration object from the Red Hat OpenShift CLI.
- If you have existing configuration objects that you want the integration runtime to reference, you can add those references while creating the runtime, as described in the steps that follow.
To create an integration runtime by using the Red Hat OpenShift or Kubernetes CLI, complete the following steps.
- From your local computer, create a YAML file that contains the configuration for the integration runtime that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the integration runtime; this should be the same namespace where the other App Connect instances or resources are created.
  The following examples (Example 1 and Example 2) show an integration runtime CR for a standard Designer or Toolkit integration, with no requirement for OpenTelemetry tracing or serverless support.
  Example 1:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: customer-api
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: CloudPakForIntegrationNonProduction
    template:
      spec:
        containers:
          - name: runtime
            resources:
              requests:
                cpu: 300m
                memory: 368Mi
    logFormat: basic
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/Customer_API?fbcea793-8eab-435f-8ba7-b97ee92cc0e4
    configurations:
      - customer-api-salesforce-acct
    version: '13.0'
    replicas: 1
  ```
  Example 2:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: customer-api
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: AppConnectEnterpriseProduction
    template:
      spec:
        containers:
          - name: runtime
            resources:
              requests:
                cpu: 300m
                memory: 368Mi
    logFormat: basic
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/Customer_API?fbcea793-8eab-435f-8ba7-b97ee92cc0e4
    configurations:
      - customer-api-salesforce-acct
    version: '13.0'
    replicas: 1
  ```
  The following examples (Example 3 and Example 4) show an integration runtime CR with settings to enable OpenTelemetry tracing on a Toolkit integration, in an environment where Instana is configured as the Observability agent.
  Example 3:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: ir-01-quickstart-otel
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: CloudPakForIntegrationNonProduction
    telemetry:
      tracing:
        openTelemetry:
          enabled: true
    template:
      spec:
        containers:
          - resources:
              requests:
                cpu: 300m
                memory: 368Mi
            name: runtime
    logFormat: basic
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?52370379-f412-463e-89bb-03bb56cd03b5
    configurations:
      - my-odbc
      - my-setdbp
    version: '13.0'
    replicas: 1
  ```
  Example 4:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: ir-01-quickstart-otel
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: AppConnectEnterpriseProduction
    telemetry:
      tracing:
        openTelemetry:
          enabled: true
    template:
      spec:
        containers:
          - resources:
              requests:
                cpu: 300m
                memory: 368Mi
            name: runtime
    logFormat: basic
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?52370379-f412-463e-89bb-03bb56cd03b5
    configurations:
      - my-odbc
      - my-setdbp
    version: '13.0'
    replicas: 1
  ```
  The following examples (Example 5 and Example 6) show an integration runtime CR with settings to enable the serverless deployment of a BAR file that contains an API flow that was exported from a Designer instance. The BAR file is stored in a GitHub repository, so a configuration object of type BarAuth, which contains credentials for connecting to GitHub, is required. Another configuration object of type Accounts, which contains account details for connecting to the applications that are referenced in the Designer flow, is required.
  Example 5:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: test-apiflow-serverless
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: CloudPakForIntegrationNonProductionFREE
    logFormat: json
    barURL:
      - >-
        https://raw.github.ibm.com/somedir/main/bars/Customer_API.bar
    configurations:
      - customer-api-salesforce-acct
      - barauth-config
    serverless:
      knativeService:
        enabled: true
    version: '13.0'
    replicas: 1
  ```
  Example 6:
  ```yaml
  apiVersion: appconnect.ibm.com/v1beta1
  kind: IntegrationRuntime
  metadata:
    name: test-apiflow-serverless
    namespace: ace-test
  spec:
    license:
      accept: true
      license: L-CKFT-S6CHZW
      use: AppConnectEnterpriseNonProductionFREE
    logFormat: json
    barURL:
      - >-
        https://raw.github.ibm.com/somedir/main/bars/Customer_API.bar
    configurations:
      - customer-api-salesforce-acct
      - barauth-config
    serverless:
      knativeService:
        enabled: true
    version: '13.0'
    replicas: 1
  ```
  To see an example of an integration runtime CR with settings to enable ingress for an integration runtime in an IBM Cloud Kubernetes Service cluster, see Automatically creating ingress definitions for IBM App Connect instances in an IBM Cloud Kubernetes Service cluster.
- To view the full set of parameters and values that you can specify, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify one or more BAR files in an external endpoint, as shown in the following examples. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration runtime.
  ```yaml
  spec:
    barURL:
      - >-
        https://db-01-quickstart-dash.ace-test:3443/v1/directories/CustomerDatabaseV1?873fe600-9ac6-4096-c00f-55e361fec2e5
  ```
  ```yaml
  spec:
    barURL:
      - 'https://artifactory.com/myrepo/getHostAPI.bar'
      - 'https://artifactory.com/myrepo/CustomerDbV1.bar'
  ```
- You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
  ```yaml
  spec:
    configurations:
      - odbc-ini-data
  ```
  or
  ```yaml
  spec:
    configurations:
      - odbc-ini-data
      - accountsdata
  ```
  or
  ```yaml
  spec:
    configurations: ["odbc-ini-data", "accountsdata"]
  ```
  For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
  Note: If this integration runtime contains a callable flow, you must configure the integration runtime to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration runtime as follows:
  - From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
    oc get switchserver switchName
    kubectl get switchserver switchName
  - Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
    NAME      RESOLVEDVERSION   CUSTOMIMAGES   STATUS   AGENTCONFIGURATIONNAME   AGE
    default   13.0.6.0-r1       false          Ready    default-agentx           1h
  - Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
    ```yaml
    configurations:
      - default-agentx
    ```
  A configuration object of type REST Admin SSL files (or adminssl) is created and applied by default to the integration runtime to provide self-signed TLS certificates for secure communication between the App Connect Dashboard and the runtime. This configuration object is created from a predefined ZIP archive, which contains a set of PEM files named ca.crt.pem, tls.crt.pem, and tls.key.pem. A secret is also auto-generated to store the Base64-encoded content of this ZIP file. When you deploy the integration runtime, the configuration name is stored in spec.configurations as integrationRuntimeName-ir-adminssl, where integrationRuntimeName is the metadata.name value for the integration runtime. For more information about this configuration type, see REST Admin SSL files type.
- If you are deploying an integration runtime for a Toolkit integration and want to configure OpenTelemetry tracing for all message flows, you can use the spec.telemetry.tracing.openTelemetry.* parameters to enable OpenTelemetry tracing and configure your preferred settings. An example of the standard settings is as follows:
  ```yaml
  spec:
    telemetry:
      tracing:
        openTelemetry:
          tls:
            secretName: mycert-secret
            caCertificate: ca.crt
          enabled: true
          protocol: grpc
          endpoint: 'status.hostIP:4317'
  ```
  Note:
  - If you are using Instana as your Observability agent, setting spec.telemetry.tracing.openTelemetry.enabled to true is typically the only configuration needed for OpenTelemetry tracing, and you do not need to configure any other settings.
  - In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can use spec.telemetry.tracing.openTelemetry.endpoint to specify a different IP address and port for the agent (on a different physical worker node).
  - You can configure additional OpenTelemetry properties by using a server.conf.yaml file. To configure these additional properties, use the server.conf.yaml file to create a configuration object of type server.conf.yaml that can be applied to the integration runtime. For more information, see Configuring additional OpenTelemetry properties by using the server.conf.yaml file.
- If you want to create a Knative serverless deployment for an integration runtime that contains a Designer API flow on Red Hat OpenShift, you can enable Knative Serving by setting the spec.serverless.knativeService.enabled parameter to true. For example:
  ```yaml
  spec:
    serverless:
      knativeService:
        enabled: true
  ```
  Knative Serving must be installed and configured in your cluster. For more information, see Configuring Knative serverless support on Red Hat OpenShift and Installing and configuring Knative Serving in your Red Hat OpenShift cluster.
- Use the spec.desiredRunState parameter to specify whether to stop or start the integration runtime. The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods.
- Use the spec.template.spec.topologySpreadConstraint[].* parameters to add one or more topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains.
- Use the spec.ingress.enabled parameter to indicate whether to automatically create ingress resources for your deployed integration runtime. Use the spec.ingress.domain parameter to specify a custom subdomain to include in the ingress routes that are generated to expose your IBM App Connect instances to external traffic. These parameters are applicable only for an IBM Cloud Kubernetes Service environment.
- Use the spec.nodePortService.ports[].* parameters to specify an array of
custom ports for a dedicated NodePort service that is created for accessing the set of pods. If you
need to expose more than one port for the NodePort service, you can configure multiple port
definitions as an array.
  When you complete the spec.nodePortService.ports[].* parameters, the IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for access. The NodePort service is created in the same namespace as the integration runtime and is named in the format integrationRuntimeName-np-ir. For more information about completing these parameters, see Custom resource values.
- Use the spec.java.version parameter to specify which Java Runtime Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime.
You can also choose to define the configurations that you want to apply to the integration runtime within the same YAML file that contains the integration runtime configuration.
If preferred, you can define multiple configurations and integration runtimes within the same YAML file. Each definition can be separated with three hyphens (---) as shown in the following example. The configurations and integration runtimes will be created independently, but any configurations that you specify for an integration runtime will be applied during deployment. (In the following example, settings are defined for a new configuration and an integration runtime. The integration runtime's spec.configurations setting references the new configuration and an existing configuration that should be applied during deployment.)
```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: setdbp-conf
  namespace: mynamespace
spec:
  data: ABCDefghIJLOMNehorewirpewpTEV843BCDefghIJLOMNorewirIJLOMNeh842lkalkkrmwo4tkjlfgBCDefghIJLOMNehhIJLOM
  type: setdbparms
---
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationRuntime
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-CKFT-S6CHZW
    use: CloudPakForIntegrationNonProduction
  template:
    spec:
      containers:
        - name: runtime
          resources:
            requests:
              cpu: 300m
              memory: 368Mi
  logFormat: basic
  barURL:
    - >-
      https://db-01-quickstart-dash.mynamespace:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  configurations: ["setdbp-conf", "my-accounts"]
  version: '13.0'
  replicas: 2
```
- Save this file with a .yaml extension; for example, customerapi_cr.yaml.
- From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
- Run the following command to create the integration runtime and apply any defined
configurations. (Use the name of the .yaml file that you
created.)
oc apply -f customerapi_cr.yaml
kubectl apply -f customerapi_cr.yaml - Run the following command to check the status of the integration runtime and verify that it is
running:
oc get integrationruntimes -n namespace
kubectl get integrationruntimes -n namespace
You should also be able to view this integration runtime in your App Connect Dashboard instance.
Note: If you are working in a Kubernetes environment other than IBM Cloud Kubernetes Service, ensure that you create an ingress definition after you create this instance, to make its internal service publicly available. For more information, see Manually creating ingress definitions for external access to your IBM App Connect instances in Kubernetes environments. If you are using IBM Cloud Kubernetes Service, you can use the spec.ingress.enabled parameter to enable ingress for this instance to automatically create the required ingress resources. For more information, see Automatically creating ingress definitions for external access to your IBM App Connect instances on IBM Cloud Kubernetes Service.
Updating the custom resource settings for an instance
If you want to change the settings of an existing integration runtime, you can edit its custom resource settings by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. For example, you might want to adjust CPU or memory requests or limits for use within the containers in the deployment.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Updating an instance from the Red Hat OpenShift web console
To update an integration runtime by using the Red Hat OpenShift web console, complete the following steps:
- From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Runtime tab.
- Locate and click the name of the integration runtime that you want to update.
- Click the YAML tab.
- Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
- Click Save to save your changes.
Updating an instance from the Red Hat OpenShift or Kubernetes CLI
To update an integration runtime from the Red Hat OpenShift or Kubernetes CLI, complete the following steps.
- From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
- From the namespace where the integration runtime is deployed, run the oc edit or kubectl edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
  oc edit integrationruntime instanceName
  kubectl edit integrationruntime instanceName
  The integration runtime CR automatically opens in the default text editor for your operating system.
- Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
- Save the YAML definition and close the text editor to apply the changes.
If preferred, you can also use the oc patch or kubectl patch command to apply a patch with some bash shell features, or use oc apply or kubectl apply with the appropriate YAML settings.
For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch or kubectl patch as follows to update the settings for an instance:
oc patch integrationruntime instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
kubectl patch integrationruntime instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
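As an illustration, an updatesettings.yaml file for such a patch contains only the fields that you want to change; for example, to scale the integration runtime to two replica pods (the value shown is illustrative):

```yaml
spec:
  replicas: 2
```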
Deleting an instance
If no longer required, you can delete an integration runtime by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Deleting an instance from the Red Hat OpenShift web console
To delete an integration runtime by using the Red Hat OpenShift web console, complete the following steps:
- From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Runtime tab.
- Locate the instance that you want to delete.
- Click the options icon to open the options menu, and then click the Delete option.
- Confirm the deletion.
Deleting an instance from the Red Hat OpenShift or Kubernetes CLI
To delete an integration runtime by using the Red Hat OpenShift or Kubernetes CLI, complete the following steps.
- From the command line, log in to your cluster by using the oc login command or the relevant command for your Kubernetes environment.
- From the namespace where the integration runtime instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
oc delete integrationruntime instanceName
kubectl delete integrationruntime instanceName
Custom resource values
The following table lists the configurable parameters and default values for the custom resource.
[] depicts an array. For example, the following notation indicates that an array of custom ports can be specified (for a service): spec.service.ports[].fieldName. When used together with spec.service.type, you can specify multiple port definitions as shown in the following example:
```yaml
spec:
  service:
    ports:
      - protocol: TCP
        name: config-abc
        nodePort: 32000
        port: 9910
        targetPort: 9920
      - protocol: SCTP
        name: config-xyz
        nodePort: 31500
        port: 9376
        targetPort: 9999
    type: NodePort
```
| Parameter | Description | Default |
|---|---|---|
|
apiVersion |
The API version that identifies which schema is used for this integration runtime. |
appconnect.ibm.com/v1beta1 |
|
kind |
The resource type. |
IntegrationRuntime |
|
metadata.name |
A unique short name by which the integration runtime can be identified. |
|
|
metadata.namespace |
The namespace (project) in which the integration runtime is deployed. The namespace in which you create an instance or object must be no more than 40 characters in length. |
|
|
spec.barURL |
Identifies the location of one or more BAR files that can be deployed to the integration runtime. Can be either of these values:
If you want to use a custom server runtime image to deploy an integration runtime, use spec.template.spec.containers.image to specify this image. |
|
|
spec.configurations[] |
An array of existing configurations that you want to apply to one or more BAR files being deployed. These configurations must be in the same namespace as the integration runtime. To specify a configuration, use the metadata.name value that was specified while creating that configuration. For information about creating configurations, see Configuration reference. To see examples of how to specify one or more values for spec.configurations, see Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift or Kubernetes CLI. |
|
|
spec.dashboardUsers.bypassGenerate |
Indicates whether to bypass the generation of users when not using the App Connect Dashboard. Valid values are true and false.
|
false |
|
spec.defaultAppName |
A name for the default application for the deployment of independent resources. |
DefaultApplication |
|
spec.defaultNetworkPolicy.enabled (Only applicable if spec.version resolves to 12.0.7.0-r2 or later) |
Indicate whether to enable the creation of a default network policy that restricts traffic to the integration runtime pods. Valid values are true and false.
For more information, see About network policy in the Red Hat OpenShift documentation, or Network Policies in the Kubernetes documentation. |
true |
|
spec.desiredRunState (Only applicable if spec.version resolves to 13.0.2.2-r1 or later) |
Specify whether to stop or start the integration runtime. The stop or start action is applied to all replica pods that are provisioned for the integration runtime to ensure that a consistent state is maintained across the pods. Valid values are as follows:
If you omit this parameter or leave its value blank, a default value of running is used. You can also stop or start the integration runtime from its tile on the Runtimes page in your App Connect Dashboard instance. For more information, see Stopping and starting a deployed integration runtime. |
running |
|
spec.flowType.designerAPIFlow |
Indicate whether to enable the runtime for API flows that are authored in App Connect Designer. Valid values are true and false. Note:
|
false |
|
spec.flowType.designerEventFlow |
Indicate whether to enable the runtime for event-driven flows that are authored in App Connect Designer. Valid values are true and false. Note:
|
false |
|
spec.flowType.toolkitFlow |
Indicate whether to enable the runtime for flows that are built in the IBM App Connect Enterprise Toolkit. Valid values are true and false. Note:
|
false |
|
spec.forceFlowBasicAuth.enabled |
Indicate whether to enable basic authentication on all HTTP input flows. Valid values are true and false. Tip: Applicable to 12.0.8.0-r2 or later: After you deploy the integration
runtime, you can alternatively enable or disable basic authentication as follows:
For more information, see Creating an integration runtime to run your BAR file resources. |
false |
|
spec.forceFlowBasicAuth.secretName |
Specify the name of a manually created secret that stores the basic authentication credentials (that is, a username and password) to use on all HTTP input flows. Alternatively, omit this setting to use an automatically generated secret. This secret is required if spec.forceFlowBasicAuth.enabled is set to true. If you do not specify a value, an automatically generated secret is used.
If you want to provide your own secret, you must create it in the namespace where the integration runtime will be deployed. You can do so from the Red Hat OpenShift web console, or from the Red Hat OpenShift or Kubernetes CLI.
|
|
|
spec.forceFlowsHTTPS.enabled |
Indicate whether to force all HTTP Input nodes and SOAP Input nodes in all deployed flows (including their usage for inbound connections to applications, REST APIs, and integration services) in the integration runtime to use Transport Layer Security (TLS). Valid values are true and false. When spec.forceFlowsHTTPS.enabled is set to true, you must also ensure that spec.restApiHTTPS.enabled is set to true. |
false |
|
spec.forceFlowsHTTPS.secretName |
Specify the name of a secret that stores a user-supplied public certificate/private key pair to use for enforcing TLS. (You can use tools such as keytool or OpenSSL to generate the certificate and key if required, but do not need to apply password protection.) This secret is required if spec.forceFlowsHTTPS.enabled is set to true. You must create the secret in the namespace where the integration
runtime will be deployed, and can do so from the Red Hat
OpenShift web
console, or from the Red Hat
OpenShift or Kubernetes CLI. Use your preferred method to create the secret. For example, you can use a Secret (YAML) resource to create the secret from the web console (by using the Import YAML icon), or you can create the secret by running a command from the required namespace.
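For illustration only, a minimal sketch of such a Secret, assuming the standard TLS secret keys (tls.crt and tls.key) that are mentioned in the note below; the secret name, namespace, and encoded values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: my-https-secret
  namespace: my-namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded public certificate>
  tls.key: <base64-encoded private key>

Or, equivalently, a command of this form (again with placeholder names and file paths):

oc create secret tls my-https-secret --cert=server.crt --key=server.key -n my-namespace
kubectl create secret tls my-https-secret --cert=server.crt --key=server.key -n my-namespace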
Note:
When you create the integration runtime, the IBM App Connect Operator checks for the certificate and key in the secret and adds them to a generated keystore that is protected with a password. The endpoint of the deployed integration is then secured with this certificate and key. If the secret can't be found in the namespace, the integration runtime will fail after 10 minutes. If you need to update the certificate and key that are stored in the secret, you can edit the Secret resource to update the tls.crt and tls.key values. When you save, the keystore is regenerated and used by the integration runtime without the need for a restart. |
|
|
spec.ingress.enabled (Only applicable in an IBM Cloud Kubernetes Service environment and if spec.version resolves to 13.0.2.1-r1 or later) |
Indicate whether to automatically create ingress resources for your deployed integration runtime. In Kubernetes environments, ingress resources are required to expose your integration runtimes to external traffic. Valid values are true and false.
|
false |
|
spec.ingress.domain (Only applicable in an IBM Cloud Kubernetes Service environment and if spec.version resolves to 13.0.2.1-r1 or later) |
If you do not want the ingress routes for the integration runtime to be constructed with the IBM-provided ingress subdomain of your IBM Cloud Kubernetes Service cluster, specify a preferred custom subdomain that is created in the cluster. This parameter is applicable only if spec.ingress.enabled is set to true.
|
|
|
spec.java.version (Only applicable if spec.version resolves to 13.0.5.0-r1 or later) |
Specify which Java Runtime Environment (JRE) version to use in the deployment and the runtime pods for the integration runtime. Limitations apply to some message flow nodes and capabilities, and to some types of security configuration when certain JRE versions are used with IBM App Connect Enterprise. Use this parameter to specify a Java version that is compatible with the content of the Toolkit integration that is deployed as an integration runtime. Applications that are deployed to an integration runtime might fail to start if the Java version is incompatible. Valid values are Example:
For more information, see the following topics in the IBM App Connect Enterprise documentation:
|
17 |
|
spec.license.accept |
An indication of whether the license should be accepted. Valid values are true and false. To install, this value must be set to true. |
false |
|
spec.license.license |
See Licensing reference for IBM App Connect Operator for the valid values. |
|
|
spec.license.use |
See Licensing reference for IBM App Connect Operator for the valid values. For more information about specifying this value on Kubernetes, see Using Cloud Pak for Integration licenses with integration runtimes in a Kubernetes environment. |
|
|
spec.logFormat |
The format used for the container logs that are output to the container's console. Valid values are basic and json. |
basic |
|
spec.metrics.disabled (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Indicate whether to disable the generation of message flow statistics, accounting data, and resource statistics for the deployed integration. Valid values are true and false.
If you are working in a Kubernetes environment, ensure that
spec.metrics.disabled is set to |
false |
|
spec.nodePortService.ports[].name (Only applicable if spec.version resolves to 13.0.3.0-r1 or later) |
The name of a port definition on a dedicated NodePort service that is created to expose the integration runtime pods to external (as well as internal) traffic. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character. If you need to expose more than one port for the NodePort service, you can use the collection of spec.nodePortService.ports[].* parameters to configure multiple port definitions as an array. When you specify the set of spec.nodePortService.ports[].* parameters, the
IBM App Connect Operator automatically creates a Kubernetes service with the defined ports and with a virtual IP address for
access. The NodePort service is created in the same namespace as the integration runtime and is
named in the format The following example shows the set of fields in a port definition:
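For illustration, a sketch with placeholder values for these fields (the port name and numbers are not significant):

spec:
  nodePortService:
    ports:
      - name: tcpip-server
        protocol: TCP
        port: 9876
        targetPort: 9876
        nodePort: 30123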
Note: The spec.nodePortService.ports[].* parameters are an enhancement to using
these options to configure port definitions:
This enhancement simplifies the process of managing external access to TCP/IP Server nodes in Toolkit flows because only the specified TCP/IP ports are exposed externally. Internal ports are not affected and no services need to be manually managed. Applicable for integration runtimes at version 13.0.3.1-r1 or later, running on App Connect Operator 12.12.0 or later:
|
|
|
spec.nodePortService.ports[].nodePort (Only applicable if spec.version resolves to 13.0.3.0-r1 or later) |
The port on which each node in the cluster listens for incoming requests for the NodePort service. The incoming traffic is routed to the corresponding pods. The port number must be in the range 30000 to 32767. Ensure that this port is not being used by another service. You can check which node ports are already in use by running the following command and then checking under the PORT(S) column in the output:
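For example, assuming you want to list services across all namespaces, you might run one of the following commands and review the PORT(S) column:

oc get services --all-namespaces
kubectl get services --all-namespaces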
You can either specify a specific port number, or omit spec.nodePortService.ports[].nodePort to automatically assign an available port in the 30000-32767 range. To find the port that is automatically assigned, run the oc describe service or kubectl describe service command, or the oc get service or kubectl get service command, against the generated NodePort service.
Applicable for integration runtimes at version 13.0.3.1-r1 or later, running on App Connect Operator 12.12.0 or later:
If you choose to automatically assign a port number (by omitting the spec.nodePortService.ports[].nodePort setting), the Operator automatically updates the integration runtime CR with a status.nodePortService.ports.nodePort setting that exposes the assigned port number. For example:
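A sketch of how the assigned port might appear in the status section (the port name and number are placeholders):

status:
  nodePortService:
    ports:
      - name: tcpip-server
        nodePort: 31845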
|
|
|
spec.nodePortService.ports[].port (Only applicable if spec.version resolves to 13.0.3.0-r1 or later) |
The port that exposes the NodePort service to pods within the cluster. |
|
|
spec.nodePortService.ports[].protocol (Only applicable if spec.version resolves to 13.0.3.0-r1 or later) |
The protocol of the port. Valid values are |
|
|
spec.nodePortService.ports[].targetPort (Only applicable if spec.version resolves to 13.0.3.0-r1 or later) |
The port on which the pods listen for connections from the NodePort service. The port number must be in the range 1 to 65535. |
|
|
spec.replicas |
The number of replica pods to run for each deployment. Increasing the number of replicas will proportionally increase the resource requirements. |
1 |
|
spec.restApiHTTPS.enabled |
Indicate whether to enable HTTPS-based REST API flows, which in turn ensures that the correct endpoint is configured for the deployed integration. Valid values are true and false.
If using a Kubernetes environment other than IBM Cloud Kubernetes Service, this setting is ignored. Instead, the protocol that is defined in the ingress definition for this integration runtime, which you will need to create later, will be used. For more information, see Manually creating ingress definitions for external access to your IBM App Connect instances in Kubernetes environments. In an IBM Cloud Kubernetes Service cluster, this setting is applicable only for a 13.0.2.1-r1 or later integration runtime if Ingress/Enabled is set to true to automatically create ingress resources for the deployed integration runtime. For more information, see Automatically creating ingress definitions for external access to your IBM App Connect instances on IBM Cloud Kubernetes Service. |
false |
|
spec.routes.disabled |
Valid values are true and false.
|
false |
|
spec.routes.domain |
The path is constructed as follows, where CRname is the metadata.name value, CRnamespace is the metadata.namespace value, and specRoutesDomainValue is the spec.routes.domain value:
After a route is created, the spec.routes.domain setting cannot be changed. Note:
On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts. |
|
|
spec.routes.metadata.annotations |
|
|
|
spec.routes.timeout |
|
|
|
spec.serverless.knativeService.enabled |
Indicate whether to enable Knative service to provide support for serverless deployments. Valid values are true and false. Set this value to true to enable a serverless Knative service deployment. Restriction: When set to
true, the following restrictions apply for the
integration runtime:
|
false |
|
spec.serverless.knativeService.imagePullSecrets.name |
The name of a secret that contains the credentials for pulling images from the registry where they are stored. For more information, see Deploying images from a private container registry. |
|
|
spec.serverless.knativeService.template.spec.containerConcurrency |
The maximum number of concurrent requests that are allowed per container of the revision. The default of 0 (zero) indicates that concurrency to the application is not limited, and the system decides the target concurrency for the autoscaler. |
0 |
|
spec.serverless.knativeService.template.spec.containers.image |
The path for the image to run in this container, including the tag. |
|
|
spec.serverless.knativeService.template.spec.containers.imagePullPolicy |
Indicate whether you want images to be pulled every time, never, or only if they're not present. Valid values are Always, Never, and IfNotPresent. |
IfNotPresent |
|
spec.serverless.knativeService.template.spec.containers.name |
The name of the container to be configured.
|
|
|
spec.serverless.knativeService.template.spec.priority (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running. spec.serverless.knativeService.template.spec.priority specifies an integer value, which various system components use to identify the priority of the (serverless) integration runtime pod. The higher the value, the higher the priority. If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.serverless.knativeService.template.spec.priorityClassName setting to populate this field with a resolved value. |
|
|
spec.serverless.knativeService.template.spec.priorityClassName (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
A priority class name that maps to the integer value of a pod priority in spec.serverless.knativeService.template.spec.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods. Valid values are as follows:
If you do not specify a class name, the priority is set as follows:
|
|
|
spec.serverless.knativeService.template.spec.timeoutSeconds |
The maximum duration in seconds that the request routing layer will wait for a request delivered to a container to begin replying (send network traffic). If unspecified, a system default will be provided. |
|
|
spec.serverless.knativeService.template.spec.tolerations[].effect (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint or kubectl taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation. If you need to specify one or more tolerations for a serverless integration runtime pod, you can use the collection of spec.serverless.knativeService.template.spec.tolerations[].* parameters to define an array. For spec.serverless.knativeService.template.spec.tolerations[].effect,
specify the taint effect that the toleration should match. (The taint effect on a node determines
how that node reacts to a pod that is not configured with appropriate tolerations.) Leave the effect
empty to match all taint effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute.
|
|
|
spec.serverless.knativeService.template.spec.tolerations[].key (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint key that the toleration applies to. Leave the key empty and set
spec.serverless.knativeService.template.spec.tolerations[].operator to Exists to match all taint keys.
|
|
|
spec.serverless.knativeService.template.spec.tolerations[].operator (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify an operator that represents a key's relationship to the value in
spec.serverless.knativeService.template.spec.tolerations[].value. Valid
operators are Exists and Equal. |
Equal |
|
spec.serverless.knativeService.template.spec.tolerations[].tolerationSeconds (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Optionally specify a period of time in seconds that determines how long the pod stays bound to a
node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect.
By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately). |
|
|
spec.serverless.knativeService.template.spec.tolerations[].value (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint value that the toleration matches to. If the operator is Exists, leave the value empty.
|
|
|
spec.service.ports[].name |
The name of a port definition on the service (defined by spec.service.type), which is created for accessing the set of pods. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character. If you need to expose more than one port for the service, you can use the collection of spec.service.ports[].* parameters to configure multiple port definitions as an array. |
|
|
spec.service.ports[].nodePort |
The port on which each node listens for incoming requests for the service. Applicable when
spec.service.type is set to NodePort. The port number must be in the range 30000 to 32767. Ensure that this port is not being used by another service. You can check which node ports are already in use by running the oc get services or kubectl get services command and then checking under the PORT(S) column in the output.
|
|
|
spec.service.ports[].port |
The port that exposes the service to pods within the cluster. |
|
|
spec.service.ports[].protocol |
The protocol of the port. Valid values are |
|
|
spec.service.ports[].targetPort |
The port on which the pods will listen for connections from the service. The port number must be in the range 1 to 65535. |
|
|
spec.service.type |
The type of service to create for accessing the set of pods:
Tip: Instead of using the spec.service.ports[].* parameters to
modify the default
integrationRuntimeName-ir service to expose
ports externally, you can use the spec.nodePortService.ports[].* parameters to
set up port definitions for accessing the pods. When you specify
spec.nodePortService.ports[].* settings, the IBM App Connect Operator automatically creates a dedicated NodePort service with
your defined ports to expose the integration runtime pods to external traffic. |
ClusterIP |
|
spec.startupResources.* (Only applicable if spec.version resolves to 13.0.5.1-r1 or later) |
Specify CPU resources for the startup phase of the integration runtime to optimize container startup performance without increasing licensing costs. This feature allows you to allocate higher CPU resources during the container startup phase and then dynamically reduce them to lower values for the running (licensed) phase. For example, you might want to improve the initialization time for an integration runtime that contains a large number of flows. During startup, the IBM License Service does not count the container as running, so no VPC usage occurs until the pod is fully ready. You can specify the following parameters:
Restriction:
The following example allocates 2000m CPU during container creation. After the App Connect processes are initialized, the IBM App Connect Operator automatically scales down the pod to the default or
custom CPU values that are specified in
spec.template.spec.containers[].resources.limits.cpu and
spec.template.spec.containers[].resources.requests.cpu for the
For more information, see Dynamic CPU Allocation for Faster Startup in IntegrationRuntimes. |
|
|
spec.strategy.type |
The update strategy for replacing old pods on always-on deployments:
|
RollingUpdate |
|
spec.telemetry.tracing.openTelemetry.enabled |
Indicate whether to enable OpenTelemetry tracing for all message flows in a Toolkit integration that you want to deploy to the integration runtime. Valid values are true and false. Note: If you are using Instana as your Observability agent or
backend system, setting this value to true is typically the only
configuration needed for OpenTelemetry tracing.
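For example, a minimal sketch that enables tracing and otherwise relies on the documented defaults:

spec:
  telemetry:
    tracing:
      openTelemetry:
        enabled: true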
|
false |
|
spec.telemetry.tracing.openTelemetry.endpoint |
An endpoint of the OpenTelemetry agent that will receive the
OpenTelemetry span data. The default is status.hostIP:4317.
Note:
In a Cloud Pak for Integration environment with Instana configured, one Instana agent typically runs on each physical worker node in the cluster by default. In this scenario, it is advisable to leave spec.telemetry.tracing.openTelemetry.endpoint unspecified when OpenTelemetry tracing is enabled. This blank setting results in the container being configured to use the Instana agent that is on the physical worker where the container is started. (In most cases, the agent will be available locally on the worker node where App Connect is running.) If preferred, you can specify a different IP address and port for the agent (on a different physical worker node) in spec.telemetry.tracing.openTelemetry.endpoint. |
status.hostIP:4317 |
|
spec.telemetry.tracing.openTelemetry.protocol |
An OpenTelemetry Protocol (OTLP) to use to send data to your Observability system:
|
grpc |
|
spec.telemetry.tracing.openTelemetry.timeout |
Specify a connection timeout period in seconds. |
10s |
|
spec.telemetry.tracing.openTelemetry.tls.caCertificate |
For secure communication, specify the name of the secret key that contains the certificate authority (CA) public certificate. |
ca.crt |
|
spec.telemetry.tracing.openTelemetry.tls.secretName |
For secure communication, specify the name of the secret that contains the certificate authority (CA) public certificate. |
|
|
spec.template.spec.affinity |
Specify custom affinity settings that control the placement of pods on nodes. The default settings allow the pod to be placed on a supported platform and, where possible, spread the replicas across hosts. The default settings are as follows. Note that the labelSelector entries are automatically generated.
You can overwrite the default settings for spec.template.spec.affinity.nodeAffinity or spec.template.spec.affinity.podAntiAffinity with custom settings. Note: The default affinity settings are available for integration runtimes at version 12.0.12.5-r1
or later.
Custom settings are supported for nodeAffinity, podAffinity, and podAntiAffinity. For more information about spec.template.spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules in the OpenShift documentation, and Assigning Pods to Nodes in the Kubernetes documentation. |
|
|
spec.template.spec.containers[].env |
Define custom environment variables that you want to set for a specific App Connect container in the deployment by directly specifying a name and
value for each environment variable. For example, you can set a container's timezone by declaring a TZ environment variable and an appropriate value.
The spec.template.spec.containers[].env parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema. For any named container, you can set one or more environment variables in either of the following ways:
The following example specifies name/value pairs for two environment variables for a named container; you can alternatively use the valueFrom fields to reference the source of an environment variable's value, such as a ConfigMap or secret key.
For more information, see the following information in the Kubernetes documentation:
You can also set environment variables by using the spec.template.spec.containers[].envFrom parameter. |
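As an illustrative sketch, environment variables can be declared directly on a named container; the runtime container name is taken from this topic, but the variable names and values are placeholders:

spec:
  template:
    spec:
      containers:
        - name: runtime
          env:
            - name: TZ
              value: Europe/London
            - name: MY_CUSTOM_SETTING
              value: "enabled"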
|
|
spec.template.spec.containers[].envFrom (Only applicable if spec.version resolves to 12.0.12.4-r1 or later) |
Define custom environment variables that you want to set for a specific App Connect container in the deployment by referencing one or more ConfigMaps or secrets. All of the key/value pairs in a referenced ConfigMap or secret are set as environment variables for the named container. The spec.template.spec.containers[].envFrom parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema. For any named container, you can set one or more environment variables by specifying an array of ConfigMaps or secrets whose environment variables you want to inject into the container. You can optionally specify a string value to prepend to each key in a ConfigMap and also indicate whether the ConfigMap or secret must be defined. The following example shows the fields that you can complete to set environment variables for a named container.
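A sketch under the same assumptions (the container name is the runtime container; the ConfigMap, secret, and prefix names are placeholders):

spec:
  template:
    spec:
      containers:
        - name: runtime
          envFrom:
            - prefix: CONFIG_
              configMapRef:
                name: my-env-configmap
            - secretRef:
                name: my-env-secret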
For more information, see the following information in the Kubernetes documentation:
You can also set environment variables by using the spec.template.spec.containers[].env parameter. |
|
|
spec.template.spec.containers[].name |
The name of a container that is created in the pod when the integration runtime is deployed, and which you want to configure. The name that you specify must be valid for the type of integration being deployed:
For each container array, you can configure your preferred custom settings for the spec.template.spec.containers[].image, spec.template.spec.containers[].imagePullPolicy, spec.template.spec.containers[].livenessProbe.*, spec.template.spec.containers[].readinessProbe.*, spec.template.spec.containers[].resources., and spec.template.spec.containers[].startupProbe.* parameters. |
|
|
spec.template.spec.containers[].image |
The name of the custom server image to use; for example,
|
|
|
spec.template.spec.containers[].imagePullPolicy |
Indicate whether you want images to be pulled every time, never, or only if they're not present. Valid values are Always, Never, and IfNotPresent. |
IfNotPresent |
|
spec.template.spec.containers[].lifecycle.postStart.exec.command[] (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
An array of (one or more) commands to execute immediately after a container is created. The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status. For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation. |
|
|
spec.template.spec.containers[].lifecycle.postStart.httpGet.host (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the host name to connect to, to perform the HTTP request on the container pod immediately
after it starts. Defaults to the pod IP. You can alternatively set |
|
|
spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify one or more custom headers to set in the HTTP request to be performed on the container pod immediately after it starts. For each header, specify a header field name and header field value in the following format:
|
|
|
spec.template.spec.containers[].lifecycle.postStart.httpGet.path (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the path to access on the HTTP server when performing the HTTP request on the container pod immediately after it starts. |
|
|
spec.template.spec.containers[].lifecycle.postStart.httpGet.scheme (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the scheme to use for connecting to the host when performing the HTTP request on the container pod immediately after it starts.
|
HTTP |
|
spec.template.spec.containers[].lifecycle.preStop.exec.command[] (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
An array of (one or more) commands to execute inside a container before it is terminated. Use the spec.template.spec.containers[].lifecycle.preStop.* settings to configure the lifecycle of the container to allow existing transactions to complete before the pod is terminated due to an API request or a management event (such as failure of a liveness or startup probe, or preemption). This allows rolling updates to occur without breaking transactions (unless they are long running). The countdown for the pod's termination grace period begins before the preStop hook is executed. The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status. For example, this default preStop setting ensures that rolling updates do not result in lost messages on the runtime:
For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation. |
|
|
spec.template.spec.containers[].lifecycle.preStop.httpGet.host (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the host name to connect to, to perform the HTTP request on the container pod before its
termination. Defaults to the pod IP. You can alternatively set |
|
|
spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify one or more custom headers to set in the HTTP request to be performed on the container pod before its termination. For each header, specify a header field name and header field value in the following format:
|
|
|
spec.template.spec.containers[].lifecycle.preStop.httpGet.path (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the path to access on the HTTP server when performing the HTTP request on the container pod before its termination. |
|
|
spec.template.spec.containers[].lifecycle.preStop.httpGet.scheme (Only applicable if spec.version resolves to 12.0.7.0-r5 or later) |
Specify the scheme to use for connecting to the host when performing the HTTP request on the container pod before its termination. |
HTTP |
|
spec.template.spec.containers[].livenessProbe.failureThreshold |
The number of times the liveness probe can fail before taking an action to restart the container. (The liveness probe checks whether the container is still running or needs to be restarted.) |
1 |
|
spec.template.spec.containers[].livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before performing the first probe to check whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
|
spec.template.spec.containers[].livenessProbe.periodSeconds |
How often (in seconds) to perform a liveness probe to check whether the container is still running. |
10 |
|
spec.template.spec.containers[].livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
|
spec.template.spec.containers[].readinessProbe.failureThreshold |
The number of times the readiness probe can fail before taking an action to mark the pod as Unready. (The readiness probe checks whether the container is ready to accept traffic.) |
1 |
|
spec.template.spec.containers[].readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before performing the first probe to check whether the container is ready. |
10 |
|
spec.template.spec.containers[].readinessProbe.periodSeconds |
How often (in seconds) to perform a readiness probe to check whether the container is ready. |
5 |
|
spec.template.spec.containers[].readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
|
spec.template.spec.containers[].resources.limits.cpu |
The upper limit of CPU cores that are allocated for running the container. Specify an integer, a fractional (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core). For more information, see Resource Management for Pods and Containers in the Kubernetes documentation. When you create an integration runtime, no CPU limits are set on the resource if
spec.license.use is set to
|
1 |
|
spec.template.spec.containers[].resources.limits.ephemeral-storage |
The upper limit of local ephemeral storage (in bytes) that the container can consume in a pod. If a node fails, the data in ephemeral storage can be lost. Specify an integer or a fixed-point number with a suffix of E, P, T, G, M, k, or use the power-of-two equivalent Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Local ephemeral storage in the Kubernetes documentation. |
100Mi |
|
spec.template.spec.containers[].resources.limits.memory |
The memory upper limit (in bytes) that is allocated for running the container. Specify an integer with a suffix of E, P, T, G, M, K, or a power-of-two equivalent of Ei, Pi, Ti, Gi, Mi, Ki. Applicable for the runtime container only:
|
1Gi |
|
spec.template.spec.containers[].resources.requests.cpu |
The minimum number of CPU cores that are allocated for running the container. Specify an integer, a fractional (for example, 0.5), or a millicore value (for example, 100m, equivalent to 0.1 core). For more information, see Resource Management for Pods and Containers in the Kubernetes documentation. |
300m |
|
spec.template.spec.containers[].resources.requests.ephemeral-storage |
The minimum amount of local ephemeral storage (in bytes) that the container can consume in a pod. If a node fails, the data in ephemeral storage can be lost. Specify an integer or a fixed-point number with a suffix of E, P, T, G, M, k, or use the power-of-two equivalent Ei, Pi, Ti, Gi, Mi, Ki. For more information, see Local ephemeral storage in the Kubernetes documentation. |
50Mi |
|
spec.template.spec.containers[].resources.requests.memory |
The minimum memory (in bytes) that is allocated for running the container. Specify an integer with a suffix of E, P, T, G, M, K, or a power-of-two equivalent of Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
|
spec.template.spec.containers[].startupProbe.failureThreshold |
The number of times the startup probe can fail before taking action. (The startup probe checks whether the application in the container has started. Liveness and readiness checks are disabled until the startup probe has succeeded.) Note: If using startup probes, ensure that
spec.template.spec.containers.livenessProbe.initialDelaySeconds and
spec.template.spec.containers.readinessProbe.initialDelaySeconds are
unset.
For more information about startup probes, see Protect slow starting containers with startup probes in the Kubernetes documentation. |
120 |
|
spec.template.spec.containers[].startupProbe.initialDelaySeconds |
How long to wait (in seconds) before performing the first probe to check whether the runtime application has started. Increase this value if your system cannot start the application in the default time period. |
0 |
|
spec.template.spec.containers[].startupProbe.periodSeconds |
How often (in seconds) to perform a startup probe to check whether the runtime application has started. |
5 |
|
spec.template.spec.containers[].volumeMounts |
Details of where to mount one or more named volumes into a container. Follows the Volume Mount specification at https://pkg.go.dev/k8s.io/api@v0.20.3/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation. The following volume mounts are blocked:
Specify custom settings for your volume mounts as an array. Use spec.template.spec.containers[].volumeMounts with
spec.template.spec.volumes. The
spec.template.spec.containers[].volumeMounts.name value must match the name of
a volume that is specified in spec.template.spec.volumes.name, as shown in the
following example.
The following example illustrates how to add an empty directory (as a volume) to the
/cache folder in an integration runtime's pod.
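A sketch of that example, used together with spec.template.spec.volumes (the volume name is a placeholder):

spec:
  template:
    spec:
      containers:
        - name: runtime
          volumeMounts:
            - name: cache-volume
              mountPath: /cache
      volumes:
        - name: cache-volume
          emptyDir: {}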
|
|
|
spec.template.spec.hostAliases.hostnames[] |
One or more hostnames that you want to map to an IP address (as a host alias), to facilitate host name resolution. Use with spec.template.spec.hostAliases.ip. Specify the hostname without an
Each host alias is added as an entry to a pod's /etc/hosts file. Example settings:
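For illustration, shown as a single host alias entry (the IP address and hostnames are placeholders):

spec:
  template:
    spec:
      hostAliases:
        - ip: 192.0.2.10
          hostnames:
            - foo.example.com
            - bar.example.com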
For more information about host aliases and the hosts file, see the Kubernetes documentation. |
|
|
spec.template.spec.hostAliases.ip |
An IP address that you want to map to one or more hostnames (as a host alias), to facilitate host name resolution. Use with spec.template.spec.hostAliases.hostnames[]. Each host alias is added as an entry to a pod's /etc/hosts file. |
|
|
spec.template.spec.imagePullSecrets.name |
The secret used for pulling images. |
|
|
spec.template.spec.metadata.annotations |
Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created
during deployment. Specify each annotation as a key/value pair in key: value format.
For example, you can add a spec.template.spec.metadata.annotations.restart value to trigger a rolling restart of your integration runtime pods, as described in Restarting integration server or integration runtime pods. The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value. |
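For illustration (the annotation keys and values are arbitrary placeholders):

spec:
  template:
    spec:
      metadata:
        annotations:
          restart: "2025-01-01T00:00:00Z"
          owner: integration-team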
|
|
spec.template.spec.metadata.labels |
Specify one or more custom labels (as classification metadata) to apply to each pod that is created
during deployment. Specify each label as a key/value pair in key: value format.
The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value. |
|
|
spec.template.spec.nodeSelector |
Specify a set of key/value pairs that must be matched against the node labels to decide whether App Connect pods can be scheduled on that node. Only nodes matching all of these key/value pairs in their labels will be selected for scheduling App Connect pods. For more information, see nodeSelector and Assign Pods to Nodes in the Kubernetes documentation. Example:
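For illustration (the label key and value are placeholders):

spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd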
Restriction: Only applicable to always-on deployments; not supported for Knative.
|
|
|
spec.template.spec.podSecurityContext.fsGroup (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The spec.template.spec.podSecurityContext settings define privilege and
access control settings for an integration runtime pod's containers and volumes when applicable.
spec.template.spec.podSecurityContext.fsGroup specifies a special supplemental group that applies to all container processes in the integration runtime pod. Specify an integer value; for example, 2000. |
|
|
spec.template.spec.podSecurityContext.fsGroupChangePolicy (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Defines the behavior for changing the ownership and permission of a volume before being exposed
inside the integration runtime pod. This parameter applies only to volume types that support
fsGroup-based ownership (and permissions), and has no effect on ephemeral volume types such as secret, configMap, and emptyDir.
Valid values are OnRootMismatch and Always. If not specified, Always is used. For more information, see Configure volume permission and ownership change policy for Pods. |
|
|
spec.template.spec.podSecurityContext.runAsGroup (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Specifies that all processes in any containers of the integration runtime pod run with the primary group ID (GID). The primary GID of the containers is set to root (0) if this parameter is omitted. Specify the GID as an integer; for example, 3000. |
|
|
spec.template.spec.podSecurityContext.runAsNonRoot (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Indicates whether the container must run as a non-root user. Valid values are
true and false.
|
|
|
spec.template.spec.podSecurityContext.runAsUser (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Specifies that all processes run with this user ID (UID) for any containers in the integration runtime pod. If unspecified, the value defaults to the user that is specified in the image metadata. Specify the UID as an integer; for example, 1000. |
|
|
spec.template.spec.podSecurityContext.seccompProfile.localhostProfile (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The spec.template.spec.podSecurityContext.seccompProfile parameters specify the Seccomp options that are used by the containers in the integration runtime pod. A preconfigured profile that is defined in a file on the node. The profile must be a descending path, relative to the kubelet's configured Seccomp profile location. Required only if spec.template.spec.podSecurityContext.seccompProfile.type is set to Localhost. For more information, see Set the Seccomp Profile for a Container. |
|
|
spec.template.spec.podSecurityContext.seccompProfile.type (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The type of Seccomp profile to apply. Valid options are RuntimeDefault, Unconfined, and Localhost.
For more information, see Set the Seccomp Profile for a Container. |
|
|
spec.template.spec.podSecurityContext.seLinuxOptions.level (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The spec.template.spec.podSecurityContext.seLinuxOptions parameters define the SELinux context to be applied to all containers. If unspecified, the container runtime allocates a random SELinux context for each container. For more information, see Assign SELinux labels to a Container. An SELinux level label that applies to the container. |
|
|
spec.template.spec.podSecurityContext.seLinuxOptions.role (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
An SELinux role label that applies to the container. |
|
|
spec.template.spec.podSecurityContext.seLinuxOptions.type (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
An SELinux type label that applies to the container. |
|
|
spec.template.spec.podSecurityContext.seLinuxOptions.user (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
An SELinux user label that applies to the container. |
|
|
spec.template.spec.podSecurityContext.supplementalGroups (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Sets the supplementalGroups on the pod to enable a filesystem to be mounted with the correct permissions so that it can be processed by the runtime. Specify an array of groups to apply to the first process that is run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the UID of the container process. If unspecified, no additional groups are added to any container. Group memberships that are defined in the container image for the UID of the container process are still effective, even if they are not included in this array. Example:
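For illustration (the group IDs are placeholders):

spec:
  template:
    spec:
      podSecurityContext:
        supplementalGroups:
          - 2000
          - 3000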
|
|
|
spec.template.spec.podSecurityContext.sysctls (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
An array of namespaced sysctl kernel parameters that can be applied to all containers in the integration runtime pod. (sysctl is a Linux-specific command-line tool that is used to configure kernel parameters.) Specify the properties as name/value pairs. Pods with unsupported sysctl values (by the container runtime) might fail to launch. Example:
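For illustration, using a namespaced kernel parameter from the Kubernetes documentation (the value is a placeholder):

spec:
  template:
    spec:
      podSecurityContext:
        sysctls:
          - name: kernel.shm_rmid_forced
            value: "1"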
For more information, see Using sysctls in a Kubernetes Cluster. |
|
|
spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpec (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The spec.template.spec.podSecurityContext.windowsOptions parameters define Windows-specific settings to apply to all containers. Defines where the GMSA admission webhook inlines the contents of the GMSA credential spec that is specified in spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpecName. |
|
|
spec.template.spec.podSecurityContext.windowsOptions.gmsaCredentialSpecName (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The name of the GMSA credential spec to use. |
|
|
spec.template.spec.podSecurityContext.windowsOptions.hostProcess (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
Indicates whether a container should be run as a HostProcess container. Valid values are true and false. All of a pod's containers must have the same effective HostProcess value; a mix of HostProcess containers and non-HostProcess containers is not allowed. |
|
|
spec.template.spec.podSecurityContext.windowsOptions.runAsUserName (Only applicable if spec.version resolves to 12.0.10.0-r2 or later) |
The username in Windows to run the entry point of the container process. If unspecified, defaults to the user that is specified in the image metadata. Specify the username as a string. |
|
|
spec.template.spec.priority (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running. spec.template.spec.priority specifies an integer value, which various system components use to identify the priority of the (always-on) integration runtime pod. The higher the value, the higher the priority. If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.template.spec.priorityClassName setting to populate this field with a resolved value. |
|
|
spec.template.spec.priorityClassName (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
A priority class name that maps to the integer value of a pod priority in spec.template.spec.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods. Valid values are as follows:
If you do not specify a class name, the priority is set as follows:
|
|
|
spec.template.spec.tolerations[].effect (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint or kubectl taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation. If you need to specify one or more tolerations for an always-on integration runtime pod, you can use the collection of spec.template.spec.tolerations[].* parameters to define an array. For spec.template.spec.tolerations[].effect, specify the taint effect that
the toleration should match. (The taint effect on a node determines how that node reacts to a pod
that is not configured with appropriate tolerations.) Leave the effect empty to match all taint
effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute. |
|
|
spec.template.spec.tolerations[].key (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint key that the toleration applies to. Leave the key empty and set
spec.template.spec.tolerations[].operator to Exists to match all taint keys. |
|
|
spec.template.spec.tolerations[].operator (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify an operator that represents a key's relationship to the value in
spec.template.spec.tolerations[].value. Valid operators are Exists and Equal.
|
Equal |
|
spec.template.spec.tolerations[].tolerationSeconds (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Optionally specify a period of time in seconds that determines how long the pod stays bound to a
node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect.
By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately). |
|
|
spec.template.spec.tolerations[].value (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint value that the toleration matches to. If the operator is Exists, leave the value empty.
|
|
|
spec.template.spec.topologySpreadConstraint[].* (Only applicable if spec.version resolves to 13.0.4.1-r1 or later) |
Specify custom topology spread constraints that control how to distribute or spread pods across topological domains such as zones, nodes, or regions in a multi-zone or multi-node cluster. You can use these settings to configure high availability and fault tolerance by spreading workloads evenly across domains. You can specify an array of topology spread constraints that allow you to define the following settings:
The following example shows the structure of the supported topology spread constraints parameters:
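For illustration, a sketch that assumes the fields mirror the Kubernetes pod specification for topology spread constraints (maxSkew, topologyKey, whenUnsatisfiable, and an optional labelSelector); the values shown are placeholders:

spec:
  template:
    spec:
      topologySpreadConstraint:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway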
For more information about these settings and how they work together, see Pod Topology Spread Constraints in the Kubernetes documentation and Controlling pod placement by using pod topology spread constraints in the Red Hat OpenShift documentation. For a tutorial on how to configure topology spread constraints for Dashboard and integration runtime pods, see Configuring topology spread constraints for App Connect Dashboard and integration runtime pods. |
|
|
spec.template.spec.volumes |
Details of one or more named volumes that can be provided to the pod, to use for persisting data. Each volume must be configured with the appropriate permissions to allow the integration runtime to read or write to it as required. Follows the Volume specification at https://pkg.go.dev/k8s.io/api/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation. Specify custom settings for your volume types as an array. Use spec.template.spec.volumes with spec.template.spec.containers[].volumeMounts. The following example illustrates how to add an empty directory (as a volume) to the
/cache folder in an integration runtime's pod. (See the sketch in the spec.template.spec.containers[].volumeMounts row.)
|
|
|
spec.version |
The product version that the integration runtime is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel. If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster. To view the available values that you can choose from and the licensing requirements, see spec.version values and Licensing reference for IBM App Connect Operator. |
13.0 |
Load balancing
When you deploy an integration runtime, routes are created by default in Red Hat OpenShift to externally expose a service that identifies the set of pods where the integration runs. Load balancing is applied when incoming traffic is forwarded to replica pods, and the routing algorithm used depends on the type of security you've configured for your flows:
- http flows: These flows use a non-SSL route that has been modified to use the round-robin approach in which each replica is sent a message in turn.
- https flows: These flows use an SSL-passthrough route that has been modified to use the round-robin approach in which each replica is sent a message in turn.
To change the load balancing configuration that a route uses, you can add an appropriate
annotation to the route resource. For example, the following CR setting will switch
the route to use the round-robin algorithm:
spec:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
For more information about the available annotation options, see Route-specific annotations in the Red Hat OpenShift documentation.
Flow analysis
In earlier versions of the IBM App Connect Operator, when you create an integration runtime at version 12.0.12.5-r1 or earlier, the Operator loads the BAR files that are referenced in the spec.barURL custom resource setting and then analyzes the files to identify what type of flows are configured. The Operator analyzes only BAR files that are stored in an App Connect Dashboard instance or in remote repositories, and does not analyze any BAR files that are baked into a container image. When the analysis is complete, the runtime pod is configured as follows with the appropriate number of containers that are needed to process the flows.
| Type of flow | Number of containers required | Container names |
|---|---|---|
| Toolkit flow | One | runtime |
| Designer API flow | Two | runtime and designerflows |
| Designer Event flow | Four | runtime, designerflows, designereventflows, and proxy |
| Flows that include a Batch node (Applicable only for integration runtimes at version 13.0.2.2-r1 or later) | Four | runtime, designerflows, designereventflows, and proxy |
IBM App Connect Enterprise is being updated to support the App Connect Designer connectors, which are referred to as Discovery connectors in IBM App Connect Enterprise. As a result, the way in which the BAR file analysis works has been updated for integration runtimes at version 13.0.1.0-r1 or later. Instead of just analyzing the type of flows, the Operator also analyzes all the individual nodes in all the flows across all BAR files (both IBM App Connect Enterprise and Designer).
- If all the nodes found are supported by IBM App Connect Enterprise, the IBM App Connect Operator creates only the single runtime container.
- If any nodes are unsupported, the Operator reverts to the previous behavior and creates the appropriate containers.
- Batch technology isn't currently supported in IBM App Connect Enterprise so if the analysis detects a Designer flow with a Batch node, the Operator reverts to the previous behavior (even if all nodes are supported by IBM App Connect Enterprise).
The following Designer connectors are now supported in IBM App Connect Enterprise:
- Amazon CloudWatch request node
- Amazon DynamoDB request node
- Amazon EC2 request node
- Amazon EventBridge request node
- Amazon EventBridge event node
- Amazon Kinesis request node
- Amazon Redshift request node
- Amazon Redshift event node
- Amazon RDS request node
- Amazon S3 request node
- Amazon SES request node
- Amazon SNS request node
- Amazon SQS request node
- Anaplan request node
- Asana event node
- Asana request node
- AWS Lambda request node
- BambooHR request node
- Box request node
- Businessmap request node
- Businessmap event node
- Calendly request node
- ClickSend request node
- ClickSend event node
- CMIS request node
- CMIS event node
- Confluence request node
- Connector Development Kit input nodes
- Couchbase request node
- Coupa request node
- Coupa event node
- Crystal Ball request node
- CSV parse node
- DocuSign request node
- Dropbox request node
- Email request node
- Email event node
- Eventbrite request node
- Eventbrite event node
- Expensify request node
- Factorial HR request node
- For Each Node
- For Each NodeInput
- For Each NodeOutput
- Front request node
- Front event node
- GitHub request node
- GitHub event node
- GitLab request node
- GitLab event node
- Gmail request node
- Gmail event node
- Google Analytics request node
- Google Calendar request node
- Google Calendar event node
- Google Chat request node
- Google Cloud BigQuery request node
- Google Cloud Pub/Sub request node
- Google Cloud Pub/Sub event node
- Google Cloud Storage request node
- Google Contacts request node
- Google Drive request node
- Google Gemini request node
- Google Groups request node
- Google Sheets request node
- Google Sheets event node
- Google Tasks request node
- Google Translate request node
- Greenhouse event node
- Greenhouse request node
- HubSpot CRM request node
- HubSpot Marketing request node
- Hunter request node
- IBM Aspera request node
- IBM Cloudant® request node
- IBM Cloud Object Storage S3 request node
- IBM Db2® request node
- IBM Db2 event node
- IBM Engineering Workflow Management request node
- IBM Engineering Workflow Management event node
- IBM FileNet® Content Manager request node
- IBM Food Trust request node
- IBM Maximo® request node
- IBM Maximo event node
- IBM OpenPages® with Watson request node
- IBM OpenPages with Watson event node
- IBM Planning Analytics request node
- IBM Sterling™ Intelligent Promising request node
- IBM Sterling Order Management System request node
- IBM Targetprocess request node
- IBM Targetprocess event node
- IBM Watson Discovery request node
- IBM watsonx.ai request node
- IBM z/OS Connect request node
- If node
- Infobip request node
- Insightly request node
- Insightly event node
- JDBC request node
- Jenkins request node
- Jira request node
- Jira event node
- JSON parse node
- JSON set variable node
- LDAP request node
- LDAP event node
- Magento request node
- Magento event node
- MailChimp request node
- MailChimp event node
- Marketo request node
- Marketo event node
- Microsoft Active Directory request node
- Microsoft Active Directory event node
- Microsoft Azure Blob Storage request node
- Microsoft Azure Cosmos DB request node
- Microsoft Azure Event Hubs request node
- Microsoft Azure Event Hubs event node
- Microsoft Azure Service Bus request node
- Microsoft Azure Service Bus event node
- Microsoft Dynamics 365 for Finance and Operations request node
- Microsoft Dynamics 365 for Sales request node
- Microsoft Dynamics 365 for Sales event node
- Microsoft Entra ID request node
- Microsoft Entra ID event node
- Microsoft Excel Online request node
- Microsoft Excel Online event node
- Microsoft Exchange request node
- Microsoft Exchange event node
- Microsoft OneDrive for Business request node
- Microsoft OneNote request node
- Microsoft Power® BI request node
- Microsoft SharePoint request node
- Microsoft SharePoint event node
- Microsoft SQL Server request node
- Microsoft SQL Server event node
- Microsoft Teams request node
- Microsoft Teams event node
- Microsoft To Do request node
- Microsoft Viva Engage request node
- Microsoft Viva Engage event node
- Milvus request node
- monday.com request node
- monday.com event node
- MySQL request node
- MySQL event node
- Oracle Database request node
- Oracle Database event node
- Oracle E-Business Suite request node
- Oracle Human Capital Management request node
- Oracle Human Capital Management event node
- Pinecone Vector Database request node
- PostgreSQL request node
- PostgreSQL event node
- Redis request node
- REST request node
- Salesforce request node
- Salesforce Account Engagement request node
- Salesforce Account Engagement event node
- Salesforce Commerce Cloud Digital Data request node
- Salesforce Commerce Cloud Digital Data event node
- Salesforce Marketing Cloud request node
- SAP Ariba request node
- SAP (via OData) request node
- SAP SuccessFactors request node
- ServiceNow request node
- ServiceNow event node
- Shopify request node
- Shopify event node
- Slack request node
- Slack event node
- Snowflake request node
- Splunk request node
- Square request node
- SurveyMonkey request node
- SurveyMonkey event node
- The Weather Company request node
- Toggl Track request node
- Toggl Track event node
- Trello request node
- Twilio request node
- UKG request node
- Vespa request node
- WordPress request node
- Workday request node
- Wrike request node
- Wrike event node
- Wufoo request node
- Wufoo event node
- XML parse node
- Yapily request node
- Zendesk Service request node
- Zendesk Service event node
- Zoho Books request node
- Zoho Books event node
- Zoho CRM request node
- Zoho CRM event node
- Zoho Inventory request node
- Zoho Recruit request node
- Zoho Recruit event node
Supported platforms
- Red Hat OpenShift: Supports the amd64, s390x, and ppc64le CPU architectures. For more information, see Supported platforms.
- Kubernetes environment: Supports only the amd64 CPU architecture. For more information, see Supported operating environment for Kubernetes.
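Before you deploy, you can confirm which CPU architecture each node in your cluster reports. The following is a generic sketch; use oc on Red Hat OpenShift or kubectl in a Kubernetes environment:
# Show the architecture (for example, amd64, s390x, or ppc64le) reported by each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture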