App Connect Integration Server reference
Use this reference to create, update, or delete integration servers by using the Red Hat® OpenShift® web console or CLI, or the CLI for a Kubernetes environment.
- Introduction
- Prerequisites
- Red Hat OpenShift SecurityContextConstraints requirements
- Resources required
- Mechanisms for providing BAR files to an integration server
- Creating an instance
- Updating the custom resource settings for an instance
- Deleting an instance
- Custom resource values
- Load balancing
- Supported platforms
Introduction
The App Connect Integration Server API enables you to create integration servers, which run integrations that were created in App Connect Designer, IBM® App Connect on IBM Cloud, or IBM App Connect Enterprise Toolkit.
Prerequisites
- If using Red Hat OpenShift, Red Hat OpenShift Container Platform 4.10 or 4.12 is required.
Note: For information about the custom resource (or operand) versions that are supported for each Red Hat OpenShift version, see spec.version values.
- If using a Kubernetes environment, Kubernetes 1.23.x is required.
- The IBM App Connect Operator must be installed in your cluster either through an independent deployment or an installation of IBM Cloud Pak for Integration. For further details, see the following information:
- Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
- If you want to use the command-line interface (CLI) to log in to your cluster and run commands to create and manage your IBM App Connect instances and other resources, ensure that the required CLI tools are installed on your computer. For more information, see Installing tools for managing the cluster, containers, and other resources (on Red Hat OpenShift), or Installing tools for managing the cluster, containers, and other resources (on Kubernetes).
- You must have a Kubernetes pull secret called ibm-entitlement-key in the namespace before creating the instance. For more information, see Obtaining and applying your IBM Entitled Registry entitlement key. (An example command for creating this secret is shown after this list.)
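For reference, a pull secret of this kind is typically created with the oc create secret docker-registry command (or kubectl create secret docker-registry in a Kubernetes environment). The following is a minimal sketch that assumes the usual IBM Entitled Registry server (cp.icr.io) and user name (cp); substitute your own entitlement key and namespace:
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=<entitlement-key> \
  -n <namespace>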
Red Hat OpenShift SecurityContextConstraints requirements
IBM App Connect runs under the default restricted SecurityContextConstraints.
Resources required
Minimum recommended requirements:
- Toolkit integration for compiled BAR files:
- CPU: 0.1 Cores
- Memory: 0.35 GB
- Toolkit integration for uncompiled BAR files:
- CPU: 0.3 Cores
- Memory: 0.35 GB
- Designer-only integration or hybrid integration:
- CPU: 1.7 Cores
- Memory: 1.77 GB
For information about how to configure these values, see Custom resource values.
If you are building and running your own containers, you can choose to allocate less than 0.1 Cores for Toolkit integrations if necessary. However, this decrease in CPU for the integration server container might impact the startup times and performance of your flow. If you begin to experience issues that are related to performance, or with starting and running your integration server, check whether the problem persists by first increasing the CPU to at least 0.1 Cores before contacting IBM support.
If you want to increase the download speed of an integration server's trace log files, you can choose to allocate more CPU. For information about enabling, disabling, downloading, and clearing trace, see Trace reference and Enabling and managing trace for a deployed integration server.
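To relate these minimum figures to the custom resource, the following sketch shows runtime container requests and limits for a Toolkit integration pinned to roughly the recommended minimum of 0.1 cores and 0.35 GB of memory (the 368Mi figure matches the CR examples later in this reference, and the parameter path is spec.pod.containers.runtime.resources, described under Custom resource values; adjust the values to suit your workload):
spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: 100m
            memory: 368Mi
          limits:
            cpu: 100m
            memory: 368Mi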
Mechanisms for providing BAR files to an integration server
Integration servers and integration runtimes require two types of resources: BAR files that contain development resources, and configuration files (or objects) for setting up the integration servers or integration runtimes. When you create an integration server or integration runtime, you are required to specify one or more BAR files that contain the development resources of the App Connect Designer or IBM App Connect Enterprise Toolkit integrations that you want to deploy.
A number of mechanisms are available for providing these BAR files to integration servers and integration runtimes. Choose the mechanism that meets your requirements.
Mechanism | Description | BAR files per integration server or integration runtime |
---|---|---|
Content server |
When you use the App Connect Dashboard to upload or import BAR files for deployment to integration servers or integration runtimes, the BAR files are stored in a content server that is associated with the App Connect Dashboard instance. The content server is created as a container in the App Connect Dashboard deployment and can either store uploaded (or imported) BAR files in a volume in the container’s file system, or store them within a bucket in a simple storage service that provides object storage through a web interface. The location of a BAR file in the content server is generated as a BAR URL when a BAR file is uploaded or imported to the Dashboard. This location is specified by using the Bar URL field or spec.barURL parameter. While creating an integration server or integration runtime, you can choose only one BAR file to deploy from the content server and must reference its BAR URL in the content server. The integration server or runtime then uses this BAR URL to download the BAR file on startup, and processes the applications appropriately. If you are creating an integration server or integration runtime from the Dashboard, and use the Integrations view to specify a single BAR file to deploy, the location of this file in the content server will be automatically set in the Bar URL field or spec.barURL parameter in the Properties (or Server) view. For more information, see Creating an integration server to run your BAR file resources (for Designer integrations), Creating an integration server to run IBM App Connect Enterprise Toolkit integrations, and Creating an integration runtime to run your BAR file resources. If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, and want to deploy a BAR file from the content server, you must obtain the BAR file location from the "BAR files" page (which presents a view of the content server) in the Dashboard. You can do so by using Display BAR URL in the BAR file's options menu to view and copy the supplied URL. You can then paste this value in spec.barURL in the integration server or integration runtime custom resource (CR). For more information, see Integration Server reference: Creating an instance and Integration Runtime reference: Creating an instance. The location of a BAR file in the content server is typically generated in the following format:
https://<content-server-host>:3443/v1/directories/<BAR file name>?<generated ID>
For example: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
|
1 |
External repository (Applicable only if spec.version resolves to 12.0.1.0-r1 or later) |
While creating an integration server or integration runtime, you can choose to deploy multiple BAR files, which are stored in an external HTTPS repository system, to the integration server or integration runtime. You might find this option useful if you have set up continuous integration and continuous delivery (CI/CD) pipelines to automate and manage your DevOps processes, and are building and storing BAR files in a repository system such as JFrog Artifactory. This option enables you to directly reference one or more BAR files in your integration server or integration runtime CR without the need to manually upload or import the BAR files to the content server in the App Connect Dashboard or build a custom image. You will need to provide basic (or alternative) authentication credentials for connecting to the external endpoint where the BAR files are stored, and can do so by creating a configuration object of type BarAuth. If you are creating an integration server or integration runtime from the Dashboard, you can use the Configuration view to create (and select) a configuration object of type BarAuth that defines the required credentials. You can then use the Properties (or Server) view to specify the endpoint locations of one or more BAR files in the Bar URL field or as the spec.barURL value. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set spec.createDashboardUsers to true. If you are creating an integration server or integration runtime from the Red Hat OpenShift web console or CLI, or the Kubernetes CLI, you must create a configuration object of type BarAuth that defines the required credentials, as described in Configuration reference and BarAuth type. When you create the integration server or integration runtime CR, you must specify the name of the configuration object in spec.configurations and then specify the endpoint locations of one or more BAR files in spec.barURL. If you want to be able to use the App Connect Dashboard to view your integration server or integration runtime, also set spec.createDashboardUsers to true. You can specify multiple BAR files as a comma-separated list of URLs in spec.barURL (see the sketch after this table). Tip: If you are using GitHub as an external repository, you must specify the raw URL in the Bar URL field or in spec.barURL; for example: https://raw.github.ibm.com/somedir/main/bars/getHostAPI.bar, https://github.com/johndoe/somedir/raw/main/getHostAPI.bar, or https://raw.githubusercontent.com/myusername/myrepo/main/My%20API.bar. Some considerations apply if deploying multiple BAR files. |
Multiple |
Custom image |
You can build a custom server runtime image that contains all the configuration for the integration server or integration runtime, including all the BAR files or applications that are required, and then use this image to deploy an integration server or integration runtime. When you create the integration server or integration runtime CR, you must reference this image by using the spec.pod.containers.runtime.image parameter. This image must be built from the version that is specified as the spec.version value in the CR. Channels are not supported when custom images are used. |
Multiple |
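To make the external repository mechanism more concrete, the following sketch shows the relevant fragment of an IntegrationServer CR. The two BAR file URLs and the BarAuth configuration name (github-barauth) are hypothetical placeholders; the repository host is taken from the GitHub example above, and the remaining CR settings are omitted for brevity:
spec:
  barURL: 'https://raw.githubusercontent.com/myusername/myrepo/main/CustomerAPI.bar,https://raw.githubusercontent.com/myusername/myrepo/main/OrdersAPI.bar'
  configurations:
    - github-barauth
  createDashboardUsers: true
The github-barauth configuration object (type BarAuth) supplies the credentials for connecting to the repository, as described in the table above.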
Creating an instance
You can create an integration server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.
The supplied App Connect Enterprise base image includes an IBM MQ client for connecting to remote queue managers that are within the same Red Hat OpenShift cluster as your deployed integration servers, or in an external system such as an appliance.
Before you begin
- Ensure that the Prerequisites are met.
- Prepare the BAR files that you want to deploy to the integration server. For more information, see Mechanisms for providing BAR files to an integration server.
- Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.
Namespace restriction for an instance, server, configuration, or trace: The namespace in which you create an instance or object must be no more than 40 characters in length.
Creating an instance from the Red Hat OpenShift web console
When you create an integration server, you can define which configurations you want to apply to the integration server.
- If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the Red Hat OpenShift web console to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift web console.
- If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.
To create an integration server by using the Red Hat OpenShift web console, complete the following steps:
- Applicable to IBM Cloud Pak for Integration only:
- If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
- From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
- Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Server tab. Any previously created integration servers are displayed in a table.
- Click Create IntegrationServer. Switch to the YAML view if necessary for a finer level of control over your installation settings. The minimum custom resource (CR) definition that is required to create an integration server is displayed.
From the Details tab on the Operator details page, you can also locate the Integration Server tile and click Create instance to specify installation settings for the integration server.
- Update the content of the YAML editor with the parameters and values that you require for this integration server.
- To view the full set of parameters and values available, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify a comma-separated list of one or more BAR files in an external endpoint. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration server.
- You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
spec:
  configurations:
    - odbc-ini-data
or
spec:
  configurations:
    - odbc-ini-data
    - accountsdata
or
spec: configurations: ["odbc-ini-data", "accountsdata"]
For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
Note: If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:
- From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
oc get switchserver switchName
- Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
- Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
configurations:
  - mark-101-switch-agentx
A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.
- To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form.
- Click Create to start the deployment. An entry for the integration server is shown in the IntegrationServers table, initially with a Pending status.
- Click the integration server name to view information about its definition and current status. On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator.
When the deployment is complete, the status is shown as Ready in the IntegrationServers table.
Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI
When you create an integration server, you can define which configurations you want to apply to the integration server.
- If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the CLI to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift CLI.
- If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.
To create an integration server by using the Red Hat OpenShift CLI, complete the following steps.
- From your local computer, create a YAML file that contains the configuration for the integration server that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the integration server; this should be the same namespace where the other App Connect instances or resources are created.
Example 1:
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-SEWB-GH63KR
    use: CloudPakForIntegrationNonProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 368Mi
          requests:
            cpu: 300m
            memory: 368Mi
  adminServerSecure: true
  enableMetrics: true
  createDashboardUsers: true
  barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  router:
    timeout: 120s
  designerFlowsOperationMode: local
  designerFlowsType: event-driven-or-api-flows
  service:
    endpointType: http
  version: '12.0'
  replicas: 3
  logFormat: basic
  configurations: ["my-odbc", "my-setdbp", "my-accounts"]
Example 2:
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-SEWB-GH63KR
    use: AppConnectEnterpriseProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 368Mi
          requests:
            cpu: 300m
            memory: 368Mi
  adminServerSecure: true
  enableMetrics: true
  createDashboardUsers: true
  barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  router:
    timeout: 120s
  designerFlowsOperationMode: local
  designerFlowsType: event-driven-or-api-flows
  service:
    endpointType: http
  version: '12.0'
  replicas: 3
  logFormat: basic
  configurations: ["my-odbc", "my-setdbp", "my-accounts"]
- To view the full set of parameters and values that you can specify, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- Specify the locations of one or more BAR files that you want to deploy. You can use the spec.barURL parameter to either specify the URL to a BAR file that is stored in the content server, or specify a comma-separated list of one or more BAR files in an external endpoint. If you are deploying BAR files that are stored in an external endpoint, you will also need a configuration object of type BarAuth that contains credentials for connecting to this endpoint. For more information, see Mechanisms for providing BAR files to an integration server.
- You can specify one or more (existing) configurations that you want to apply by using the spec.configurations parameter. For example:
spec:
  configurations:
    - odbc-ini-data
or
spec:
  configurations:
    - odbc-ini-data
    - accountsdata
or
spec: configurations: ["odbc-ini-data", "accountsdata"]
For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
Note: If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:
- From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
oc get switchserver switchName
- Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
- Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
configurations:
  - mark-101-switch-agentx
A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.
You can also choose to define the configurations that you want to apply to the integration server within the same YAML file that contains the integration server configuration.
If preferred, you can define multiple configurations and integration servers within the same YAML file. Each definition can be separated with three hyphens (---) as shown in the following example. The configurations and integration servers will be created independently, but any configurations that you specify for an integration server will be applied during deployment. (In the following example, settings are defined for a new configuration and an integration server. The integration server's spec.configurations setting references the new configuration and an existing configuration that should be applied during deployment.)
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: setdbp-conf
  namespace: mynamespace
spec:
  data: ABCDefghIJLOMNehorewirpewpTEV843BCDefghIJLOMNorewirIJLOMNeh842lkalkkrmwo4tkjlfgBCDefghIJLOMNehhIJLOM
  type: setdbparms
---
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-SEWB-GH63KR
    use: CloudPakForIntegrationNonProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 368Mi
          requests:
            cpu: 300m
            memory: 368Mi
  adminServerSecure: true
  enableMetrics: true
  createDashboardUsers: true
  router:
    timeout: 120s
  barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  designerFlowsOperationMode: local
  designerFlowsType: event-driven-or-api-flows
  service:
    endpointType: http
  version: '12.0'
  logFormat: json
  replicas: 3
  configurations: ["setdbp-conf", "my-accounts"]
- Save this file with a .yaml extension; for example, customerapi_cr.yaml.
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- Run the following command to create the integration server and apply any defined configurations. (Use the name of the .yaml file that you created.)
oc apply -f customerapi_cr.yaml
- Run the following command to check the status of the integration server and verify that it is running:
oc get integrationservers -n namespace
You should also be able to view this integration server in your App Connect Dashboard instance.
Note: If using a Kubernetes environment, ensure that you create an ingress definition after you create this instance, to make its internal service publicly available. For more information, see Creating ingress definitions for external access to your IBM App Connect instances.
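As an illustration of that note, a minimal Ingress definition might look like the following sketch. The host name, ingress class, backend service name, and port shown here are assumptions (the service name and HTTP port that the IBM App Connect Operator creates for an integration server depend on your configuration), so confirm the actual service name and port in your cluster, for example with kubectl get svc -n <namespace>, before creating anything like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customerapi-http
  namespace: mynamespace
spec:
  ingressClassName: nginx
  rules:
    - host: customerapi.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: customerapi-is   # assumed service name; confirm with kubectl get svc
                port:
                  number: 7800         # assumed HTTP listener port; confirm in your cluster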
Updating the custom resource settings for an instance
If you want to change the settings of an existing integration server, you can edit its custom resource settings by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. For example, you might want to adjust CPU or memory requests or limits, or set custom environment variables for use within the containers in the deployment.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Updating an instance from the Red Hat OpenShift web console
To update an integration server by using the Red Hat OpenShift web console, complete the following steps:
- Applicable to IBM Cloud Pak for Integration only:
- If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
- From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
- Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Server tab.
- Locate and click the name of the integration server that you want to update.
- Click the YAML tab.
- Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
- Click Save to save your changes.
Updating an instance from the Red Hat OpenShift CLI or Kubernetes CLI
To update an integration server from the Red Hat OpenShift CLI, complete the following steps.
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- From the namespace where the integration server is deployed, run the oc edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
oc edit integrationserver instanceName
The integration server CR automatically opens in the default text editor for your operating system.
- Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
- Save the YAML definition and close the text editor to apply the changes.
If preferred, you can also use the oc patch command to apply a patch with some bash shell features, or use oc apply with the appropriate YAML settings.
For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch as follows to update the settings for an instance:
oc patch integrationserver instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
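For example, to adjust the runtime container's CPU and memory settings (one of the scenarios mentioned at the start of this section), a hypothetical updatesettings.yaml could contain just the fields that you want to change; the values below are illustrative only:
spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1
            memory: 1Gi
Running the oc patch command shown above with this file merges these settings into the existing custom resource.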
Deleting an instance
If no longer required, you can delete an integration server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Deleting an instance from the Red Hat OpenShift web console
To delete an integration server by using the Red Hat OpenShift web console, complete the following steps:
- Applicable to IBM Cloud Pak for Integration only:
- If not already logged in, log in to the IBM Cloud Pak Platform UI for your cluster.
- From the Platform UI home page, click Install operators or OpenShift Container Platform, and log in if prompted.
- Applicable to an independent deployment of IBM App Connect Operator only: From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator details page for the App Connect Operator, click the Integration Server tab.
- Locate the instance that you want to delete.
- Click the options icon to open the options menu, and then click the Delete option.
- Confirm the deletion.
Deleting an instance from the Red Hat OpenShift CLI or Kubernetes CLI
To delete an integration server by using the Red Hat OpenShift CLI, complete the following steps.
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- From the namespace where the integration server instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
oc delete integrationserver instanceName
Custom resource values
The following table lists the configurable parameters and default values for the custom resource.
[] depicts an array. For example, the following notation indicates that an array of custom ports can be specified (for a service): spec.service.ports[].fieldName. When used together with spec.service.type, you can specify multiple port definitions as shown in the following example:
spec:
service:
ports:
- name: config-abc
nodePort: 32000
port: 9910
protocol: TCP
targetPort: 9920
- name: config-xyz
nodePort: 31500
port: 9376
protocol: SCTP
targetPort: 9999
type: NodePort
Parameter | Description | Default |
---|---|---|
apiVersion |
The API version that identifies which schema is used for this integration server. |
appconnect.ibm.com/v1beta1 |
kind |
The resource type. |
IntegrationServer |
metadata.name |
A unique short name by which the integration server can be identified. |
|
metadata.namespace |
The namespace (project) in which the integration server is installed. The namespace in which you create an instance or object must be no more than 40 characters in length. |
|
spec.adminServerSecure (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
An indication of whether to enable TLS on the admin server port for use by the integration server administration REST API and for secure communication between the App Connect Dashboard and the integration server. The administration REST API can be used to create or report security credentials for an integration server or integration runtime. Valid values are true and false. When set to true (the default), HTTPS interactions are enabled between the Dashboard and integration server by using self-signed TLS certificates that are provided by a configuration object of type REST Admin SSL files. If set to false, the admin server port uses HTTP. |
true |
spec.affinity (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.) Custom settings are supported only for spec.affinity.nodeAffinity. For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation. |
|
spec.annotations (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value.
For example, you can add a spec.annotations.restart value to trigger a rolling restart of your integration server pods, as described in Restarting integration server or integration runtime pods. The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value. |
|
spec.barURL |
Identifies the location of one or more BAR files that can be deployed to the integration server. Can be either of these values: the URL of a single BAR file that is stored in the content server, or a comma-separated list of URLs for one or more BAR files that are stored in an external endpoint (which also requires a configuration object of type BarAuth). For more information, see Mechanisms for providing BAR files to an integration server.
If you want to use a custom server runtime image to deploy an integration server, use spec.pod.containers.runtime.image to specify this image. |
|
spec.configurations[] |
An array of existing configurations that you want to apply to one or more BAR files being deployed. These configurations must be in the same namespace as the integration server. To specify a configuration, use the metadata.name value that was specified while creating that configuration. For information about creating configurations, see Configuration reference. To see examples of how to specify one or more values for spec.configurations, see Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI. |
|
spec.createDashboardUsers |
Determines what type of web users (user IDs) are created on an integration server to allow the App Connect Dashboard to connect to it with the appropriate permissions. Valid values are true and false.
|
false (Applicable for 12.0.1.0-r1 or later) true (Applicable for 11.0.0.12-r1 or earlier) |
spec.defaultAppName |
A name for the default application for the deployment of independent resources. |
DefaultApplication |
spec.designerFlowsOperationMode |
Indicate whether to create a Toolkit integration or a Designer integration.
|
disabled |
spec.designerFlowsType |
Indicate the type of flow if creating a Designer integration.
This parameter is applicable for a Designer integration only, so if spec.designerFlowsOperationMode is set to disabled, you must omit spec.designerFlowsType. This parameter is supported only for certain versions. See spec.version for details. |
|
spec.disableRoutes (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) (Not applicable in a Kubernetes environment; will be ignored) |
Indicate whether to disable the automatic creation of routes, which externally expose a service that identifies the set of integration server pods. Valid values are true and false. Set this value to true to disable the automatic creation of external HTTP and HTTPS routes. |
false |
spec.enableMetrics (Only applicable if spec.version resolves to 11.0.0.12-r1 or later) |
Indicate whether to enable the automatic emission of metrics. Valid values are true and false. Set this value to false to stop metrics from being emitted by default. |
true |
spec.env (Only applicable if spec.version resolves to 12.0.1.0-r1 or later) |
Define custom environment variables that will be set and used within the App Connect containers in the deployment. For example, you can set a container's timezone by declaring a TZ environment variable. This parameter exposes the Kubernetes API for declaring environment variables in the container, and as such follows the same schema.
For more information, see Define Environment Variables for a Container in the Kubernetes documentation. |
|
spec.forceFlowHTTPS.enabled (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
Indicate whether to force all HTTP Input nodes and SOAP Input nodes in all deployed flows (including their usage for inbound connections to applications, REST APIs, and integration services) in the integration server to use Transport Layer Security (TLS). Valid values are true and false. When spec.forceFlowHTTPS.enabled is set to true, you must also ensure that the protocol in spec.service.endpointType is set to https. |
false |
spec.forceFlowHTTPS.secretName (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
Specify the name of a secret that stores a user-supplied public certificate/private key pair to use for enforcing TLS. (You can use tools such as keytool or OpenSSL to generate the certificate and key if required, but do not need to apply password protection.) This secret is required if spec.forceFlowHTTPS.enabled is set to true. You must create the secret in the namespace where the integration server will be deployed, and can do so from the Red Hat OpenShift web console (for example, by applying a Secret resource with the Import YAML icon), or from the Red Hat OpenShift or Kubernetes CLI. Use your preferred method to create the secret.
Note: When you create the integration server, the IBM App Connect Operator checks for the certificate and key in the secret and adds them to a generated keystore that is protected with a password. The endpoint of the deployed integration is then secured with this certificate and key. If the secret can't be found in the namespace, the integration server will fail after 10 minutes. If you need to update the certificate and key that are stored in the secret, you can edit the Secret resource to update the tls.crt and tls.key values. When you save, the keystore is regenerated and used by the integration server without the need for a restart. |
|
spec.labels (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format key: value.
The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value. |
|
spec.license.accept |
An indication of whether the license should be accepted. Valid values are true and false. To install, this value must be set to true. |
false |
spec.license.license |
See Licensing reference for IBM App Connect Operator for the valid values. |
|
spec.license.use |
See Licensing reference for IBM App Connect Operator for the valid values. Applicable only if spec.version resolves to 11.0.0.11-r1 or earlier: If using an IBM Cloud Pak for Integration license, spec.useCommonServices must be set to true. |
|
spec.logFormat |
The format used for the container logs that are output to the container's console. Valid values are basic and json. |
basic |
spec.pod.containers.connectors.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.connectors.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.connectors.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.connectors.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.connectors.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.connectors.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.connectors.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.connectors.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.containers.connectors.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.connectors.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.connectors.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.connectors.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
spec.pod.containers.designereventflows.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.designereventflows.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.designereventflows.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.designereventflows.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.designereventflows.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.designereventflows.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.designereventflows.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.designereventflows.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
4 |
spec.pod.containers.designereventflows.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.designereventflows.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
750Mi |
spec.pod.containers.designereventflows.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.designereventflows.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
400Mi |
spec.pod.containers.designerflows.livenessProbe.failureThreshold |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
3 1 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.designerflows.livenessProbe.periodSeconds |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.designerflows.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
30 5 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.failureThreshold |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
3 1 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
180 10 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.periodSeconds |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
10 5 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
30 3 (11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.resources.limits.cpu |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.designerflows.resources.limits.memory |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.designerflows.resources.requests.cpu |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.designerflows.resources.requests.memory |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
spec.pod.containers.proxy.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the liveness probe (which checks whether the runtime container is still running) can fail before taking action. |
1 |
spec.pod.containers.proxy.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period. |
60 |
spec.pod.containers.proxy.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the liveness probe that checks whether the runtime container is still running. |
5 |
spec.pod.containers.proxy.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out. |
3 |
spec.pod.containers.proxy.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the readiness probe (which checks whether the runtime container is ready) can fail before taking action. |
1 |
spec.pod.containers.proxy.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the runtime container is ready. |
5 |
spec.pod.containers.proxy.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the readiness probe that checks whether the runtime container is ready. |
5 |
spec.pod.containers.proxy.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out. |
3 |
spec.pod.containers.proxy.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.proxy.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.proxy.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.proxy.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
spec.pod.containers.runtime.image |
The name of the custom server runtime image to use. For more information, see Mechanisms for providing BAR files to an integration server. |
|
spec.pod.containers.runtime.imagePullPolicy |
Indicate whether you want images to be pulled every time, never, or only if they're not present. Valid values are Always, Never, and IfNotPresent. |
IfNotPresent |
spec.pod.containers.runtime.lifecycle.postStart.exec.command[] (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
An array of (one or more) commands to execute immediately after the runtime container is created (or started). The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status. For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation. |
|
spec.pod.containers.runtime.lifecycle.postStart.httpGet.host (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the host name to connect to, to perform the HTTP request on the runtime container pod immediately after it starts. Defaults to the pod IP. You can alternatively set "Host" in spec.pod.containers.runtime.lifecycle.postStart.httpGet.httpHeaders. |
|
spec.pod.containers.runtime.lifecycle.postStart.httpGet.httpHeaders (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify one or more custom headers to set in the HTTP request to be performed on the runtime container pod immediately after it starts. For each header, specify a header field name (name) and a header field value (value). |
|
spec.pod.containers.runtime.lifecycle.postStart.httpGet.path (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the path to access on the HTTP server when performing the HTTP request on the runtime container pod immediately after it starts. |
|
spec.pod.containers.runtime.lifecycle.postStart.httpGet.scheme (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the scheme to use for connecting to the host when performing the HTTP request on the runtime container pod immediately after it starts. |
HTTP |
spec.pod.containers.runtime.lifecycle.preStop.exec.command[] (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
An array of (one or more) commands to execute inside the runtime container before its pod is terminated. Use the spec.pod.containers.runtime.lifecycle.preStop.* settings to configure the lifecycle of the runtime container to allow existing transactions to complete before the pod is terminated due to an API request or a management event (such as failure of a liveness or startup probe, or preemption). This allows rolling updates to occur without breaking transactions (unless they are long running). The countdown for the pod's termination grace period begins before the preStop hook is executed. The working directory for the command is the root ('/') in the container's file system. The command executes without being run in a shell, which means that traditional shell instructions ('|', etc) will not work. To use a shell, explicitly call out to that shell. An exit status of 0 (zero) indicates a live or healthy status, and a non-zero value indicates an unhealthy status. A default preStop setting is applied to ensure that rolling updates do not result in lost messages on the runtime.
For more information, see Container Lifecycle Hooks and Attach Handlers to Container Lifecycle Events in the Kubernetes documentation. |
|
spec.pod.containers.runtime.lifecycle.preStop.httpGet.host (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the host name to connect to, to perform the HTTP request on the runtime container pod before its termination. Defaults to the pod IP. You can alternatively set "Host" in spec.pod.containers.runtime.lifecycle.preStop.httpGet.httpHeaders. |
|
spec.pod.containers.runtime.lifecycle.preStop.httpGet.httpHeaders (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify one or more custom headers to set in the HTTP request to be performed on the runtime container pod before its termination. For each header, specify a header field name (name) and a header field value (value). |
|
spec.pod.containers.runtime.lifecycle.preStop.httpGet.path (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the path to access on the HTTP server when performing the HTTP request on the runtime container pod before its termination. |
|
spec.pod.containers.runtime.lifecycle.preStop.httpGet.scheme (Only applicable if spec.version resolves to 12.0.4.0-r2 or later) |
Specify the scheme to use for connecting to the host when performing the HTTP request on the runtime container pod before its termination. |
HTTP |
spec.pod.containers.runtime.livenessProbe.failureThreshold |
The number of times the liveness probe can fail before taking an action to restart the container. (The liveness probe checks whether the runtime container is still running or needs to be restarted.) |
1 |
spec.pod.containers.runtime.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before performing the first probe to check whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.runtime.livenessProbe.periodSeconds |
How often (in seconds) to perform a liveness probe to check whether the runtime container is still running. |
10 |
spec.pod.containers.runtime.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out. |
5 |
spec.pod.containers.runtime.readinessProbe.failureThreshold |
The number of times the readiness probe can fail before taking an action to mark the pod as Unready. (The readiness probe checks whether the runtime container is ready to accept traffic.) |
1 |
spec.pod.containers.runtime.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before performing the first probe to check whether the runtime container is ready. |
10 |
spec.pod.containers.runtime.readinessProbe.periodSeconds |
How often (in seconds) to perform a readiness probe to check whether the runtime container is ready. |
5 |
spec.pod.containers.runtime.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out. |
3 |
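If the runtime container needs more time to become available, or you want to tune how aggressively it is checked, you can override the probe settings in the CR. The following sketch uses example values only, not tuning recommendations.
spec:
  pod:
    containers:
      runtime:
        livenessProbe:
          # Example values only; tune for your environment.
          initialDelaySeconds: 600
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 1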
spec.pod.containers.runtime.resources.limits.cpu |
The upper limit of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.runtime.resources.limits.memory |
The memory upper limit (in bytes) that is allocated for running the runtime container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.runtime.resources.requests.cpu |
The minimum number of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.runtime.resources.requests.memory |
The minimum memory (in bytes) that is allocated for running the runtime container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
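For example, the following sketch raises the CPU and memory allocations for the runtime container. The values are illustrative only, not sizing guidance.
spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            # Minimum guaranteed allocation for the container.
            cpu: 500m
            memory: 512Mi
          limits:
            # Upper bound for the container.
            cpu: 2000m
            memory: 1Gi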
spec.pod.containers.runtime.startupProbe.failureThreshold (Only applicable if spec.version resolves to 12.0.3.0-r2 or later) |
The number of times the startup probe can fail before taking action. (The startup probe checks whether the application in the runtime container has started. Liveness and readiness checks are disabled until the startup probe has succeeded.) Note: If using startup probes, ensure that spec.pod.containers.runtime.livenessProbe.initialDelaySeconds and spec.pod.containers.runtime.readinessProbe.initialDelaySeconds are unset. For more information about startup probes, see Protect slow starting containers with startup probes in the Kubernetes documentation. |
120 |
spec.pod.containers.runtime.startupProbe.initialDelaySeconds (Only applicable if spec.version resolves to 12.0.3.0-r2 or later) |
How long to wait (in seconds) before performing the first probe to check whether the runtime application has started. Increase this value if your system cannot start the application in the default time period. |
0 |
spec.pod.containers.runtime.startupProbe.periodSeconds (Only applicable if spec.version resolves to 12.0.3.0-r2 or later) |
How often (in seconds) to perform a startup probe to check whether the runtime application has started. |
5 |
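The following sketch shows startup probe settings for a slow-starting integration. The values are illustrative and allow up to failureThreshold x periodSeconds = 600 seconds for startup; leave the runtime liveness and readiness initialDelaySeconds unset when a startup probe is configured.
spec:
  pod:
    containers:
      runtime:
        startupProbe:
          # Illustrative values: up to 120 x 5 = 600 seconds for the
          # application to start before the container is restarted.
          initialDelaySeconds: 0
          periodSeconds: 5
          failureThreshold: 120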
spec.pod.containers.runtime.volumeMounts (Only applicable if spec.version resolves to 11.0.0.11-r2 or later) |
Details of where to mount one or more named volumes in the runtime container. Use with spec.pod.volumes. Follows the Volume Mount specification at https://pkg.go.dev/k8s.io/api@v0.20.3/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation. Certain volume mount paths are blocked. Specify custom settings for your volume mounts. For an example of these settings, see the Example: Enabling custom volume mounts section that is shown after this table. |
|
spec.pod.containers.tracingagent.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.tracingagent.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.tracingagent.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.tracingagent.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.tracingagent.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.tracingagent.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.tracingagent.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.tracingagent.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.containers.tracingcollector.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.tracingcollector.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.tracingcollector.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.tracingcollector.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.tracingcollector.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.tracingcollector.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.tracingcollector.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.tracingcollector.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.hostAliases.hostnames[] (Only applicable if spec.version resolves to 11.0.0.12-r1 or later) |
One or more hostnames that you want to map to an IP address (as a host alias), to facilitate host name resolution. Use with spec.pod.hostAliases.ip. Each host alias is added as an entry to a pod's /etc/hosts file. For example settings, see the sketch after this entry. For more information about host aliases and the hosts file, see the Kubernetes documentation. |
|
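The following sketch shows host alias settings with a hypothetical hostname and IP address, assuming the standard Kubernetes hostAliases array structure.
spec:
  pod:
    hostAliases:
      # Each alias becomes an entry in the pod's /etc/hosts file.
      - ip: 192.168.10.10
        hostnames:
          - backend.example.com
          - backend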
spec.pod.hostAliases.ip (Only applicable if spec.version resolves to 11.0.0.12-r1 or later) |
An IP address that you want to map to one or more hostnames (as a host alias), to facilitate host name resolution. Use with spec.pod.hostAliases.hostnames[]. Each host alias is added as an entry to a pod's /etc/hosts file. |
|
spec.pod.imagePullSecrets.name |
The secret used for pulling images. |
|
spec.pod.priority (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Pod priority settings control which pods get killed, rescheduled, or started to allow the most important pods to keep running. spec.pod.priority specifies an integer value, which various system components use to identify the priority of the integration server pod. The higher the value, the higher the priority. If the priority admission controller is enabled, you cannot manually specify a priority value because the admission controller automatically uses the spec.pod.priorityClassName setting to populate this field with a resolved value. |
|
spec.pod.priorityClassName (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
A priority class name that maps to the integer value of a pod priority in spec.pod.priority. If specified, this class name indicates the pod's priority (or importance) relative to other pods. Valid values are the name of an existing PriorityClass object, or the built-in system-cluster-critical and system-node-critical classes that are reserved for critical pods. If you do not specify a class name, the priority is resolved from the cluster's default priority class if one is defined; otherwise, the priority is set to zero.
|
|
spec.pod.tolerations[].effect (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
To prevent pods from being scheduled onto inappropriate nodes, use taints together with tolerations. Tolerations allow scheduling, but don't guarantee scheduling because the scheduler also evaluates other parameters as part of its function. Apply one or more taints to a node (by running oc taint with a key, value, and taint effect) to indicate that the node should repel any pods that do not tolerate the taints. Then, apply toleration settings (effect, key, operator, toleration period, and value) to a pod to allow it to be scheduled on the node if the pod's toleration matches the node's taint. For more information, see Taints and Tolerations in the Kubernetes documentation. If you need to specify one or more tolerations for a pod, you can use the collection of spec.pod.tolerations[].* parameters to define an array; for a sketch of these settings, see the example after the spec.pod.tolerations[].value entry. For spec.pod.tolerations[].effect, specify the taint effect that the toleration should match. (The taint effect on a node determines how that node reacts to a pod that is not configured with appropriate tolerations.) Leave the effect empty to match all taint effects. Alternatively, specify one of these values: NoSchedule, PreferNoSchedule, or NoExecute. |
|
spec.pod.tolerations[].key (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint key that the toleration applies to. Leave the key empty and set spec.pod.tolerations[].operator to Exists to match all taint keys, values, and effects. |
|
spec.pod.tolerations[].operator (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify an operator that represents a key's relationship to the value in spec.pod.tolerations[].value. Valid operators are Exists and Equal. |
Equal |
spec.pod.tolerations[].tolerationSeconds (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Optionally specify a period of time in seconds that determines how long the pod stays bound to a node with a matching taint before being evicted. Applicable only for a toleration with a NoExecute effect. By default, no value is set, which means that a pod that tolerates the taint will never be evicted. Zero and negative values are treated as 0 (evict immediately). |
|
spec.pod.tolerations[].value (Only applicable if spec.version resolves to 12.0.9.0-r2 or later) |
Specify the taint value that the toleration matches to. If the operator is Exists, leave the value empty; if the operator is Equal, the value must be equal to the taint value. |
|
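The following sketch shows a tolerations array that matches a hypothetical taint of dedicated=integration:NoSchedule applied to the target nodes.
spec:
  pod:
    tolerations:
      # Matches a hypothetical node taint: dedicated=integration:NoSchedule
      - key: dedicated
        operator: Equal
        value: integration
        effect: NoSchedule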
spec.pod.volumes (Only applicable if spec.version resolves to 11.0.0.11-r2 or later) |
Details of one or more named volumes that can be provided to the pod, to use for persisting data. Each volume must be configured with the appropriate permissions to allow the integration server to read or write to it as required. Follows the Volume specification at https://pkg.go.dev/k8s.io/api/core/v1#VolumeMount. For more information, see Volumes in the Kubernetes documentation. Specify custom settings for your volume types. For an example of these settings, see the Example: Enabling custom volume mounts section that is shown after this table. Use with spec.pod.containers.runtime.volumeMounts. |
|
spec.replicas |
The number of replica pods to run for each deployment. Increasing the number of replicas will proportionally increase the resource requirements. |
1 |
spec.router.https.host (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the HTTPS route. This parameter can optionally be used with spec.router.https.labels. After a route is created, the spec.router.https.host setting cannot be changed. Tip: If required, you can define a second route (with labels) on the integration server by using the spec.router.https2.host and spec.router.https2.labels parameters. Note: On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts. If spec.disableRoutes is set to true, you cannot specify values for spec.router.https or spec.router.https2. |
|
spec.router.https.labels (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
The labels are applied when a route is created. If the labels are removed, the standard set of labels will be used. |
|
spec.router.https2.host (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the HTTPS route. This parameter can optionally be used with spec.router.https2.labels. After a route is created, the spec.router.https2.host setting cannot be changed. Deleting spec.router.https2.host from the CR will cause the route to be deleted. Note:
On OpenShift, routers will typically use the oldest route with a given host when resolving conflicts. If spec.disableRoutes is set to true, you cannot specify values for spec.router.https or spec.router.https2. |
|
spec.router.https2.labels (Only applicable if spec.version resolves to 12.0.1.0-r4 or later) |
The labels are applied when a route is created. If the labels are removed, the standard set of labels will be used. |
|
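The following sketch shows custom route settings with hypothetical hostnames and labels; it assumes that labels are specified as simple key-value pairs.
spec:
  router:
    https:
      # Hypothetical hostname; must follow the DNS 952 subdomain conventions.
      host: is-01-https.apps.example.com
      labels:
        team: integration
    https2:
      # Optional second route.
      host: is-01-https2.apps.example.com
      labels:
        team: integration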
spec.router.timeout |
The timeout to apply to the routes that are created for the integration server. Specify a value with a time unit; for example, 30s or 2m. |
30s |
spec.router.webhook.host (Only applicable if spec.version resolves to 12.0.7.0-r1 or later) |
This value must conform to the DNS 952 subdomain conventions. If unspecified, a hostname is automatically generated for the webhook route. This parameter can optionally be used with spec.router.webhook.labels. |
|
spec.router.webhook.labels (Only applicable if spec.version resolves to 12.0.7.0-r1 or later) |
The labels are applied when a route is created. |
|
spec.service.endpointType (Not applicable in a Kubernetes environment; will be ignored) |
Specify a transport protocol that defines whether the endpoint of the deployed integration is secured. Valid values are http (the endpoint is not secured) and https (the endpoint is secured).
|
http |
spec.service.ports[].name (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The name of a port definition on the service (defined by spec.service.type), which is created for accessing the set of pods. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character. If you need to expose more than one port for the service, you can use the collection of spec.service.ports[].fieldName parameters to configure multiple port definitions as an array. |
|
spec.service.ports[].nodePort (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port on which each node listens for incoming requests for the service. Applicable when spec.service.type is set to NodePort. The port number must be in the range 30000 to 32767. Ensure that this port is not being used by another service. You can check which node ports are already in use by running oc get services --all-namespaces (or kubectl get services --all-namespaces in a Kubernetes environment) and then checking under the PORT(S) column in the output.
|
|
spec.service.ports[].port (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port that exposes the service to pods within the cluster. |
|
spec.service.ports[].protocol (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The protocol of the port. Valid values are TCP, UDP, and SCTP. |
|
spec.service.ports[].targetPort (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port on which the pods will listen for connections from the service. The port number must be in the range 1 to 65535. |
|
spec.service.type (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The type of service to create for accessing the set of pods; for example, ClusterIP (exposes the service on a cluster-internal IP address) or NodePort (exposes the service on a static port on each node). For a sketch of these settings, see the example after this entry.
|
ClusterIP |
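The following sketch shows a NodePort service with a single port definition; the port name and numbers are illustrative assumptions.
spec:
  service:
    type: NodePort
    ports:
      # Hypothetical port definition for illustration.
      - name: custom-http
        port: 7800         # exposes the service to pods within the cluster
        targetPort: 7800   # the port that the pods listen on (1 to 65535)
        nodePort: 30080    # the node port (30000 to 32767)
        protocol: TCP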
spec.tracing.enabled (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
An indication of whether to enable transaction tracing, which will push trace data to the IBM Cloud Pak for Integration Operations Dashboard to aid with problem investigation and troubleshooting. An Operations Dashboard (Integration tracing) instance must be available to process the required registration approval for tracing. Valid values are true and false. Note: Support for the Operations Dashboard is restricted to versions that resolve to 12.0.8.0-r1 or earlier, where it is also deprecated.
If you want to implement tracing, you can enable user or service trace for the integration server as described in Trace reference. You can also configure OpenTelemetry tracing, although support is available only for integration runtimes. For more information, see Configuring OpenTelemetry tracing for integration runtimes. |
true |
spec.tracing.namespace (Only applicable if spec.version resolves to 12.0.8.0-r1 or earlier) |
The namespace where the Operations Dashboard (Integration tracing) was deployed. Applicable only if spec.tracing.enabled is set to true. |
|
spec.useCommonServices (Deprecated in 11.0.0.11-r2 or later) |
An indication of whether to enable use of IBM Cloud Pak foundational services (formerly IBM Cloud Platform Common Services). Valid values are true and false.
|
false |
spec.version |
The product version that the integration server is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel. If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster. To view the available values that you can choose from and the licensing requirements, see spec.version values and Licensing reference for IBM App Connect Operator. If you specify a fully qualified version of 11.0.0.10-r2 or earlier, or specify a channel that resolves to 11.0.0.10-r2 or earlier, you must omit spec.designerFlowsType because it is not supported in those versions. |
12.0 |
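For example, the following sketch pins the operand by channel; a fully qualified version (such as 12.0.4.0-r2, one of the versions referenced in this table) is shown as a commented-out alternative.
spec:
  version: '12.0'
  # version: 12.0.4.0-r2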
- Default affinity settings
-
The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated. You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity will be ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
                  - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                <copy of the pod labels>
            topologyKey: kubernetes.io/hostname
          weight: 100
- Example: Enabling custom volume mounts
-
The following example illustrates how to add an empty directory (as a volume) to the /cache folder in an integration server's pod:
spec:
  pod:
    containers:
      runtime:
        volumeMounts:
          - mountPath: /cache
            name: cache-volume
    volumes:
      - name: cache-volume
        emptyDir: {}
Load balancing
When you deploy an integration server, routes are created by default in Red Hat OpenShift to externally expose a service that identifies the set of pods where the integration runs. Load balancing is applied when incoming traffic is forwarded to replica pods, and the routing algorithm used depends on the type of security you've configured for your flows:
- http flows: These flows use a non-SSL route that defaults to a "round-robin" approach where each replica is sent a message in turn.
- https flows (integration servers at version 12.0.2.0-r1 or earlier): These flows use an SSL-passthrough route that defaults to the "source" approach in which the source IP address is used. This means that a single source application will feed a specific replica.
- https flows (integration servers at version 12.0.2.0-r2 or later): These flows use an SSL-passthrough route that has been modified to use the "round-robin" approach in which each replica is sent a message in turn.
To change the load balancing configuration that a route uses, you can add an appropriate annotation to the route resource. For example, the following CR setting will switch the route to use the round-robin algorithm:
spec:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
For more information about the available annotation options, see Route-specific annotations in the Red Hat OpenShift documentation.
Supported platforms
- Red Hat OpenShift: Supports the amd64, s390x, and ppc64le CPU architectures. For more information, see Supported platforms.
- Kubernetes environment: Supports only the amd64 CPU architecture. For more information, see Supported operating environment for Kubernetes.