App Connect Integration Server reference
Use this reference to create and delete integration servers by using the Red Hat® OpenShift® web console or CLI.
Introduction
The App Connect Integration Server API enables you to create integration servers, which run integrations that were created in App Connect Designer, IBM® App Connect on IBM Cloud, or IBM App Connect Enterprise Toolkit.
Prerequisites
- Red Hat OpenShift Container Platform 4.6 is required for EUS compliance. Red Hat OpenShift Container Platform 4.10 is also supported for migration from EUS to a Continuous Delivery (CD) or Long Term Support (LTS) release.
- For any production licensed instances, you must create a secret called ibm-entitlement-key in the namespace where you want to create IBM App Connect resources. For more information, see IBM Entitled Registry entitlement keys.
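The entitlement-key secret is typically created with `oc create secret docker-registry`. The sketch below builds the command without running it (because `oc` requires a cluster login); the namespace and key values are placeholders, and the `cp.icr.io` server and `cp` username follow the IBM Entitled Registry convention described in the entitlement keys topic — verify them against that topic for your environment.

```shell
# Placeholder values -- substitute your own namespace and entitlement key.
NAMESPACE="ace-demo"
ENTITLEMENT_KEY="REPLACE_WITH_YOUR_ENTITLEMENT_KEY"

# The secret must be named exactly ibm-entitlement-key and must be created in
# the namespace where the IBM App Connect resources will be created.
cmd="oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password=${ENTITLEMENT_KEY} \
  --namespace=${NAMESPACE}"

# Print the command instead of running it, because oc needs a cluster login.
echo "$cmd"
```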
Red Hat OpenShift SecurityContextConstraints requirements
IBM App Connect runs under the default restricted SecurityContextConstraints.
Resources required
Minimum recommended requirements:
- Toolkit integration:
- CPU: 0.3 Cores
- Memory: 0.3 GB
- Designer-only integration or hybrid integration:
- CPU: 1.7 Cores
- Memory: 1.77 GB
For information about how to configure these values, see Custom resource values.
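As a sketch, the minimum Toolkit figures above would translate into CR resource settings along these lines (the spec.pod.containers.runtime.resources path follows the CR examples later in this document; treat the exact values as a starting point, not a tuned configuration):

```yaml
spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: 300m      # 0.3 cores
            memory: 300Mi  # approximately 0.3 GB
          limits:
            cpu: 300m
            memory: 300Mi
```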
Considerations for enabling tracing in the Operations Dashboard
Applicable to Cloud Pak for Integration only: If you want to use the Operations Dashboard add-on to provide cross-component transaction tracing to aid with troubleshooting and latency issues in deployed integration servers, an instance of the Operations Dashboard must be available in a specified namespace in your Cloud Pak for Integration environment. For information about creating an instance of the Operations Dashboard, verifying the deployment, and configuring settings, see Operations Dashboard: Installation and Configuring Operations Dashboard.
Transaction tracing for the integration servers in a namespace can be enabled only during deployment as follows:
- To enable tracing data to be sent to the Operations Dashboard, a one-time registration request must be completed in the Operations Dashboard.
- The registration process is activated the first time that an integration server is deployed with tracing enabled (by setting spec.tracing.enabled to true, and then setting spec.tracing.namespace to the namespace where the Operations Dashboard was created).
- When the deployment of the integration server is complete, open the Operations Dashboard, navigate to the Registration requests page (under Manage in the navigation pane), and then locate the Pending registration request that was automatically created.
- Approve the registration request and then run the supplied command to create a secret in the namespace where the integration server is deployed. This secret stores the credentials that are required to send tracing data to the Operations Dashboard. For more information, see Capability registration and Registration requests.
- Deploy additional integration servers in the namespace with tracing enabled.
- Use the Operations Dashboard web console to view tracing data in preconfigured dashboards, view distributed tracing information on a per-trace basis, generate reports and alerts, and manage configuration settings. For more information, see Web Console guide.
If you do not have the required permissions to create an Operations Dashboard instance or approve registration requests, you must work with your cluster or team administrator to complete the steps.
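The two tracing settings named above can be sketched in the CR as follows (the namespace value is hypothetical; it must be the namespace where the Operations Dashboard instance was created):

```yaml
spec:
  tracing:
    enabled: true                      # activates the one-time registration on first deployment
    namespace: ops-dashboard-namespace # hypothetical; namespace of the Operations Dashboard instance
```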
Creating an instance
You can create an integration server by using the Red Hat OpenShift web console or CLI.
The supplied App Connect Enterprise base image includes an IBM MQ client for connecting to remote queue managers that are within the same Red Hat OpenShift cluster as your deployed integration servers, or in an external system such as an appliance.
Before you begin
- The IBM App Connect Operator must be installed in your cluster, either through a standalone deployment or as part of an installation of IBM Cloud Pak for Integration.
- Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
- Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance determines how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that lets you subscribe to a channel for updates, or one that pins the instance to a specific version, review the Upgrade considerations for channels, versions, and licenses before you start this task.

Namespace restriction for an instance, server, or configuration: The namespace in which you install must be no more than 40 characters in length.
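The 40-character namespace restriction can be checked with a quick shell test before you install (the namespace name here is hypothetical):

```shell
# Hypothetical namespace name; the restriction is a maximum of 40 characters.
ns="ace-integration-servers"

if [ "${#ns}" -le 40 ]; then
  result="ok"
else
  result="too long (${#ns} characters)"
fi
echo "$result"
```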
Creating an instance from the Red Hat OpenShift web console
When you create an integration server, you can define which configurations you want to apply to the integration server.
- If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the Red Hat OpenShift web console to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift web console.
- If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.
To create an integration server by using the Red Hat OpenShift web console, complete the following steps:
- Applicable to IBM Cloud Pak for Integration only:
- If not already logged in, log in to the Platform Navigator for your cluster.
- From the IBM Cloud Pak menu , click OpenShift Console and log in if prompted.
- Applicable to an IBM App Connect Operator deployment only: From a browser window, log in to the Red Hat OpenShift Container Platform web console.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator Details page for the App Connect Operator, click the App Connect Integration Server tab. Any previously created integration servers are displayed in a table.
- Click Create IntegrationServer. Switch to the YAML view if necessary for
a finer level of control over your installation settings. The minimum custom resource (CR)
definition that is required to create an integration server is displayed.
From the Details tab on the Operator Details page, you can also locate the App Connect Integration Server tile and click Create Instance to specify installation settings for the integration server.
- Update the content of the YAML editor with the parameters and values that you require for this
integration server.
- To view the full set of parameters and values available, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- You can specify one or more (existing) configurations that you want to apply by using the
spec.configurations parameter. For example:
spec:
  configurations:
    - odbc-ini-data
or
spec:
  configurations:
    - odbc-ini-data
    - accountsdata
or
spec:
  configurations: ["odbc-ini-data", "accountsdata"]
For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
Note: If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:
- From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
  oc get switchserver switchName
- Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
- Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
  configurations:
    - mark-101-switch-agentx
A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.
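Putting the switch-server steps together, the resulting CR fragment might look like this (both configuration names are hypothetical; the first is the AGENTCONFIGURATIONNAME reported by oc get switchserver):

```yaml
spec:
  configurations:
    - mark-101-switch-agentx  # AGENTCONFIGURATIONNAME from 'oc get switchserver switchName'
    - odbc-ini-data           # any other existing configuration objects you want applied
```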
- Optional: If you prefer to use the Form view, click Form View and then complete the fields. Note that some fields might not be represented in the form.
- Click Create to start the deployment. An entry for the integration server is shown in the IntegrationServers table, initially with a Pending status.
- Click the integration server name to view information about its definition and current status. On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator Details page for the App Connect Operator.

When the deployment is complete, the status is shown as Ready in the IntegrationServers table.
Creating an instance from the Red Hat OpenShift CLI
When you create an integration server, you can define which configurations you want to apply to the integration server.
- If required, you can create configuration objects before you create an integration server and then add references to those objects while creating the integration server. For information about how to use the Red Hat OpenShift CLI to create a configuration object before you create an integration server, see Creating a configuration object from the Red Hat OpenShift CLI.
- If you have existing configuration objects that you want the integration server to reference, you can add those references while creating the integration server, as described in the steps that follow.
To create an integration server by using the Red Hat OpenShift CLI, complete the following steps:
- From your local computer, create a YAML file that contains the configuration for the integration
server that you want to create. Include the metadata.namespace parameter to
identify the namespace in which you want to create the integration server; this should be the same
namespace where the other App Connect instances or resources are
created.
Example:
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-APEH-CEKET7
    use: AppConnectEnterpriseProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 300Mi
          requests:
            cpu: 300m
            memory: 300Mi
  adminServerSecure: true
  barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  router:
    timeout: 120s
  useCommonServices: true
  designerFlowsOperationMode: all
  designerFlowsType: event-driven-or-api-flows
  service:
    endpointType: http
  version: 11.0.0-eus
  replicas: 3
  logFormat: basic
  configurations: ["my-odbc", "my-setdbp", "my-accounts"]
- To view the full set of parameters and values that you can specify, see Custom resource values.
- For licensing information, see Licensing reference for IBM App Connect Operator.
- You can specify one or more (existing) configurations that you want to apply by using the
spec.configurations parameter. For example:
spec:
  configurations:
    - odbc-ini-data
or
spec:
  configurations:
    - odbc-ini-data
    - accountsdata
or
spec:
  configurations: ["odbc-ini-data", "accountsdata"]
For the spec.configurations values, specify the metadata.name values for the relevant configuration objects.
Note: If this integration server contains a callable flow, you must configure the integration server to use a switch server that you created earlier. For information about how to create a switch server, see App Connect Switch Server reference. Locate the name of the switch server and then use it to configure the integration server as follows:
- From the command line, run the following command, where switchName is the metadata.name value that was specified while creating the switch server:
  oc get switchserver switchName
- Make a note of the AGENTCONFIGURATIONNAME value that is shown in the output.
- Add the AGENTCONFIGURATIONNAME value to the spec.configurations parameter; for example:
  configurations:
    - mark-101-switch-agentx
A configuration object of type REST Admin SSL files will be automatically created and applied to the integration server when spec.adminServerSecure is set to true. This default setting generates a configuration object by using a predefined ZIP file that contains self-signed certificates, and also creates a secret to store the ZIP file contents. For more information, see the spec.adminServerSecure description under Custom resource values, and Creating an instance.
You can also choose to define the configurations that you want to apply to the integration server within the same YAML file that contains the integration server configuration. If preferred, you can define multiple configurations and integration servers within the same YAML file. Each definition can be separated with three hyphens (---). The configurations and integration servers will be created independently, but any configurations that you specify for an integration server will be applied during deployment.
apiVersion: appconnect.ibm.com/v1beta1
kind: Configuration
metadata:
  name: setdbp-conf
  namespace: mynamespace
spec:
  data: ABCDefghIJLOMNehorewirpewpTEV843BCDefghIJLOMNorewirIJLOMNeh842lkalkkrmwo4tkjlfgBCDefghIJLOMNehhIJLOMNehBCDefghIJLOMNehorewirpewpTEV84ksjf
  type: setdbparms
---
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: customerapi
  namespace: mynamespace
spec:
  license:
    accept: true
    license: L-APEH-CEKET7
    use: AppConnectEnterpriseProduction
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: 300m
            memory: 300Mi
          requests:
            cpu: 300m
            memory: 300Mi
  adminServerSecure: true
  router:
    timeout: 120s
  barURL: https://contentserverdash-ibm-ace-dashboard-prod:3443/v1/directories/CustomerDatabaseV1?12345678-abf5-491d-be0e-219abcde2338
  useCommonServices: true
  designerFlowsOperationMode: all
  designerFlowsType: event-driven-or-api-flows
  service:
    endpointType: http
  version: 11.0.0-eus
  logFormat: json
  replicas: 3
  configurations: ["setdbp-conf", "my-accounts"]
- Save this file with a .yaml extension; for example, customerapi_cr.yaml.
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- Run the following command to create the integration server and apply any defined configurations.
(Use the name of the .yaml file that you created.)
oc apply -f customerapi_cr.yaml
- Run the following command to check the status of the integration server and verify that it is running:
oc get integrationservers -n namespace
You should also be able to view this integration server in your App Connect Dashboard instance.
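If you want to check the status from a script, you can parse the STATUS column of the oc get output. The sketch below runs against captured sample output rather than a live cluster; the column layout (NAME, RESOLVEDVERSION, REPLICAS, AVAILABLEREPLICAS, STATUS, AGE) and the instance values are assumptions, so confirm them against the output of your own oc get integrationservers command:

```shell
# Sample output in the shape that 'oc get integrationservers' might return
# (hypothetical values; on a real cluster, capture this with:
#   sample_output=$(oc get integrationservers -n <namespace>) ).
sample_output='NAME          RESOLVEDVERSION   REPLICAS   AVAILABLEREPLICAS   STATUS   AGE
customerapi   11.0.0.10-r2      3          3                   Ready    2m'

# Extract the STATUS column (assumed to be field 5) for a named instance.
status=$(printf '%s\n' "$sample_output" | awk '$1 == "customerapi" {print $5}')
echo "$status"
```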
Deleting an instance
If no longer required, you can uninstall an integration server by deleting it. You can delete an integration server by using the Red Hat OpenShift web console or CLI.
Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).
Deleting an instance from the Red Hat OpenShift web console
To delete an integration server by using the Red Hat OpenShift web console, complete the following steps:
- Applicable to IBM Cloud Pak for Integration only:
- If not already logged in, log in to the Platform Navigator for your cluster.
- From the IBM Cloud Pak menu , click OpenShift Console and log in if prompted.
- Applicable to an IBM App Connect Operator deployment only: From a browser window, log in to the Red Hat OpenShift Container Platform web console.
- From the navigation, click Operators > Installed Operators.
- If required, select the namespace (project) in which you installed the IBM App Connect Operator.
- From the Installed Operators page, click IBM App Connect.
- From the Operator Details page for the App Connect Operator, click the App Connect Integration Server tab.
- Locate the IntegrationServer instance that you want to delete.
- Click the options icon () for that row to open the options menu, and then click the Delete option.
- Confirm the deletion.
Deleting an instance from the Red Hat OpenShift CLI
To delete an integration server by using the Red Hat OpenShift CLI, complete the following steps:
- From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
- Run the following command to delete the integration server instance, where
instanceName is the value of the metadata.name parameter.
oc delete integrationserver instanceName
Custom resource values
The following table lists the configurable parameters and default values for the custom resource.
In the table, [] depicts an array. For example, spec.service.ports[].fieldName indicates that an array of custom ports can be specified (for a service). When used together with spec.service.type, you can specify multiple port definitions as shown in the following example:
spec:
service:
ports:
- name: config-abc
nodePort: 32000
port: 9910
protocol: TCP
targetPort: 9920
- name: config-xyz
nodePort: 31500
port: 9376
protocol: SCTP
targetPort: 9999
type: NodePort
Parameter | Description | Default |
---|---|---|
apiVersion |
The API version that identifies which schema is used for this integration server. |
appconnect.ibm.com/v1beta1 |
kind |
The resource type. |
IntegrationServer |
metadata.name |
A unique short name by which the integration server can be identified. |
|
metadata.namespace |
The namespace (project) in which the integration server is installed. The namespace in which you install must be no more than 40 characters in length. |
|
spec.adminServerSecure (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
An indication of whether to enable TLS on the admin server port for use by the integration server administration REST API and for secure communication between the App Connect Dashboard and the integration server. The administration REST API can be used to create or report security credentials for an integration server. Valid values are true and false. When set to true (the default), HTTPS interactions are enabled between the Dashboard and integration server by using self-signed TLS certificates that are provided by a configuration object of type REST Admin SSL files. If set to false, the admin server port uses HTTP. |
true |
spec.affinity (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.) Custom settings are supported only for spec.affinity.nodeAffinity definitions. For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation. |
|
spec.annotations (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. The custom annotations that you specify will be merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value will overwrite the default value. |
|
spec.barURL |
The URL of the location where the BAR file is stored. This is typically the generated content server URL of a BAR file that is being deployed to an integration server by using the App Connect Dashboard. If you want to use a custom server runtime image to deploy an integration server, use spec.pod.containers.runtime.image to specify this image. |
|
spec.configurations[] |
An array of existing configurations that you want to apply to the BAR file being deployed. These configurations must be in the same namespace as the integration server. To specify a configuration, use the metadata.name value that was specified while creating that configuration. For information about creating configurations, see Integration Server Configuration reference. To see examples of how to specify one or more values for spec.configurations, see Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift CLI. |
|
spec.defaultAppName |
A name for the default application for the deployment of independent resources. |
DefaultApplication |
spec.designerFlowsOperationMode |
Indicate whether to create a Toolkit integration or a Designer integration.
|
disabled |
spec.designerFlowsType |
Indicate the type of flow if creating a Designer integration.
This parameter is applicable for a Designer integration only, so if spec.designerFlowsOperationMode is set to disabled, you must omit spec.designerFlowsType. This parameter is supported only for certain versions. See spec.version for details. |
|
spec.disableRoutes (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
Indicate whether to disable the automatic creation of routes, which externally expose a service that identifies the set of integration server pods. Valid values are true and false. Set this value to true to disable the automatic creation of external HTTP and HTTPS routes. |
false |
spec.labels (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format key: value. The custom labels that you specify will be merged with the default (generated) labels. If duplicate label keys are detected, the custom value will overwrite the default value. |
|
spec.license.accept |
An indication of whether the license should be accepted. Valid values are true and false. To install, this value must be set to true. |
false |
spec.license.license |
See Licensing reference for IBM App Connect Operator for the valid values. |
|
spec.license.use |
See Licensing reference for IBM App Connect Operator for the valid values. If using an IBM Cloud Pak for Integration license, spec.useCommonServices must be set to true. |
|
spec.logFormat |
The format used for the container logs that are output to the container's console. Valid values are basic and json. |
basic |
spec.pod.containers.connectors.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.connectors.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.connectors.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.connectors.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.connectors.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.connectors.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.connectors.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.connectors.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.containers.connectors.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.connectors.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.connectors.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.connectors.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r2 or earlier) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
spec.pod.containers.designereventflows.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.designereventflows.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.designereventflows.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.designereventflows.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.designereventflows.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.designereventflows.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.designereventflows.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.designereventflows.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
4 |
spec.pod.containers.designereventflows.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.designereventflows.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
750Mi |
spec.pod.containers.designereventflows.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.designereventflows.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
400Mi |
spec.pod.containers.designerflows.livenessProbe.failureThreshold |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
3 (1 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.designerflows.livenessProbe.periodSeconds |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.designerflows.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
30 (5 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.failureThreshold |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
3 (1 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
180 (10 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.periodSeconds |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
10 (5 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
30 (3 for 11.0.0.10-r2 or earlier) |
spec.pod.containers.designerflows.resources.limits.cpu |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.designerflows.resources.limits.memory |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.designerflows.resources.requests.cpu |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.designerflows.resources.requests.memory |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
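The designerflows probe and resource parameters above can be combined in a single IntegrationServer custom resource. The following fragment is a sketch only: the resource name is a placeholder, and the values simply restate the documented defaults as a starting point for tuning.

```yaml
# Sketch: overriding spec.pod.containers.designerflows settings.
# The metadata name is a placeholder; values mirror the documented defaults.
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: example-is                 # placeholder name
spec:
  pod:
    containers:
      designerflows:
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 360  # increase if the container starts slowly
          periodSeconds: 10
          timeoutSeconds: 30
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 180
          periodSeconds: 10
          timeoutSeconds: 30
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 250m
            memory: 256Mi
```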
spec.pod.containers.proxy.livenessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the liveness probe (which checks whether the runtime container is still running) can fail before taking action. |
1 |
spec.pod.containers.proxy.livenessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the liveness probe, which checks whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period. |
60 |
spec.pod.containers.proxy.livenessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the liveness probe that checks whether the runtime container is still running. |
5 |
spec.pod.containers.proxy.livenessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out. |
3 |
spec.pod.containers.proxy.readinessProbe.failureThreshold (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The number of times the readiness probe (which checks whether the runtime container is ready) can fail before taking action. |
1 |
spec.pod.containers.proxy.readinessProbe.initialDelaySeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long to wait (in seconds) before starting the readiness probe, which checks whether the runtime container is ready. |
5 |
spec.pod.containers.proxy.readinessProbe.periodSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How often (in seconds) to perform the readiness probe that checks whether the runtime container is ready. |
5 |
spec.pod.containers.proxy.readinessProbe.timeoutSeconds (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out. |
3 |
spec.pod.containers.proxy.resources.limits.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.proxy.resources.limits.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.proxy.resources.requests.cpu (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.proxy.resources.requests.memory (Only applicable if spec.version resolves to 11.0.0.10-r3-eus, 11.0.0.11-r1, or later) |
The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
spec.pod.containers.runtime.image |
The name of the custom server runtime image to use. |
|
spec.pod.containers.runtime.imagePullPolicy |
The image pull policy for the runtime image. Valid values are Always, Never, and IfNotPresent. |
IfNotPresent |
spec.pod.containers.runtime.livenessProbe.failureThreshold |
The number of times the liveness probe (which checks whether the runtime container is still running) can fail before taking action. |
1 |
spec.pod.containers.runtime.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the liveness probe, which checks whether the runtime container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.runtime.livenessProbe.periodSeconds |
How often (in seconds) to perform the liveness probe that checks whether the runtime container is still running. |
10 |
spec.pod.containers.runtime.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the runtime container is still running) times out. |
5 |
spec.pod.containers.runtime.readinessProbe.failureThreshold |
The number of times the readiness probe (which checks whether the runtime container is ready) can fail before taking action. |
1 |
spec.pod.containers.runtime.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the readiness probe, which checks whether the runtime container is ready. |
10 |
spec.pod.containers.runtime.readinessProbe.periodSeconds |
How often (in seconds) to perform the readiness probe that checks whether the runtime container is ready. |
5 |
spec.pod.containers.runtime.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the runtime container is ready) times out. |
3 |
spec.pod.containers.runtime.resources.limits.cpu |
The upper limit of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
1 |
spec.pod.containers.runtime.resources.limits.memory |
The memory upper limit (in bytes) that is allocated for running the runtime container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
512Mi |
spec.pod.containers.runtime.resources.requests.cpu |
The minimum number of CPU cores that are allocated for running the runtime container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core). |
250m |
spec.pod.containers.runtime.resources.requests.memory |
The minimum memory (in bytes) that is allocated for running the runtime container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. |
256Mi |
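As with the designerflows container, the runtime container's CPU and memory allocations can be tuned in the CR. A minimal sketch, using the documented defaults as starting values:

```yaml
spec:
  pod:
    containers:
      runtime:
        resources:
          limits:
            cpu: "1"          # upper bound; fractional or millicore values are also valid
            memory: 512Mi
          requests:
            cpu: 250m         # 0.25 core
            memory: 256Mi
```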
spec.pod.containers.tracingagent.livenessProbe.failureThreshold |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.tracingagent.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.tracingagent.livenessProbe.periodSeconds |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.tracingagent.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.tracingagent.readinessProbe.failureThreshold |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.tracingagent.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.tracingagent.readinessProbe.periodSeconds |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.tracingagent.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.containers.tracingcollector.livenessProbe.failureThreshold |
The number of times the liveness probe (which checks whether the container is still running) can fail before taking action. |
1 |
spec.pod.containers.tracingcollector.livenessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period. |
360 |
spec.pod.containers.tracingcollector.livenessProbe.periodSeconds |
How often (in seconds) to perform the liveness probe that checks whether the container is still running. |
10 |
spec.pod.containers.tracingcollector.livenessProbe.timeoutSeconds |
How long (in seconds) before the liveness probe (which checks whether the container is still running) times out. |
5 |
spec.pod.containers.tracingcollector.readinessProbe.failureThreshold |
The number of times the readiness probe (which checks whether the container is ready) can fail before taking action. |
1 |
spec.pod.containers.tracingcollector.readinessProbe.initialDelaySeconds |
How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready. |
10 |
spec.pod.containers.tracingcollector.readinessProbe.periodSeconds |
How often (in seconds) to perform the readiness probe that checks whether the container is ready. |
5 |
spec.pod.containers.tracingcollector.readinessProbe.timeoutSeconds |
How long (in seconds) before the readiness probe (which checks whether the container is ready) times out. |
3 |
spec.pod.imagePullSecrets.name |
The secret used for pulling images. |
|
spec.replicas |
The number of replica pods to run for each deployment. Increasing the number of replicas will proportionally increase the resource requirements. |
1 |
spec.router.timeout |
The timeout value (in seconds) on the OpenShift routes. |
30s |
spec.service.endpointType |
The transport protocol, which determines whether the endpoint of the deployed integration is secured. |
http |
spec.service.ports[].name (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The name of a port definition on the service (defined by spec.service.type), which is created for accessing the set of pods. The name must contain only lowercase alphanumeric characters and a hyphen (-), and begin and end with an alphanumeric character. If you need to expose more than one port for the service, you can use the collection of spec.service.ports[].fieldName parameters to configure multiple port definitions as an array. |
|
spec.service.ports[].nodePort (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port on which each node listens for incoming requests for the service. Applicable when spec.service.type is set to NodePort. The port number must be in the range 30000 to 32767. Ensure that this port is not already in use by another service. You can check which node ports are already in use by running the following command and then checking under the PORT(S) column in the output:
|
|
spec.service.ports[].port (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port that exposes the service to pods within the cluster. |
|
spec.service.ports[].protocol (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The protocol of the port. |
|
spec.service.ports[].targetPort (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The port on which the pods will listen for connections from the service. The port number must be in the range 1 to 65535. |
|
spec.service.type (Only applicable if spec.version resolves to 11.0.0.10-r1 or later) |
The type of service to create for accessing the set of pods:
|
|
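The spec.service.ports[] parameters above form an array of port definitions on the service. The fragment below is a sketch only: the service type, port name, and port numbers are invented for illustration.

```yaml
spec:
  service:
    endpointType: http
    type: NodePort              # assumption: NodePort service chosen for illustration
    ports:
    - name: extra-http          # lowercase alphanumerics and hyphens only
      protocol: TCP
      port: 7800                # port exposed to pods within the cluster
      targetPort: 7800          # port the pods listen on (1 to 65535)
      nodePort: 30500           # must be 30000 to 32767 and not already in use
```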
spec.tracing.enabled |
An indication of whether to enable transaction tracing, which will push trace data to the IBM Cloud Pak for Integration Operations Dashboard to aid with problem investigation and troubleshooting. An Operations Dashboard instance must be available to process the required registration approval for tracing, as described in Considerations for enabling tracing in the Operations Dashboard. Valid values are true and false. |
true |
spec.tracing.namespace |
The namespace where the Operations Dashboard was deployed. Applicable only if spec.tracing.enabled is set to true. |
|
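Enabling tracing therefore takes these two settings together; for example (the namespace value is a placeholder for wherever your Operations Dashboard instance runs):

```yaml
spec:
  tracing:
    enabled: true
    namespace: ops-dashboard    # placeholder: namespace of the Operations Dashboard instance
```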
spec.useCommonServices |
An indication of whether to enable use of IBM Cloud Pak foundational services (previously IBM Cloud Platform Common Services). Valid values are true and false. Must be set to true if using an IBM Cloud Pak for Integration license (specified via spec.license.use). |
true |
spec.version |
The product version that the integration server is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest available version of the IBM App Connect Operator. To view the available values that you can choose from and the licensing requirements, see spec.version values and Licensing reference for IBM App Connect Operator. If you specify a fully qualified version of 11.0.0.10-r2 or earlier, or specify a channel that resolves to 11.0.0.10-r2 or earlier, you must omit spec.designerFlowsType because it is not supported in those versions. |
11.0.0 |
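Pulling several of the parameters in this table together, a minimal IntegrationServer sketch might look like the following. The names, namespace, and license ID are placeholders; consult the licensing reference for values that are valid for your resolved version.

```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: example-is                        # placeholder
  namespace: my-namespace                 # placeholder
spec:
  version: 11.0.0                         # channel; or a fully qualified version
  license:
    accept: true
    license: L-XXXX-XXXXXX                # placeholder license ID
    use: AppConnectEnterpriseProduction   # assumption: adjust to your entitlement
  replicas: 1
  router:
    timeout: 30s
```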
Default affinity settings
The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated. You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity are ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              <copy of the pod labels>
          topologyKey: kubernetes.io/hostname
        weight: 100
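Because only spec.affinity.nodeAffinity accepts custom settings, an override sketch might look like this (the node label key and value are hypothetical):

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-role.example.com/integration   # hypothetical node label
            operator: In
            values:
            - "true"
```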
Load balancing
When you deploy an integration server, routes are created by default in Red Hat OpenShift to externally expose a service that identifies the set of pods where the integration runs. Load balancing is applied when incoming traffic is forwarded to replica pods, and the routing algorithm used depends on the type of security you've configured for your flows:
- http flows: These flows use a non-SSL route, which defaults to a "round-robin" approach where each replica is sent a message in turn.
- https flows: These flows use an SSL-passthrough route, which defaults to the "source" approach where the source IP address is used. This means that a single source application will feed a specific replica.
To change the load balancing configuration that a route uses, you can add an appropriate annotation to the route resource. For example, the following CR setting switches the route to use the round-robin algorithm:
spec:
  annotations:
    haproxy.router.openshift.io/balance: roundrobin
For more information about the available annotation options, see Route-specific annotations in the Red Hat OpenShift documentation.
Limitations
Supports only the amd64 and s390x CPU architectures. For more information, see Supported platforms.