App Connect Switch Server reference

Use this reference to create, update, or delete switch servers by using the Red Hat® OpenShift® web console or CLI, or the CLI for a Kubernetes environment.

Introduction

The App Connect Switch Server API enables you to create IBM® App Connect switch servers that are required to configure connectivity for hybrid integrations. When used with a connectivity agent, a calling flow within an integration in App Connect Designer or App Connect Dashboard can invoke callable flows that are running in either IBM App Connect Enterprise or IBM Integration Bus on premises. Similarly, an integration in App Connect Designer or App Connect Dashboard can interact with an application in a private network by using a connectivity agent.

For more information about callable flows, see Callable message flows in the IBM App Connect Enterprise documentation.

Usage guidelines:

One switch server per namespace (project) is recommended.

Prerequisites

Red Hat OpenShift SecurityContextConstraints requirements

IBM App Connect runs under the default restricted SecurityContextConstraints.

Resources required

Minimum recommended requirements:

  • CPU: 1 Core
  • Memory: 1 GB

For information about how to configure these values, see Custom resource values.
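For example, a minimal sketch of the resource settings in a SwitchServer custom resource is shown below. The parameter paths are taken from the Custom resource values table in this reference, and the values shown are illustrative only; size them for your own workload.

spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            # Minimum CPU and memory reserved for the runtime container
            cpu: 250m
            memory: 256Mi
          limits:
            # Upper bounds for the runtime container
            cpu: '1'
            memory: 1Gi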

Creating an instance

You can create a switch server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment.

Before you begin

  • Ensure that the Prerequisites are met.
  • Decide how to control upgrades to the instance when a new version becomes available. The spec.version value that you specify while creating the instance will determine how that instance is upgraded after installation, and whether you will need to specify a different license or version number for the upgrade. To help you decide whether to specify a spec.version value that either lets you subscribe to a channel for updates, or that uses a specific version for the instance, review the Upgrade considerations for channels, versions, and licenses before you start this task.
    Namespace restriction for an instance, server, configuration, or trace:

    The namespace in which you create an instance or object must be no more than 40 characters in length.

When you create the switch server, it is recommended that you set the metadata.name value to default, although you can use another name if preferred. The deployed switch server needs to be associated with your App Connect Designer and App Connect Dashboard instances in the same namespace to enable you to configure callable flow or private network connectivity.

  • Three configuration objects of type Agentx, AgentA, and Private Network Agent are created by default for the switch server to help you configure connectivity with callable flows or a private network. The default configuration objects are named in the format metadata.name-agentx, metadata.name-agenta, and metadata.name-privatenetworkagent, where metadata.name is the switch server name. You can view these configuration objects from the Operator deployment in the Red Hat OpenShift web console or in your Dashboard instance. For more information about configuration objects, see Configuration types.
  • App Connect Designer requirements:

    To create and run callable flows in the Designer instance or create flows that interact with applications in a private network, you need to manually configure the Designer instance to use an existing switch server by using the spec.switchServer.name setting, as shown in the sketch after this list. You can apply this setting while creating the Designer instance or by updating its custom resource settings as described in Creating an instance and Updating the custom resource settings for an instance.

    Tip: If the metadata.name value for the switch server is set to default, the Designer instance is automatically configured to use the switch server even if spec.switchServer.name is not explicitly specified.
  • App Connect Dashboard requirements:
    • For integrations that run callable flows, the switch server is automatically associated with the Dashboard instance. You set up the required connectivity by ensuring that the Agentx configuration object is selected when you create the integration server or integration runtime.
    • To deploy integrations that interact with applications in a private network, you need to manually configure the Dashboard instance to use an existing switch server by using the spec.switchServer.name setting. You can apply this setting while creating the Dashboard instance or by updating its custom resource settings as described in Creating an instance and Updating the custom resource settings for an instance. To set up the required connectivity to the private network, you can either use the default Private Network Agent configuration object or create one of your own.
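The following fragment is a minimal sketch of how the spec.switchServer.name setting might appear in the custom resource of a Designer or Dashboard instance. The field path is taken from the requirements above; the surrounding settings in those custom resources are not shown here, and the switch server name (default) is only an example.

spec:
  switchServer:
    # metadata.name of an existing switch server in the same namespace
    name: default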

Creating an instance from the Red Hat OpenShift web console

To create a switch server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Switch Server tab.
  6. Click Create SwitchServer.

    From the Details tab on the Operator details page, you can also locate the Switch Server tile and click Create instance to specify installation settings for the switch server.

  7. To use the Form view, ensure that Form view is selected and then complete the fields. Note that some fields might not be represented in the form.
    • Name: Enter a short distinctive name that uniquely identifies this switch server; for example, default.
    • Channel or version: Select an App Connect product (fix pack) version that the switch server is based on. You can select a channel that will resolve to the latest fully qualified version on that channel, or select a specific fully qualified version. If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster. For more information about these values, see spec.version values.
    • Accept: Review the preferred license in the supplied link and then click this check box to accept the terms and conditions.
    • License LI: Select a license identifier that aligns with the channel or the fully qualified version that you selected. For more information, see Licensing reference for IBM App Connect Operator.
    • License use: Select an appropriate CloudPakForIntegration or AppConnectEnterprise license type that you are entitled to use.
  8. Optional: For a finer level of control over your installation settings, click YAML view to switch to the YAML view. Update the content of the YAML editor with the properties and values that you require for this switch server.
  9. Click Create to start the deployment. An entry for the switch server is shown in the SwitchServers table, initially with a Pending status.
  10. Click the switch server name to view information about its definition and current status.

    On the Details tab of the page, the Conditions section reveals the progress of the deployment. You can use the breadcrumb trail to return to the (previous) Operator details page for the App Connect Operator.

    When the deployment is complete, the status is shown as Ready in the SwitchServers table.

Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI

To create a switch server by using the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From your local computer, create a YAML file that contains the configuration for the switch server that you want to create. Include the metadata.namespace parameter to identify the namespace in which you want to create the switch server; this should be the same namespace where the other App Connect instances or resources are created.
    Example 1 (Red Hat OpenShift only):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: SwitchServer
    metadata:
      name: default
      namespace: myNamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: CloudPakForIntegrationNonProduction
      version: '12.0'
    Example 2 (Red Hat OpenShift or Kubernetes):
    apiVersion: appconnect.ibm.com/v1beta1
    kind: SwitchServer
    metadata:
      name: default
      namespace: myNamespace
    spec:
      license:
        accept: true
        license: L-QECF-MBXVLU
        use: AppConnectEnterpriseProduction
      version: '12.0'
  2. Save this file with a .yaml extension; for example, switch_cr.yaml.
  3. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  4. Run the following command to create the switch server. (Use the name of the .yaml file that you created.)
    oc apply -f switch_cr.yaml
  5. Run the following command to check the status of the switch server and verify that it is running, where namespace is the namespace (project) in which the switch server was created:
    oc get switchservers -n namespace

    You should see output similar to this. (If you need to configure an integration server or integration runtime later to use this switch server for callable flows, you can specify the AGENTCONFIGURATIONNAME value as a configuration object.)

    NAME      RESOLVEDVERSION   CUSTOMIMAGES   STATUS   AGENTCONFIGURATIONNAME   AGE
    default   12.0.12.0-r1      false          Ready    default-agentx           3m2s
    Note: If you are using a Kubernetes environment, ensure that you create an ingress definition after you create this instance, to make its internal service publicly available. For more information, see Creating ingress definitions for external access to your IBM App Connect instances.

Updating the custom resource settings for an instance

If you want to change the settings of an existing switch server, you can edit its custom resource settings by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. For example, you might want to change the log level for the container logs or apply custom labels to the pods.

Restriction: You cannot update standard settings such as the resource type (kind), the name and namespace (metadata.name and metadata.namespace), some system-generated settings, or settings such as the storage type of certain components. An error message is displayed when you try to save.

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).


Updating an instance from the Red Hat OpenShift web console

To update a switch server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Switch Server tab.
  6. Locate and click the name of the switch server that you want to update.
  7. Click the YAML tab.
  8. Update the content of the YAML editor as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  9. Click Save to save your changes.

Updating an instance from the Red Hat OpenShift CLI or Kubernetes CLI

To update a switch server from the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the switch server is deployed, run the oc edit command to partially update the instance, where instanceName is the name (metadata.name value) of the instance.
    oc edit switchserver instanceName

    The switch server CR automatically opens in the default text editor for your operating system.

  3. Update the contents of the file as required. You can update existing values, add new entries, or delete entries. For information about the available parameters and their values, see Custom resource values.
  4. Save the YAML definition and close the text editor to apply the changes.
Tip:

If preferred, you can also use the oc patch command (together with standard shell features such as command substitution) to apply a patch, or use oc apply with the appropriate YAML settings.

For example, you can save the YAML settings to a file with a .yaml extension (for example, updatesettings.yaml), and then run oc patch as follows to update the settings for an instance:

oc patch switchserver instanceName --type='merge' --patch "$(cat updatesettings.yaml)"
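
As an illustrative sketch, an updatesettings.yaml file that changes the container log format (a parameter that is documented in Custom resource values) might contain the following; the chosen value is just an example.

spec:
  # Switch the container logs from the default basic format to JSON
  logFormat: json

Because the patch type is merge, only the settings in the file are changed; all other settings in the custom resource are left as they are.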

Deleting an instance

If no longer required, or if you encounter connection or other issues, you can delete a switch server by using the Red Hat OpenShift web console or CLI, or the CLI for a Kubernetes environment. (For information about troubleshooting issues, see Troubleshooting switch server connectivity.)

Ensure that you have cluster administrator authority or have been granted the appropriate role-based access control (RBAC).

Deleting an instance from the Red Hat OpenShift web console

To delete a switch server by using the Red Hat OpenShift web console, complete the following steps:

  1. From a browser window, log in to the Red Hat OpenShift Container Platform web console. Ensure that you are in the Administrator perspective of the web console.
  2. From the navigation, click Operators > Installed Operators.
  3. If required, select the namespace (project) in which you installed the IBM App Connect Operator.
  4. From the Installed Operators page, click IBM App Connect.
  5. From the Operator details page for the App Connect Operator, click the Switch Server tab.
  6. Locate the instance that you want to delete.
  7. Click the options icon to open the options menu, and then click the Delete option.
  8. Confirm the deletion.

Deleting an instance from the Red Hat OpenShift CLI or Kubernetes CLI

To delete a switch server by using the Red Hat OpenShift CLI, complete the following steps.

Note: These instructions relate to the Red Hat OpenShift CLI, but can be applied to a Kubernetes environment by using the relevant command to log in to the cluster, and substituting oc with kubectl where appropriate.
  1. From the command line, log in to your Red Hat OpenShift cluster by using the oc login command.
  2. From the namespace where the switch server instance is deployed, run the following command to delete the instance, where instanceName is the value of the metadata.name parameter.
    oc delete switchserver instanceName

Custom resource values

The following table lists the configurable parameters and default values for the custom resource.

Each entry below shows the parameter, its description, and its default value (where one applies).

apiVersion

The API version that identifies which schema is used for this switch server.

appconnect.ibm.com/v1beta1

kind

The resource type.

SwitchServer

metadata.name

A unique short name by which the switch server can be identified. It is recommended that you set this value to default.

metadata.namespace

The namespace (project) in which the switch server is installed.

The namespace in which you create an instance or object must be no more than 40 characters in length.

spec.affinity

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify custom affinity settings that will control the placement of pods on nodes. The custom affinity settings that you specify will completely overwrite all of the default settings. (The current default settings are shown after this table.)

Custom settings are supported only for nodeAffinity. If you provide custom settings for nodeAntiAffinity, podAffinity, or podAntiAffinity, they will be ignored.

For more information about spec.affinity.nodeAffinity definitions, see Controlling pod placement on nodes using node affinity rules in the OpenShift documentation and Assign Pods to Nodes using Node Affinity in the Kubernetes documentation.

spec.annotations

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom annotations (as arbitrary metadata) to apply to each pod that is created during deployment. Specify each annotation as a key/value pair in the format key: value. For example:

spec:
  annotations:
    key1: value1
    key2: value2

The custom annotations that you specify are merged with the default (generated) annotations. If duplicate annotation keys are detected, the custom value overwrites the default value.

spec.labels

(Only applicable if spec.version resolves to 11.0.0.10-r1 or later)

Specify one or more custom labels (as classification metadata) to apply to each pod that is created during deployment. Specify each label as a key/value pair in the format labelKey: labelValue. For example:

spec:
  labels:
    key1: value1
    key2: value2

The custom labels that you specify are merged with the default (generated) labels. If duplicate label keys are detected, the custom value overwrites the default value.

spec.license.accept

An indication of whether the license should be accepted.

Valid values are true and false. To install, this value must be set to true.

false

spec.license.license

See Licensing reference for IBM App Connect Operator for the valid values.

spec.license.use

See Licensing reference for IBM App Connect Operator for the valid values.

spec.logFormat

The format used for the container logs that are output to the container's console.

Valid values are basic and json.

basic

spec.pod.containers.runtime.image

The path to the Docker image.

spec.pod.containers.runtime.imagePullPolicy

Indicate whether you want images to be pulled every time, never, or only if they're not present.

Valid values are Always, Never, and IfNotPresent.

IfNotPresent

spec.pod.containers.runtime.livenessProbe.failureThreshold

The number of times the liveness probe (which checks whether the container is still running) can fail before taking action.

1

spec.pod.containers.runtime.livenessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the liveness probe, which checks whether the container is still running. Increase this value if your system cannot start the container in the default time period.

360
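
For example, if the container needs longer than the default 360 seconds to start in your environment, you might increase the delay as follows. This fragment is only a sketch that uses the parameter path from this table, with an arbitrary value.

spec:
  pod:
    containers:
      runtime:
        livenessProbe:
          # Wait longer before the first liveness check (default is 360)
          initialDelaySeconds: 600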

spec.pod.containers.runtime.livenessProbe.periodSeconds

How often (in seconds) to perform the liveness probe that checks whether the container is still running.

10

spec.pod.containers.runtime.livenessProbe.timeoutSeconds

How long (in seconds) before the liveness probe (which checks whether the container is still running) times out.

5

spec.pod.containers.runtime.readinessProbe.failureThreshold

The number of times the readiness probe (which checks whether the container is ready) can fail before taking action.

1

spec.pod.containers.runtime.readinessProbe.initialDelaySeconds

How long to wait (in seconds) before starting the readiness probe, which checks whether the container is ready.

10

spec.pod.containers.runtime.readinessProbe.periodSeconds

How often (in seconds) to perform the readiness probe that checks whether the container is ready.

5

spec.pod.containers.runtime.readinessProbe.timeoutSeconds

How long (in seconds) before the readiness probe (which checks whether the container is ready) times out.

3

spec.pod.containers.runtime.resources.limits.cpu

The upper limit of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

When you create a switch server instance, no CPU limits are set on the resource if spec.version resolves to 12.0.7.0-r1 or later. If required, either use spec.pod.containers.runtime.resources.limits.cpu to set the CPU limits, or configure CPU limits for the namespace as described in Configure Memory and CPU Quotas for a Namespace in the Kubernetes documentation.

1 (when spec.version is 12.0.6.0-r1 or earlier)

spec.pod.containers.runtime.resources.limits.memory

The memory upper limit (in bytes) that is allocated for running the container. Specify integers with suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

512Mi

spec.pod.containers.runtime.resources.requests.cpu

The minimum number of CPU cores that are allocated for running the container. Specify integer, fractional (for example, 0.5), or millicore values (for example, 100m, equivalent to 0.1 core).

250m

spec.pod.containers.runtime.resources.requests.memory

The minimum memory (in bytes) that is allocated for running the container. Specify integers with one of these suffixes: E, P, T, G, M, K, or power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

256Mi

spec.pod.imagePullSecrets.name

The secret used for pulling images.

spec.version

The product version that the switch server is based on. Can be specified by using a channel or as a fully qualified version. If you specify a channel, you must ensure that the license aligns with the latest fully qualified version in the channel.

If you are using IBM App Connect Operator 7.1.0 or later, the supported channels or versions will depend on the Red Hat OpenShift version that is installed in your cluster.

To view the available values that you can choose from, see spec.version values.

12.0

Default affinity settings

The default settings for spec.affinity are as follows. Note that the labelSelector entries are automatically generated.

You can overwrite the default settings for spec.affinity.nodeAffinity with custom settings, but attempts to overwrite the default settings for spec.affinity.podAntiAffinity will be ignored.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - s390x
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              <copy of the pod labels>
          topologyKey: kubernetes.io/hostname
        weight: 100
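
For example, a sketch of custom node affinity settings in a SwitchServer custom resource might look like the following. Anything that you specify under spec.affinity completely replaces the default nodeAffinity settings shown above, so include every architecture that you still want to schedule on; the single-architecture selection here is purely illustrative.

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Restrict scheduling to amd64 nodes only (illustrative)
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64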