Technical overview

An overview of the component and network architecture of the Operations Dashboard.

Attention: Effective with IBM Cloud Pak® for Integration 2022.4.1, the integration tracing capability (IBM Cloud Pak for Integration Operations Dashboard) is deprecated. This capability will be removed in a future release. No further updates will be provided. No new uses of the Operations Dashboard should be implemented. Users who want to implement tracing should use Instana observability. For more information, see Enabling IBM Instana monitoring.

Main components

An IBM Cloud Pak for Integration Operations Dashboard deployment includes the following main components:

  • Front end pods, which include containers that provide the user interface and API endpoints:

    • ui-manager: Operations Dashboard Web Console

    • ui-proxy: Front end proxy (HTTP server)

    • registration-endpoint: HTTP server receiving integration capabilities registration requests

    • registration-processor: Responsible for processing integration capabilities registration requests

    • legacy-ui: Jaeger UI for support and debugging purposes, not intended for users

  • Scheduler pods, which include containers that schedule and distribute background tasks such as alerts, reports, and housekeeping:

    • scheduler: Responsible for scheduling and distributing tasks

    • api-proxy: API proxy (HTTP server)

  • Job pods, which include containers that execute report and alert tasks:

    • job: Responsible for executing reports and alerts

    • api-proxy: API proxy (HTTP server)

  • Housekeeping pods, which include containers that execute internal, system-wide tasks:

    • housekeeping: Responsible for executing the tasks

    • api-proxy: API proxy (HTTP server)

  • Store master node pods, which include an Elasticsearch container (named store) that manages the Store cluster.

  • Store data node pods, which include an Elasticsearch container (named store) that stores tracing data. Together, the master and data node pods form the Store's Elasticsearch cluster (see the sketch after this list).

  • Configuration Database pods, which include a container (named config-db) that runs the operational and configuration database.
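
As a minimal sketch of how the Store's two ports (listed in Table 1 below) might be exposed within the namespace, consider the following Service. The Service name, pod label, and headless configuration are illustrative assumptions, not the product's actual manifests:

```yaml
# Minimal sketch only: the Service name, label, and headless setting are
# assumptions for illustration, not the product's actual manifests.
apiVersion: v1
kind: Service
metadata:
  name: store                  # hypothetical Service name ("store" is the container name)
spec:
  clusterIP: None              # headless Service, typical for Elasticsearch node discovery (assumption)
  selector:
    app: od-store              # hypothetical label shared by Store master and data node pods
  ports:
    - name: rest               # TCP 9200 (TLS): tracing data from deployed capabilities
      port: 9200
    - name: transport          # TCP 9300 (TLS): Store inter-pod cluster communication
      port: 9300
```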

Network policies

The following table describes the permitted incoming network connections to Operations Dashboard pods. These connections are enforced by NetworkPolicy objects; a sketch of one such policy follows the table.

Table 1. Incoming network connections

Source | Destination | Type | Description
Namespace of OpenShift Container Platform Ingress Controller pods (labeled with network.openshift.io/policy-group: ingress) | Operations Dashboard front end pods | TCP 8443 (TLS) | User's browser accessing Operations Dashboard Web Console
Deployed capabilities (all namespaces) | Operations Dashboard front end pods | TCP 8090 (TLS) | Capability registration
Same namespace | Scheduler, jobs and housekeeping pods | TCP 8443 (TLS) | Operations Dashboard inter-pod communication
Deployed capabilities (all namespaces) | Store master and data node pods | TCP 9200 (TLS) | Distributed tracing data
Same namespace | Store master and data node pods | TCP 9300 (TLS) | Operations Dashboard Store inter-pod communication
Same namespace | Configuration Database pods | TCP 1527 (TLS) | Operations Dashboard inter-pod communication
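
For illustration, the first row of Table 1 could be expressed as the following NetworkPolicy. This is a minimal sketch, assuming a hypothetical app: od-frontend pod label; the policies shipped with the product may be named and structured differently:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: od-frontend              # hypothetical label for the front end pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        # the Ingress Controller namespace label is taken from Table 1
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
      ports:
        - protocol: TCP
          port: 8443                # TLS traffic to the Web Console
```

The remaining rows follow the same pattern, changing only the source selector, the destination podSelector, and the port.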

The following table describes the outgoing network connections from Operations Dashboard pods. Outgoing connectivity is not enforced by NetworkPolicy objects. If you choose to implement outgoing network restrictions, make sure that you allow the following connections; a sketch of such an egress policy follows the table.

Table 2. Outgoing network connections

Source | Destination | Type | Description
Operations Dashboard front end pods | ICP4I Platform Services | TCP 3000 (TLS) | Internal API invocations (required)
Operations Dashboard pods | SMTP server as configured in SMTP system parameters | TCP, port as configured in SMTP system parameters | Sending report and alert emails (optional)
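
If you decide to restrict outgoing traffic, an egress policy for the first row of Table 2 could look like the following sketch. The pod label and the Platform Services namespace name are assumptions; adjust them to your environment. Note that once egress is restricted, DNS resolution usually needs an explicit allowance as well:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-frontend-egress     # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: od-frontend               # hypothetical label for the front end pods
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              # assumes Platform Services run in the ibm-common-services namespace
              kubernetes.io/metadata.name: ibm-common-services
      ports:
        - protocol: TCP
          port: 3000                 # internal API invocations (required)
    - ports:                         # allow DNS so in-cluster names still resolve
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```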