High availability considerations

Learn about the high availability options in the FileNet Content Manager container environment.

High availability topology

The following image shows a high availability topology for the FileNet Content Manager container environment.

FNCM high availability topology

To configure your environment for high availability:

  1. Distribute your pods across different worker nodes, for example by using pod anti-affinity rules or topology spread constraints (see the sketch after this list).
  2. Set up storage that includes high availability features, for example, Red Hat OpenShift Data Foundation (previously Red Hat OpenShift Container Storage).
  3. Deploy your control planes and worker nodes across multiple availability zones.
  4. Distribute your virtual machines or hardware across multiple availability zones or locations. For example, you can host your virtual machines across a private cloud and a public cloud.
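
Step 1 can be satisfied with standard Kubernetes scheduling rules. The following sketch is illustrative only: it shows a generic Deployment that uses pod anti-affinity on the well-known kubernetes.io/hostname label so that no two replicas land on the same worker node. The Deployment name, labels, and image are hypothetical placeholders; in a FileNet Content Manager deployment, the operator creates the workload objects for you.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app                                  # hypothetical name for illustration
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          affinity:
            podAntiAffinity:
              # Require that replicas with the same app label run on different worker nodes.
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: example-app
                  topologyKey: kubernetes.io/hostname
          containers:
            - name: app
              image: registry.example.com/example-app:1.0   # placeholder image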

The following diagram shows what your deployment might look like when you distribute your resources across multiple availability zones. The example deployment provides automatic recovery, no data loss, and data mirroring. Zones 1-3 in the following diagram are also known as availability zones or fault domains. Each zone has a control plane and worker nodes in Red Hat OpenShift Container Platform, and a storage instance in Red Hat OpenShift Data Foundation.

FNCM example high availability topology

The load balancer in the diagram uses the following synchronous architecture to minimize the impact of local data center failures:

  • The disaster recovery sites and availability zones can be connected by a metropolitan area network (MAN) or a campus area network (CAN).
  • Each availability zone is mapped to a fault domain.
  • An odd number of availability zones or fault domains is needed for the cluster quorum.
  • Network latency between zones usually does not exceed 5 milliseconds.
  • Red Hat OpenShift Container Platform ensures that:
    • Pods and nodes are scheduled across zones during deployment (see the zone-spreading sketch after this list).
    • Data that is stored in each Red Hat OpenShift Data Foundation instance is consistent across each zone.
    • Applications are automatically recovered without disruption across zones.
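
The zone-aware scheduling in the preceding list can also be expressed directly on a workload. As a minimal sketch, assuming a generic pod with a hypothetical app label (the FileNet Content Manager operator manages the real pod specifications), a topology spread constraint on the well-known topology.kubernetes.io/zone node label spreads replicas evenly across the availability zones:

    apiVersion: v1
    kind: Pod
    metadata:
      name: zone-spread-example                          # hypothetical name for illustration
      labels:
        app: zone-spread-example
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                     # keep the per-zone pod counts within 1 of each other
          topologyKey: topology.kubernetes.io/zone       # well-known node label for the availability zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: zone-spread-example
      containers:
        - name: app
          image: registry.example.com/example-app:1.0    # placeholder image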

High availability in FileNet P8 container components

Depending on the components that you choose and your architecture, built-in or configurable high availability features are available.

  • PostgreSQL database

    The PostgreSQL database clusters have multiple replicas. For more information about the cluster architecture, see Architecture. One database pod is designated as the primary database service, while the other database pods remain in standby status and synchronize with the data in the primary database server. If the primary database becomes unavailable, Kubernetes automatically moves the service to another instance of the database cluster, which becomes the new primary database. Production deployments use 2 replicas by default, where 1 replica is the primary and the other is the backup. Starter deployments have only 1 replica and do not support high availability. A conceptual sketch of how the service routes to the current primary follows the diagram.

    Diagram showing the high availability architecture for a PostgreSQL database
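
    Conceptually, the failover works because the client-facing Kubernetes Service selects only the pod that currently holds the primary role. The following sketch is illustrative only; the service name and labels are hypothetical, and in practice the operator that manages the PostgreSQL cluster creates and retargets the real service for you.

      apiVersion: v1
      kind: Service
      metadata:
        name: example-postgres-rw             # hypothetical name for illustration
      spec:
        selector:
          app: example-postgres               # hypothetical pod label
          role: primary                       # hypothetical role label, updated by the database operator on failover
        ports:
          - port: 5432                        # default PostgreSQL port
            targetPort: 5432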

Differences between high availability in FileNet P8 container and on-premises environments

The following table shows the differences between high availability in FileNet P8 container and on-premises environments.

Table 1. Differences between high availability in IBM FileNet Content Manager in a container environment and in an on-premises environment

  • Multiple IBM FileNet® Content Manager server instances
    • On-premises WebSphere® Application Server cluster: The server instances are based on the traditional WebSphere Application Server Network Deployment cluster. If the WebSphere Application Server Java virtual machine (JVM) crashes, it is restarted automatically. The node agent manages the crash.
    • Container environment: The server instances are based on the Liberty server. Crashed pods are restarted automatically and are managed by the Kubernetes kubelet.
  • HTTP load balancer
    • On-premises WebSphere Application Server cluster: The load balancer is provided by the IBM HTTP Server or another load balancer.
    • Container environment: The load balancer is provided by the router in Red Hat OpenShift or Kubernetes.
  • Session affinity
    • On-premises WebSphere Application Server cluster: Session affinity or session persistence is provided by the IBM HTTP Server WebSphere Application Server plug-in or through the load balancer.
    • Container environment: Session affinity is provided by Red Hat OpenShift and Kubernetes.
  • Workload manager
    • On-premises WebSphere Application Server cluster: Workload Manager provides high availability for Enterprise JavaBeans protocol routing to cluster members.
    • Container environment: Not available in this environment. The Enterprise JavaBeans protocol is not used.
  • Logging
    • On-premises WebSphere Application Server cluster: Log files are stored on the file system.
    • Container environment: Pod logs and log files can be forwarded to a logging service such as Elasticsearch, Logstash, and Kibana (ELK) or Elasticsearch, Fluentd, and Kibana (EFK).
  • Health check
    • On-premises WebSphere Application Server cluster: The node agent and the IBM HTTP Server WebSphere Application Server plug-in monitor the health status of the application server instance.
    • Container environment: Red Hat OpenShift or the Kubernetes kubelet process maintains the configured state. The liveness and readiness probes monitor the health status of the application inside the container.
  • Scalability
    • On-premises WebSphere Application Server cluster: Scale the existing cluster resources by using a dynamic cluster.
    • Container environment: Scale the existing cluster resources by using the custom resource.
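
The health check entry in the preceding table refers to standard Kubernetes liveness and readiness probes. As a minimal sketch, assuming a hypothetical application container that exposes HTTP health endpoints (the paths, port, and timing values shown here are illustrative, not the values that the FileNet Content Manager operator configures), a probe definition looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-example                               # hypothetical name for illustration
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.0   # placeholder image
          readinessProbe:                # removes the pod from service endpoints until the application is ready
            httpGet:
              path: /ready               # hypothetical endpoint
              port: 9080
            initialDelaySeconds: 60
            periodSeconds: 10
          livenessProbe:                 # restarts the container if the application stops responding
            httpGet:
              path: /health              # hypothetical endpoint
              port: 9080
            initialDelaySeconds: 120
            periodSeconds: 30
            failureThreshold: 3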