IBM FileNet Content Manager adopts a cloud strategy based on Docker containers and on Kubernetes container orchestration platforms such as IBM Cloud Private. By combining containers and Kubernetes, FileNet customers can take their enterprise content management platform to the next level: speeding up production deployments, gaining dynamic scalability, improving resiliency, increasing uptime, reducing total cost of ownership (TCO), and boosting overall return on investment (ROI).
All FileNet Content Manager containerized components, such as Content Platform Engine, are fully built and ready to deploy. Think of a Docker container as an advanced version of a virtual machine image with better portability and runtime efficiency. FileNet Content Manager containers use IBM WebSphere Liberty, an application server built for the cloud.
Liberty has a smaller footprint because, unlike traditional WebSphere Application Server, it lets users install and enable only the features they specifically need and use. This streamlining gives applications running on Liberty quicker startup times and improved runtime efficiency. Comparing Content Platform Engine startup time on traditional WebSphere Application Server versus Liberty, Liberty startups are up to three times faster.
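Feature enablement in Liberty is driven by its server.xml configuration. As an illustrative sketch (the exact feature list a FileNet container enables varies), a minimal configuration might turn on only the features an application actually uses:

```xml
<server description="Minimal Liberty server (illustrative example)">
  <!-- Only the features listed here are loaded and started;
       anything not listed adds no footprint or startup cost. -->
  <featureManager>
    <feature>servlet-4.0</feature>
    <feature>jdbc-4.2</feature>
  </featureManager>

  <!-- Default HTTP endpoint on Liberty's standard ports. -->
  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
</server>
```

Because unused features are never loaded, the server starts and runs with only the code paths the application needs.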
Kubernetes is now the container orchestration platform of choice because of its success in container deployments and the support it receives from the Cloud Native Computing Foundation and other major cloud companies, including IBM. Kubernetes can autoscale applications deployed on its platform simply through an autoscale policy. This feature gives applications such as IBM Content Navigator the ability to scale out dynamically as workload increases.
For example, during idle time, only two IBM Content Navigator pods might be running. At the beginning of the workday, as users start logging in to IBM Content Navigator, the number of containers can increase to accommodate the workload. The autoscale policy can also scale back in when the workday ends and the workload no longer demands the additional containers. This scaling makes more efficient use of infrastructure resources and reduces the need for a DevOps team to constantly monitor workload.
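In Kubernetes terms, such a policy is a HorizontalPodAutoscaler. The sketch below assumes an IBM Content Navigator Deployment named `navigator`; the name, replica counts, and CPU target are illustrative, not shipped defaults:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: navigator
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: navigator             # hypothetical Deployment name
  minReplicas: 2                # idle baseline from the example above
  maxReplicas: 6                # upper bound during peak workday load
  targetCPUUtilizationPercentage: 70
```

The same policy can also be created in one step with `kubectl autoscale deployment navigator --min=2 --max=6 --cpu-percent=70`.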
Additionally, the Kubernetes kubelet process, which runs on every node, proactively monitors pods and containers. If a container stops, the kubelet automatically reinstantiates it. If a worker node stops, the containers that were running on that node are started on other available worker nodes. The kubelet can also perform health checks by using a liveness probe, automatically terminating and reinstantiating a container if the liveness check fails. This capability helps ensure that containers running on the platform can serve requests and do not get stuck in a hung or “zombie” state.
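A liveness probe is declared on the container specification inside a pod template. This excerpt is a sketch; the container name, image, probe path, and timings are illustrative assumptions, not the actual FileNet settings:

```yaml
# Container entry inside a Deployment's pod template (spec.template.spec).
containers:
- name: navigator                # hypothetical container name
  image: registry.example.com/navigator:latest
  livenessProbe:
    httpGet:
      path: /navigator           # illustrative health endpoint
      port: 9080                 # Liberty's default HTTP port
    initialDelaySeconds: 120     # allow the application server time to start
    periodSeconds: 30            # probe every 30 seconds
    failureThreshold: 3          # restart after 3 consecutive failures
```

If three consecutive probes fail, the kubelet kills the container and starts a replacement, which is exactly the hung-container recovery described above.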
Finally, Kubernetes supports rolling updates for applications. Running a single Kubernetes CLI command to deploy a new image version results in the FileNet Content Manager components being patched or updated with zero downtime.
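A rolling update is governed by the Deployment's update strategy. In this sketch the Deployment and image names are illustrative; pointing the Deployment at a new image tag (for example, with `kubectl set image`) replaces pods gradually rather than all at once:

```yaml
# Fragment of a Deployment spec (names and image tags are illustrative).
# Trigger an update and watch it complete with, for example:
#   kubectl set image deployment/navigator navigator=registry.example.com/navigator:3.0.5
#   kubectl rollout status deployment/navigator
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never stop a serving pod before its replacement is ready
      maxSurge: 1         # start one extra pod at a time during the update
```

With `maxUnavailable: 0`, old pods keep serving requests until their replacements pass readiness checks, which is how the update completes without downtime.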