Optional: Creating load balancer resources for content services containers
You can use different load-balancing strategies to manage requests between the Content Platform Engine, clients, and the database.
About this task
When you configure load balancer constructs to control inter-process communications, account for the performance variations that frequently changing workloads cause in the environment. Overly aggressive timeouts can sever connections prematurely and cause unwarranted failures. Similarly, keep-alive settings that do not hold a connection open long enough for an operation to complete cause those connections to close too early. For example, the following settings, shown in HAProxy syntax, give long-running operations time to complete:
defaults
  timeout http-request 30s        # time allowed to receive a complete HTTP request
  timeout client 5m               # maximum inactivity on the client side
  timeout server 5m               # maximum inactivity on the server side
  timeout http-keep-alive 120s    # how long to wait for the next request on an idle keep-alive connection
Some load balancers can automatically resend a request when a service times out, before control is returned to the client. Because the Content Platform Engine requires the client to participate in any retry of an API-based operation, using an automatic retry feature that excludes the client is not recommended.
For example, the HAProxy redispatch option must be disabled for the components under the FileNet® Content Manager area:

no option redispatch
Use the features of the load balancer and proxy server in your environment to localize these settings to the services that support the FileNet Content Manager components, so that the tuning does not cause negative side effects in other components, as shown in the sketch that follows.
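The following HAProxy configuration is a minimal sketch of this localization. The frontend and backend names, the server addresses, and the certificate path are placeholders for your environment; the paths match the FileNet Content Manager paths used in the Ingress sample later in this topic:

frontend content_services
  bind :443 ssl crt /etc/haproxy/certs/site.pem
  # Send only the FileNet Content Manager paths to the tuned backend.
  acl is_filenet path_beg /acce /P8CE /FileNet /wsi /peengine
  use_backend filenet_backend if is_filenet
  default_backend other_backend

backend filenet_backend
  # Localized overrides: generous timeouts for long-running operations.
  timeout server 5m
  timeout http-keep-alive 120s
  # Do not resend a timed-out request to another server without the client.
  no option redispatch
  server cpe1 10.0.0.11:9443 ssl verify none check
  server cpe2 10.0.0.12:9443 ssl verify none check

backend other_backend
  # Other components keep the values from the defaults section.
  server app1 10.0.0.21:8080 check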
Connections
For tools that are accessed by a browser, for example the Administration Console for Content Platform Engine, a default cookie is created by OpenShift to enforce that the session is sticky. If the pod that the session is communicating with becomes unavailable, the session is automatically associated with another pod. Because of a built-in lag in metadata synchronization across the Content Platform Engine nodes, administrative applications such as the Administration Console for Content Platform Engine and the FileNet® Deployment Manager can experience issues when a request is automatically routed to a new pod. Errors might result, and a manual retry of the failed operation might be necessary.

You can set the name of the sticky cookie by adding the router.openshift.io/cookie_name annotation to the route, for example:

router.openshift.io/cookie_name: sticky_cookie
To capture the sticky cookie for use with a command-line client such as curl, use the following syntax:

curl <URL for the CPE route> -k -c <full path to a file to write the cookie to>

For example:

curl https://project1-cpe-route.apps.cluster.mycorp.com -k -c /tmp/sticky_cookie
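Subsequent requests can then present the saved cookie with the -b option so that they are routed to the same pod. The route URL and the /acce path here are the same illustrative values:

curl https://project1-cpe-route.apps.cluster.mycorp.com/acce -k -b /tmp/sticky_cookie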
Other Kubernetes platforms
For deployments on other certified Kubernetes platforms, a sticky connection (session affinity) can be achieved in a number of ways. The most common is through features that are provided by the Ingress controller associated with the Kubernetes cluster. A number of Ingress controllers are available, as documented on the kubernetes.io site.
For example, the NGINX Ingress controller provides a number of options, as described in Sticky Sessions. Choose the method that provides maximum stickiness, as in the following example.
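As a sketch, the following NGINX Ingress annotations enable cookie-based affinity. The cookie name and expiry are placeholder values chosen to match the sample configuration later in this topic:

nginx.ingress.kubernetes.io/affinity: "cookie"
# persistent mode keeps a session on its pod even when the set of pods changes
nginx.ingress.kubernetes.io/affinity-mode: "persistent"
nginx.ingress.kubernetes.io/session-cookie-name: "cp4acookie"
nginx.ingress.kubernetes.io/session-cookie-expires: "7300"
nginx.ingress.kubernetes.io/session-cookie-max-age: "7300"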
Sample load balancer configuration
For more information about Ingress, see Setting up Ingress in the IBM Cloud® documentation.
The following load balancer configuration is provided as an example. If you want to use a load balancer configuration, you must change this sample to create one that is appropriate for your cluster environment. Note that the sample uses the deprecated extensions/v1beta1 Ingress API; on Kubernetes 1.22 and later, you must convert it to networking.k8s.io/v1, which replaces the serviceName and servicePort fields with a service object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.bluemix.net/client-max-body-size: size=100m
    ingress.bluemix.net/proxy-connect-timeout: serviceName=automation-cpe-svc timeout=300s;serviceName=automation-navigator-svc timeout=300s;serviceName=automation-navigator-oidc-svc timeout=300s;serviceName=automation-graphql-svc timeout=300s;serviceName=automation-cmis-svc timeout=300s;serviceName=automation-es-svc timeout=300s;
    ingress.bluemix.net/proxy-read-timeout: serviceName=automation-cpe-svc timeout=300s;serviceName=automation-navigator-svc timeout=300s;serviceName=automation-navigator-oidc-svc timeout=300s;serviceName=automation-graphql-svc timeout=300s;serviceName=automation-cmis-svc timeout=300s;serviceName=automation-es-svc timeout=300s;
    ingress.bluemix.net/redirect-to-https: "True"
    ingress.bluemix.net/ssl-services: ssl-service=automation-cpe-svc;ssl-service=automation-navigator-svc;ssl-service=automation-navigator-oidc-svc;ssl-service=automation-graphql-svc;ssl-service=automation-cmis-svc;ssl-service=automation-es-svc;
    ingress.bluemix.net/sticky-cookie-services: serviceName=automation-cpe-svc name=cp4acookie expires=7300s path=/acce hash=sha1;serviceName=automation-navigator-svc name=cp4acookie expires=7300s path=/navigator hash=sha1;serviceName=automation-graphql-svc name=cp4acookie expires=7300s path=/content-services-graphql hash=sha1;serviceName=automation-es-svc name=cp4acookie expires=7300s path=/contentapi hash=sha1;serviceName=automation-cmis-svc name=cp4acookie expires=7300s path=/openfncmis_wlp hash=sha1;
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.class: nginx
  name: automation-ingress
  namespace: my_project
spec:
  rules:
  - host: <Ingress subdomain>
    http:
      paths:
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /acce
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /P8CE
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /FileNet
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /wsi
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /peengine
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /pewsi
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /ibmccepo
      - backend:
          serviceName: automation-cpe-svc
          servicePort: 9443
        path: /restReg
      - backend:
          serviceName: automation-navigator-svc
          servicePort: 9443
        path: /sync
      - backend:
          serviceName: automation-navigator-svc
          servicePort: 9443
        path: /navigator
      - backend:
          serviceName: automation-graphql-svc
          servicePort: 9443
        path: /content-services-graphql
      - backend:
          serviceName: automation-cmis-svc
          servicePort: 9443
        path: /openfncmis_wlp
      - backend:
          serviceName: automation-es-svc
          servicePort: 9443
        path: /contentapi
  tls:
  - hosts:
    - <Ingress subdomain>
    secretName: <Secret Name>
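After you adapt the sample to your cluster environment, save it to a file and create the resource with kubectl; the file name here is illustrative:

kubectl apply -f automation-ingress.yaml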