Creating load balancer resources
You can use different load-balancing strategies to manage requests to the Document Processing servers, clients, and the database.
About this task
When you configure load balancer constructs to control inter-process communications, consider how frequently changing workloads affect performance in the environment. Overly aggressive timeouts can sever connections prematurely, which can result in unwarranted failures. Similarly, keep-alive settings that do not hold a connection open long enough for operations to complete cause those connections to be closed prematurely. For example, the following timeout settings (HAProxy syntax) allow for long-running operations:
timeout http-request 30s
timeout client 5m
timeout server 5m
timeout http-keep-alive 120s
Some load balancers can automatically resend a request when a service times out before control is returned to the client. Because Document Processing requires the client to be a participant in any retry of an API-based operation, do not use an automatic retry feature that excludes the client.
The redispatch option must be disabled for the components under the Document Processing area:
no option redispatch
Use the features of the load balancer and proxy server in your environment to localize these settings to the services that support the Document Processing components, so that the tuning does not cause negative side effects in other components.
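As a sketch of how such localization can be done, the following HAProxy listen section applies the timeouts and the redispatch setting only to the Document Processing services. The section name, bind address, certificate path, and server entries are placeholder values for your environment, not part of the product configuration:

```
# Sketch only: names, addresses, and paths below are placeholders.
# Putting the tuning in a dedicated listen section keeps it from
# affecting other applications that share this load balancer.
listen adp_services
    bind :443 ssl crt /etc/haproxy/certs/adp.pem
    mode http
    # Timeouts tolerant of long-running Document Processing operations
    timeout http-request 30s
    timeout client 5m
    timeout server 5m
    timeout http-keep-alive 120s
    # Do not resend a timed-out request to another server; Document
    # Processing requires the client to participate in any retry.
    no option redispatch
    server adp1 10.0.0.11:443 check ssl verify none
    server adp2 10.0.0.12:443 check ssl verify none
```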
Connections
For tools that are accessed by a browser, for example the Document Processing Designer, a default cookie is created by OpenShift to enforce that the session is sticky. If the pod that the session is communicating with becomes unavailable, the session is automatically associated with another pod. A built-in lag in metadata synchronization across the Document Processing nodes means that administrative applications can experience issues if the request is automatically routed to a new pod. Errors might result and a manual retry of the failed operation might be necessary.
To capture the sticky cookie so that it can be reused in later requests, pass the -c option to curl with a file to write the cookie to:
curl <URL for the ADP route> -k -c <full path to a file to write the cookie to>
curl http://project1-adp-route.apps.cluster.mycorp.com -k -c /tmp/sticky_cookie
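A subsequent request can then present the saved cookie with the curl -b option, so that the load balancer routes it to the same pod. The route URL and cookie file path repeat the example values used above:

```
curl http://project1-adp-route.apps.cluster.mycorp.com -k -b /tmp/sticky_cookie
```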
Other Kubernetes platforms
For deployments on other certified Kubernetes platforms, a sticky connection (session affinity) can be achieved in a number of ways, most commonly through features that are provided by the Ingress controller associated with the Kubernetes cluster. A number of Ingress controllers are available, as documented on the kubernetes.io site.
For example, the NGINX Ingress controller provides several options, as described in Sticky Sessions. Choose the method that provides the strongest session stickiness.
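As an illustration, a minimal Ingress resource that enables cookie-based affinity with the NGINX Ingress controller might look like the following. The Ingress name, host, cookie name, and backend service are hypothetical values for your environment; only the nginx.ingress.kubernetes.io annotation keys come from the NGINX Ingress controller documentation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: adp-ingress                 # hypothetical name
  annotations:
    # Enable cookie-based session affinity in the NGINX Ingress controller
    nginx.ingress.kubernetes.io/affinity: "cookie"
    # "persistent" keeps a client pinned to the same pod even when the
    # set of endpoints changes, which maximizes stickiness
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "adp-sticky"
spec:
  rules:
    - host: project1-adp.apps.cluster.mycorp.com   # example host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: adp-service   # hypothetical service name
                port:
                  number: 443
```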
What to do next
To create the required databases for Document Processing, see Creating the databases for Document Processing.