Load balancing

Use multiple protocol nodes to increase the throughput and performance of Object storage by serving more requests in parallel and distributing the workload across all protocol nodes.

Certain steps might be necessary to ensure that client requests are distributed among all protocol nodes. Object store endpoint URLs stored in Keystone contain the destination host name or IP address. Therefore, it is important to ensure that requests sent to that destination are distributed among all the protocol node addresses. This endpoint address is the value that is specified by the --endpoint parameter when you set up Object storage with the installation toolkit.

If the endpoint host name resolves to a single protocol IP address (such as 192.168.1.1), all client requests are sent to the protocol node that is associated with that address, and the remaining protocol nodes receive no Object traffic. Make sure that requests are distributed among all the protocol nodes instead of a single protocol node.
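To see how the endpoint host name currently resolves, you can query DNS from any client. A minimal Python sketch (the host name shown is a hypothetical placeholder, not a value from this document):

```python
import socket

def resolve_addresses(hostname, port=443):
    """Return the unique IP addresses that the host name resolves to."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the address.
    return sorted({info[4][0] for info in infos})

# Example with a hypothetical endpoint host name:
#   resolve_addresses("protocol-cluster.example.com")
# If only one address is returned, all object requests land on one protocol node.
```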

One way to accomplish this is to use a load balancer, such as HAProxy. In this case, the endpoint destination is configured to resolve to the load balancer, and the load balancer forwards each incoming request to one of the protocol nodes, based on a balancing strategy. By configuring the load balancer with all of the protocol IP addresses, you allow client requests to be spread across the entire set of protocol nodes.
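An HAProxy configuration for this setup might look like the following sketch. The node addresses extend the 192.168.1.1 example used earlier; the listening port (8080) and server names are assumptions, so substitute the port and addresses that match your deployment:

```
frontend object_frontend
    bind *:8080
    default_backend protocol_nodes

backend protocol_nodes
    # Cycle requests across all protocol nodes; "check" enables health checks
    # so that a failed node is removed from rotation.
    balance roundrobin
    server protocolnode1 192.168.1.1:8080 check
    server protocolnode2 192.168.1.2:8080 check
    server protocolnode3 192.168.1.3:8080 check
```

With this configuration, the endpoint host name resolves to the HAProxy host, and a node failure is handled by the health checks rather than by the clients.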

Another common solution is to use DNS Round Robin. In this case, a DNS server is configured so that a single host name is associated with a set of addresses. When a client does a DNS lookup of the host name, one address from the set is returned. By configuring the DNS Round Robin service to associate the set of protocol IP addresses with the endpoint host name, client requests are distributed among all protocol nodes: as each client initiates a request to the endpoint host name, the DNS service returns an IP address by cycling through the set of protocol IP addresses.
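In a BIND-style zone file, DNS Round Robin amounts to defining multiple A records for one name. A sketch, again extending the 192.168.1.1 example (the zone and host name are hypothetical):

```
; Multiple A records for the endpoint name; successive lookups
; cycle through these addresses.
protocol-cluster    IN  A   192.168.1.1
protocol-cluster    IN  A   192.168.1.2
protocol-cluster    IN  A   192.168.1.3
```

Unlike a load balancer, plain DNS Round Robin does not check node health, so a failed protocol node continues to receive a share of requests until the record is removed.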