Zone-preferred routing

With zone-preferred routing, you can define how WebSphere® eXtreme Scale directs transactions to zones.

You have control over where the shards of a data grid are placed. See Zones for replica placement for more information about basic scenarios and how to configure your deployment policy accordingly.

Zone-preferred routing gives WebSphere eXtreme Scale clients the capability to specify a preference for a particular zone or set of zones. As a result, client transactions are routed to preferred zones before attempting to route to any other zone.

Requirements for zone-preferred routing

Before attempting zone-preferred routing, ensure that the application is able to satisfy the requirements of your scenario.

Per-container partition placement is required for zone-preferred routing. This placement strategy is a good fit for applications that store session data in the ObjectGrid. The default partition placement strategy for WebSphere eXtreme Scale is fixed-partition: keys are hashed at transaction commit time to determine which partition houses the key-value pair of the map.
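
With fixed-partition placement, the key-to-partition mapping behaves conceptually like the following sketch. This is an illustration only; the partitionFor method is a hypothetical helper, not a product API, and the actual hashing logic is internal to WebSphere eXtreme Scale.

// Conceptual sketch only: approximates fixed-partition routing.
// numberOfPartitions comes from the deployment policy; the real
// hashing implementation is internal to WebSphere eXtreme Scale.
int partitionFor(Object key, int numberOfPartitions) {
	// Mask the sign bit so that the result is never negative.
	return (key.hashCode() & Integer.MAX_VALUE) % numberOfPartitions;
}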

Per-container placement assigns your data to a random partition at transaction commit time through the SessionHandle object. You must be able to reconstruct the SessionHandle object to retrieve your data from the data grid.

You can use zones to have more control over where primary shards and replica shards are placed in your domain. Using multiple zones in your deployment is advantageous when your data is in multiple physical locations. Geographically separating primaries and replicas is a way to ensure that catastrophic loss of one data center does not affect the availability of the data.

When data is spread across multiple zones, it is likely that clients are also spread across the topology. Routing clients to their local zone or data center has the obvious performance benefit of reduced network latency. Route clients to local zones or data centers when possible.

Configuring your topology for zone-preferred routing

Consider the following scenario. You have two data centers: Chicago and London. To minimize response time of clients, you want clients to read and write data to their local data center.

Primary shards must be placed in each data center so that transactions can be written locally from each location. Clients must be aware of zones to route to the local zone.

Per-container placement locates new primary shards on each container that is started. Replicas are placed according to zone and placement rules that are specified by the deployment policy. By default, a replica is placed in a different zone than its primary shard. Consider the following deployment policy for this scenario.

<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
	xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
	<objectgridDeployment objectgridName="universe">
		<mapSet name="mapSet1" placementStrategy="PER_CONTAINER"
			numberOfPartitions="3" maxAsyncReplicas="1">
			<map ref="planet" />
		</mapSet>
	</objectgridDeployment>
</deploymentPolicy>
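
The start commands that follow also reference an ObjectGrid descriptor file, universeGrid.xml, which defines the universe grid and its planet map. As a sketch, a minimal descriptor that matches this deployment policy might look like the following example; the exact file contents here are an assumption for illustration:

<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
	xmlns="http://ibm.com/ws/objectgrid/config">
	<objectGrids>
		<objectGrid name="universe">
			<backingMap name="planet" />
		</objectGrid>
	</objectGrids>
</objectGridConfig>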

Each container that starts with the deployment policy receives three new primaries. Each primary has one asynchronous replica. Start each container with the appropriate zone name. Use the -zone parameter if you are starting your containers with the startOgServer or [Version 8.6 and later] startXsServer script.

For a Chicago container server:

  • [Linux][Unix]
    startOgServer.sh s1 -objectGridFile ../xml/universeGrid.xml 
    -deploymentPolicyFile ../xml/universeDp.xml 
    -catalogServiceEndPoints MyServer1.company.com:2809 
    -zone Chicago
  • [Windows]
    startOgServer.bat s1 -objectGridFile ../xml/universeGrid.xml 
    -deploymentPolicyFile ../xml/universeDp.xml 
    -catalogServiceEndPoints MyServer1.company.com:2809 
    -zone Chicago
  • [Version 8.6 and later] [Linux][Unix]
    startXsServer.sh s1 -objectGridFile ../xml/universeGrid.xml 
    -deploymentPolicyFile ../xml/universeDp.xml 
    -catalogServiceEndPoints MyServer1.company.com:2809 
    -zone Chicago
  • [Version 8.6 and later] [Windows]
    startXsServer.bat s1 -objectGridFile ../xml/universeGrid.xml 
    -deploymentPolicyFile ../xml/universeDp.xml 
    -catalogServiceEndPoints MyServer1.company.com:2809 
    -zone Chicago
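
Container servers in the London zone are started the same way, except that you specify -zone London. For example, on UNIX systems with Version 8.6 and later (the server name s2 is illustrative):

startXsServer.sh s2 -objectGridFile ../xml/universeGrid.xml 
-deploymentPolicyFile ../xml/universeDp.xml 
-catalogServiceEndPoints MyServer1.company.com:2809 
-zone London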

If your containers are running in WebSphere Application Server, you must create a node group and name it with the prefix ReplicationZone. Servers that are running on the nodes in these node groups are placed in the appropriate zone. For example, servers running on a Chicago node might be in a node group named ReplicationZoneChicago.

See Zones for replica placement for more information.

Primary shards for the Chicago zone have replicas in the London zone. Primary shards for the London zone have replicas in the Chicago zone.

Figure 1. Primaries and replicas in zones

Set the preferred zones for the clients. Provide a client properties file to your client Java virtual machine (JVM). Create a file named objectGridClient.properties and ensure that this file is in your classpath.

Include the preferZones property in the file. Set the property value to the appropriate zone. Clients in Chicago must have the following value in the objectGridClient.properties file:

preferZones=Chicago

The property file for London clients must contain the following value:

preferZones=London

This property instructs each client to route transactions to its local zone when possible. Data that is inserted into a primary shard in the local zone is replicated asynchronously to the foreign zone.
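
For example, a client in Chicago with objectGridClient.properties in its classpath can connect to the catalog service and obtain the grid as follows. This snippet is a sketch that reuses the catalog service endpoint from the start commands; adjust the endpoint for your environment.

// Connect to the catalog service. The objectGridClient.properties file
// on the classpath supplies preferZones=Chicago for this client.
ObjectGridManager ogManager = ObjectGridManagerFactory.getObjectGridManager();
ClientClusterContext ccContext =
	ogManager.connect("MyServer1.company.com:2809", null, null);

// Obtain a client-side reference to the "universe" data grid.
ObjectGrid objectGrid = ogManager.getObjectGrid(ccContext, "universe");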

Using the SessionHandle interface to route to the local zone

The per-container placement strategy does not use a hash-based algorithm to determine the location of your key-value pairs in the data grid. You must use SessionHandle objects to ensure that transactions are routed to the correct location when you are using this placement strategy. When a transaction is committed, a SessionHandle object is bound to the session if one has not already been set. The SessionHandle object can also be bound to the Session by calling the Session.getSessionHandle method before committing the transaction. The following code snippet shows a SessionHandle being bound before committing the transaction.

Session ogSession = objectGrid.getSession();

// binding the SessionHandle
SessionHandle sessionHandle = ogSession.getSessionHandle();

ogSession.begin();
ObjectMap map = ogSession.getMap("planet");
map.insert("planet1", "mercury");

// tran is routed to partition specified by SessionHandle
ogSession.commit();

Assume that the previous code runs on a client in your Chicago data center. Because the preferZones attribute is set to Chicago for this client, your deployment routes the transaction to one of the primary partitions in the Chicago zone: partition 0, 1, 2, 6, 7, or 8.

The SessionHandle object provides a path back to the partition that is storing this committed data. The SessionHandle object must be reused or reconstructed and set on the Session to get back to the partition containing the committed data.

ogSession.setSessionHandle(sessionHandle);
ogSession.begin();

// value returned will be "mercury"
String value = map.get("planet1");
ogSession.commit();

The transaction in this code reuses the SessionHandle object that was created during the insert transaction. The get transaction then routes to the partition that holds the inserted data. Without the SessionHandle object, the transaction cannot retrieve the inserted data.

How container and zone failures affect zone-based routing

Generally, a client with the preferZones property set routes all transactions to the specified zone or zones. However, the loss of a container results in the promotion of a replica shard to a primary shard. A client that was previously routing to partitions in the local zone must retrieve previously inserted data from the remote zone.

Consider the following scenario. A container in the Chicago zone is lost. It previously contained primaries for partitions 0, 1, and 2. The new primary shards for these partitions are then placed in the London zone because the London zone hosted the replicas for these partitions.

Any Chicago client that is using a SessionHandle object that points to one of the failed-over partitions now reroutes to London. Chicago clients that are using new SessionHandle objects route to Chicago-based primaries.

Similarly, if the entire Chicago zone is lost, all replicas in the London zone are promoted to primaries. In this scenario, all Chicago clients route their transactions to London.