Zones aren't just for multiple data centers any more!
TedKirby
Zones are a WebSphere eXtreme Scale feature for controlling shard placement in a grid. The classic example and use case for zones is when you have two (or more) data centers and you want your grid spread across them to allow for recovery from a data center failure. In a typical case, you would define your grid to have 1 synchronous replica placed in the same data center as the primary shard, and 1 asynchronous replica placed in another data center. The thinking is that network access is fast inside each data center, but slower between data centers. You accomplish this placement by defining each of your data centers as a unique zone, and using zone rules to put primary and synchronous replica shards in the same zone, but asynchronous replica shards in a separate zone. (For a more detailed discussion, see this article. It starts with a discussion of using eXtreme Scale for HTTP session management, which is a great way to provide session failover to remote data centers. Zones are discussed at the end of the article.)
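To make this concrete, here is roughly what that placement looks like in a deployment policy descriptor. Treat it as a minimal sketch rather than a drop-in file: the grid name (SessionGrid), map name (SessionMap), zone names, and partition count are invented, and this simple variant pins all primaries to DataCenterA; the zones documentation for your release covers symmetric and more advanced rule forms.

    <?xml version="1.0" encoding="UTF-8"?>
    <deploymentPolicy xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <objectgridDeployment objectgridName="SessionGrid">
        <mapSet name="mapSet" numberOfPartitions="13"
                minSyncReplicas="1" maxSyncReplicas="1" maxAsyncReplicas="1">
          <map ref="SessionMap"/>
          <zoneMetadata>
            <!-- Primary (P) and synchronous replica (S) stay together in DataCenterA. -->
            <shardMapping shard="P" zoneRuleRef="primarySyncRule"/>
            <shardMapping shard="S" zoneRuleRef="primarySyncRule"/>
            <!-- Asynchronous replica (A) goes to the other data center. -->
            <shardMapping shard="A" zoneRuleRef="asyncRule"/>
            <zoneRule name="primarySyncRule" exclusivePlacement="false">
              <zone name="DataCenterA"/>
            </zoneRule>
            <zoneRule name="asyncRule" exclusivePlacement="false">
              <zone name="DataCenterB"/>
            </zoneRule>
          </zoneMetadata>
        </mapSet>
      </objectgridDeployment>
    </deploymentPolicy>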
We have recently come across two additional use cases for zones.
1. Rolling Upgrades. Say that, for operational reasons, you want to apply rolling upgrades to your machines: maintenance that requires a restart, for example. Let's say you have a grid spread across 20 machines, defined with 1 synchronous replica. You want to shut down 10 of the machines at a time for the maintenance. When you shut down 10 machines, you need to ensure that no partition has both its primary and replica shards on the machines being shut down; otherwise you will lose that partition's data from your grid. How do we do it?
Zones to the rescue! Put 10 machines in zone1 and the other 10 in zone2. With zone rules, put primaries in zone1 and replicas in zone2. Problem solved! This configuration ensures that, for each partition, the primary and replica shards are in different zones. You can then restart each zone of 10 machines in turn for the upgrade without losing any grid data.
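In the same deployment-policy style as the sketch above (zone and rule names again invented), the zone metadata for this two-zone arrangement might look something like this:

      <zoneMetadata>
        <!-- All primaries in zone1, all synchronous replicas in zone2. -->
        <shardMapping shard="P" zoneRuleRef="primariesRule"/>
        <shardMapping shard="S" zoneRuleRef="replicasRule"/>
        <zoneRule name="primariesRule" exclusivePlacement="false">
          <zone name="zone1"/>
        </zoneRule>
        <zoneRule name="replicasRule" exclusivePlacement="false">
          <zone name="zone2"/>
        </zoneRule>
      </zoneMetadata>

Each container JVM announces its zone when it starts, so the 10 machines in zone1 would start their containers with that zone name (for example, via the -zone option of startOgServer in a stand-alone install).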
However, this two-zone scheme is not quite satisfactory, since while one zone is down you have no replica shards, and hence no backup if a machine fails unexpectedly. Not to worry. Let's define 3 zones of 7, 7, and 6 machines, respectively. Let's also define 2 replica shards instead of 1. Using zone rules, we ensure that each partition's primary and two replica shards are placed in three separate zones. Now, when we shut down any one zone (6 or 7 machines), each partition still has a surviving primary and a replica shard, in separate zones. When all machines are back up after cycling through the zones, the grid restores itself so that each partition again has 3 shards, a primary and 2 replicas, each in a separate zone.
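The three-zone, two-replica variant is simpler to express: map the primary and replica shards to a single exclusive zone rule, which forces each of a partition's shards into a different zone from the rule's list. Again a sketch with invented names; the replica count is raised on the mapSet:

      <mapSet name="mapSet" numberOfPartitions="13"
              minSyncReplicas="1" maxSyncReplicas="2">
        <map ref="SessionMap"/>
        <zoneMetadata>
          <!-- Exclusive placement: shards of a partition mapped to this
               rule must each land in a different zone from the list. -->
          <shardMapping shard="P" zoneRuleRef="stripeRule"/>
          <shardMapping shard="S" zoneRuleRef="stripeRule"/>
          <zoneRule name="stripeRule" exclusivePlacement="true">
            <zone name="zone1"/>
            <zone name="zone2"/>
            <zone name="zone3"/>
          </zoneRule>
        </zoneMetadata>
      </mapSet>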
2. Virtualization. Cloud computing and virtualization are all the rage these days. By default, eXtreme Scale ensures that two shards of the same partition are never placed on the same IP address. But when deploying on virtual images (say, with VMware), many server instances, each with its own IP address, may run on the same physical machine, so that check alone is not enough. How can we ensure that primary and replica shards land on separate physical machines? Once again, zones solve the problem: group your physical machines into zones, and use zone placement rules to keep primary and replica shards in separate zones.
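As a sketch of how the zone assignment happens at startup in a stand-alone install (host names and file names invented; verify the startOgServer options against your release), every VM on a given physical machine starts its container with that machine's zone name, and the deployment policy then uses an exclusive zone rule like the striping example above:

    # On every VM running on physical machine A:
    startOgServer.sh containerA1 -zone machineA \
        -catalogServiceEndPoints cathost:2809 \
        -objectgridFile objectgrid.xml -deploymentPolicyFile deployment.xml

    # On every VM running on physical machine B:
    startOgServer.sh containerB1 -zone machineB \
        -catalogServiceEndPoints cathost:2809 \
        -objectgridFile objectgrid.xml -deploymentPolicyFile deployment.xml

(When the containers run inside WebSphere Application Server, zones can instead be defined by placing nodes in node groups whose names begin with ReplicationZone.)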
These are but two examples of new uses for zones. If you have more examples, please let me know!
For information and links to all things eXtreme Scale, see the Getting Started with eXtreme Scale Wiki.
Get involved with the WebSphere Emerging Technologies Community.