I am referring to the standard form of database disaster recovery, in which the production system is duplicated at a second "standby" site. Changes made at the production site are replayed at the standby site, keeping it more or less up to date. This hardware and software, often costing as much as the production system itself, only gets used when a major disaster forces it to take over the function of the production site. Fortunately such events are quite rare. In my experience, that very rarity makes it psychologically difficult to spend all of that money on equipment that will "probably never get used". The decision is even more onerous when the database is clustered and several servers are involved. So what can we do?
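The shape of that arrangement can be sketched in a few lines. This is a toy illustration only, assuming a simple key-value store and synchronous change shipping; the class and names are hypothetical, not any real DB2 API.

```python
# Toy sketch of standby-site disaster recovery: every change committed
# at production is shipped to the standby and replayed there.

class Site:
    """A database site that applies a stream of change records."""
    def __init__(self, name):
        self.name = name
        self.data = {}   # current state
        self.log = []    # applied change history

    def apply(self, change):
        key, value = change
        self.data[key] = value
        self.log.append(change)

production = Site("production")
standby = Site("standby")

# Ship each production change to the standby, keeping it up to date.
for change in [("acct:1", 100), ("acct:2", 250), ("acct:1", 75)]:
    production.apply(change)
    standby.apply(change)  # replayed at the standby site

# If disaster strikes, the standby already holds the production state.
assert standby.data == production.data
```

The expensive part, of course, is that the `standby` object in real life is a second building full of hardware, sitting idle until the day it is needed.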
Those clever folks at the IBM labs have done it again. Why not simply stretch the cluster so that half sits in one data center and half in another? If one data center fails, for whatever reason, the application keeps running. The best ideas are usually simple! Of course we retain the existing features of pureScale: high availability, capacity on demand, and so on. This is the geographically dispersed pureScale cluster (GDPC). More details are available in this white paper.
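The stretched-cluster idea can also be sketched in miniature. Again this is a hypothetical toy, not pureScale itself: members are simply split across two sites, and losing a site leaves the surviving members to carry the workload.

```python
# Toy sketch of a stretched cluster: members are split across two data
# centers, so the loss of one site does not stop the application.

cluster = {
    "dc1": ["member0", "member1"],
    "dc2": ["member2", "member3"],
}

def surviving_members(failed_site):
    """Members still available after an entire site fails."""
    return [m for site, members in cluster.items()
            if site != failed_site
            for m in members]

# Site dc1 is lost; the members in dc2 keep serving requests.
print(surviving_members("dc1"))
```

In the real product the hard problems are elsewhere, in keeping the shared data and locking consistent across the distance between the sites, but the topology really is this simple.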