Application-Level Failover (Active/Active Setup) in a Clustered Installation
Another failover approach is to create a cluster in which the participating servers are "active/active," meaning that all nodes are working all the time and sharing the workload.
This approach is implemented at a higher level in the software stack, so it generally is not specific to any particular platform or operating system and usually does not require deep technical knowledge to configure. It also provides instant failover, because both nodes are always active. It can be more cost effective than other approaches as well, because the same hardware that provides failover also provides increased performance (except when there is a failure).
Clustering provides this kind of solution: all of the nodes participating in the cluster are available all the time. In an application cluster, nothing needs to be shared between the nodes except the database and, if file system storage is used, the file system; everything else can be configured separately. Performance and availability for some kinds of business processes can be further enhanced by using a clustered file system such as GPFS, Sistina, or Lustre.
Failover and clustering can be combined to give each node in a cluster a "failover node." This approach works very well when combined with test/development servers and/or disaster recovery. While complex, this kind of arrangement can provide extremely high levels of availability and other needed services across each tier of the deployment:
- Web servers
- Database servers
- Application servers
In a cluster, administration and performance are enhanced by using a shared installation disk, along with shared disks for the directories used by file system adapters. This can be accomplished with a cluster file system, such as CFS, GPFS, or Sistina; with one of the many highly available file servers (HA-NFS, for example); or with a Network Attached Storage device that supports failover.
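As a minimal sketch of the HA-NFS option, assuming the shared installation disk is exported from a highly available NFS server (the server name hanfs1 and both paths are hypothetical), each cluster node might mount it like this:

```
# /etc/fstab on every cluster node: mount the shared installation disk
hanfs1:/export/install  /shared/install  nfs  rw,hard,intr  0 0
```

A cluster file system such as GPFS or Sistina would replace this NFS entry with its own mount type.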
When using NFS to share files in a clustered environment, take care to reduce NFS use where possible. You can arrange the NFS directory organization so that disk access is local in every case except the initial step, when a business process is distributed to a new cluster node. To do this, point the document_dir property in jdbc.properties to the local disk on each box, and cross-mount that local disk to the other cluster nodes. The document_dir property tells the system where to write files when file system storage is used; when a file is read, the full path to the file is used instead of the document_dir property.
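A minimal sketch of that layout for a two-node cluster (the node names node1/node2 and the /data paths are hypothetical):

```
# jdbc.properties on node1: write payloads to the local disk
document_dir=/data/node1

# jdbc.properties on node2
document_dir=/data/node2

# /etc/fstab on node1: cross-mount node2's local disk over NFS
node2:/data/node2  /data/node2  nfs  rw,hard,intr  0 0

# /etc/fstab on node2: cross-mount node1's local disk over NFS
node1:/data/node1  /data/node1  nfs  rw,hard,intr  0 0
```

With this arrangement, writes always go to local disk; NFS is traversed only when the other node needs to read a payload.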
For example, when a file is written on Node1 and document_dir on Node1 is /data/node1, the full path to the file is /data/node1/file1, and that full path is stored in the database. When either node reads the payload, the full path is read from the database and the file is accessed by that path. This works as long as both nodes have access to the path /data/node1.
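The same flow, sketched as a shell session (node names and the file name are hypothetical):

```
# Node1 writes the payload under its local document_dir;
# the full path /data/node1/file1 is what gets stored in the database.
node1$ echo "payload" > /data/node1/file1

# Node2 reads the full path retrieved from the database; the read
# succeeds because /data/node1 is cross-mounted on Node2 via NFS.
node2$ cat /data/node1/file1
payload
```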