Super cluster to the rescue, Part 1: Techniques to achieve extreme application scalability

Scaling up with WebSphere Proxy Server and HTTP plug-in

What if you have an application for which the client load is extreme? Due to the sheer number of clients or client requests, many application servers will be needed simply to handle the load. The common solution to such a problem would be to make use of IBM® WebSphere® Application Server Network Deployment clusters, but what if a cluster could not be made large enough to handle the required application load? This content is part of the IBM WebSphere Developer Technical Journal.


Kevin Kepros (kepros@us.ibm.com), Advisory Software Engineer, IBM

Kevin Kepros is an advisory software engineer at the IBM Software Development Lab in Rochester, Minnesota. Kevin was a lead developer on the WebSphere High Availability Manager component and a member of the WebSphere Clustering development team. Recently Kevin has taken a new position working with the IBM SPSS team on predictive analytics solutions.



Dr. Debasish Banerjee (debasish@us.ibm.com), WebSphere Consultant, IBM

Dr. Debasish Banerjee is presently a WebSphere consultant in IBM Software Services. He started his WebSphere career as the WebSphere internationalization architect. Extreme transaction processing, distributed caching, elastic computing, and cloud computing are his current areas of interest. Debasish received his Ph.D. in the field of combinator-based functional programming languages.



24 June 2009


Introduction

Application scalability is an important Quality of Service in most enterprise software topologies. For scalability, enterprise quality Java™ EE applications are commonly deployed and executed in IBM WebSphere Application Server Network Deployment clusters. However, the practical size of a cluster can be limited. What if a cluster could not be made large enough to handle the required application load?

This two-part article introduces a useful technique for achieving extreme application scalability in WebSphere Application Server that we will call the super cluster. Part 1 of this article introduces the "super cluster" technique as it applies to the HTTP plug-in and WebSphere Proxy Server. Part 2 will add a DMZ Secure Proxy Server for WebSphere Application Server, an IBM WebSphere Virtual Enterprise on demand router (ODR), and IBM WebSphere eXtreme Scale to the discussion.


Clusters

For scalability, enterprise quality Java EE applications are usually deployed and executed in WebSphere Application Server Network Deployment (hereafter referred to as Network Deployment) clusters. The client requests are routed across the cluster, thereby distributing the work load among all the application server processes that are members.

Figure 1. Client requests distributed across cluster members
  • Affinity

    If the application is designed in a stateless manner, requests can be routed to any of the Network Deployment cluster members containing the deployed application (no request affinity). However, depending on the protocol and the application design, client requests can have an affinity to a specific Network Deployment cluster member. For example, an HTTP session might have been created on the cluster member that handled the very first request, so all subsequent requests from that client should be sent back to the same cluster member. Some examples of affinities are HTTP session affinity for the HTTP protocol, SIP session affinity for the SIP protocol, transactional affinity for the IIOP protocol, and so on. Most router components can maintain the appropriate affinities while forwarding requests to the cluster members (application servers).

  • Failover

    In addition to scalability, the deployment of applications to a Network Deployment cluster can also provide high availability. If one cluster member fails, the router can direct client requests to the application on one of the other cluster members. Use of a session failover mechanism will provide a transparent failover even in the presence of HTTP or SIP sessions.

  • Administration

    Although it is theoretically possible to use non-clustered Network Deployment instances for attaining scalability in the above-mentioned scheme, the use of Network Deployment clusters provides significant advantages in administration. It is much easier to start, stop, install, uninstall, or update applications deployed in clusters than applications deployed in a number of non-clustered Network Deployment instances. In fact, the administration of an application deployed in many non-clustered Network Deployment instances can be an error-prone activity.

Cluster size limit

IBM WebSphere Application Server V6.0 introduced a component known as the high availability manager (HAM). Along with this component came a new configuration attribute known as the core group. While discussion of the high availability manager and all of the associated function is beyond the scope of this article, the core group formation rules do affect cluster size limits, and this requires a basic understanding of core groups:

  • Core group

    A core group is a static high availability domain that consists of a set of tightly coupled WebSphere Application Server processes. Every process in a WebSphere Application Server cell is a member of a core group. Core group processes open network connections to each other and use those connections to monitor and determine whether a process is running, stopped, or has failed. All the members of a core group are connected to each other in a fully meshed topology, as shown in Figure 2.

    Figure 2. WebSphere Application Server processes in a core group

    This tight coupling among JVMs provides low message latency (only one network hop between any member) and fast failure detection. However, the fully interconnected topology imposes restrictions on core group scalability. Thus, core groups do not scale to the same degree as a cell, and large cells will need to be divided into multiple core groups. Depending on the requirements of the specific WebSphere Application Server environment for inter-core group communication, individual core groups might be connected together using the core group bridge service (CGBS).

    Figure 3. Multiple core groups in a large WebSphere cell
  • Core group formation rules

    Well-formed core groups should be created following the core group formation rules, as documented in the WebSphere Application Server Information Center (see Resources). One of those formation rules states that a WebSphere Application Server cluster cannot span core groups. In other words, all members of a cluster must be members of the same core group. This rule implies that the maximum size of a WebSphere Application Server cluster is implicitly limited by the maximum size of a core group.

    Figure 4. Cluster must be a subset of a core group
  • Core group size limit

    A core group might not function properly if it contains too many members. The exact limit for the number of core group members depends on several factors, including available CPU and memory resources, network bandwidth, the number of applications, the type of applications, and so on. Therefore, it is impossible to define an absolute limit. For planning purposes, however, IBM provides these guidelines:

    • WebSphere Application Server V6.0.2: Consider multiple core groups as you approach 50 members.
    • WebSphere Application Server V6.1 or V7.0: Consider multiple core groups as you approach 100 members.

    Remember that these are simply guidelines, and only testing in your own topology can determine your exact limit. Still, these guidelines imply that a Network Deployment cluster should be limited to a maximum size of 50 to 100 members.
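
If you want to check how close an existing cell is to these guidelines, a short wsadmin (Jython) script can count the members of each configured core group. Listing 1 is a minimal sketch that uses only the generic AdminConfig API; it assumes it is run through wsadmin against the deployment manager, and the output format is purely illustrative.

Listing 1. Counting core group members with wsadmin (sketch)

# Run with: wsadmin -lang jython -f countCoreGroupMembers.py
# Sketch: list every core group in the cell and count its members, so the
# totals can be compared against the 50/100-member planning guidelines.

for cg in AdminConfig.list('CoreGroup').splitlines():
    name = AdminConfig.showAttribute(cg, 'name')
    # coreGroupServers is the list of member entries, returned as one
    # bracketed string such as "[nodeA_server1(...) nodeA_server2(...)]"
    members = AdminConfig.showAttribute(cg, 'coreGroupServers')
    count = len(members[1:-1].split())
    print '%s: %d members' % (name, count)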

The conundrum

You have seen that the maximum size of a Network Deployment cluster is implicitly governed by the maximum size of a core group, and that means a cluster is restricted to a maximum size of 50 to 100 members, depending on hardware, topology, applications, and so on. This is a sizable number, and will provide sufficient scalability for the majority of applications. However, what if your application requires extreme scalability? What if the application’s scalability requirement dictates deployment to a cluster whose size is beyond the implicit limitation? How can you deploy the application to such a large number of Network Deployment instances while retaining most of the administrative advantages of deployment in a cluster?

The answer is the super cluster.


Super cluster

While it is not very common for an application’s scalability requirement to exceed that which can be handled by a single WebSphere Application Server cluster, it can happen. For these scenarios, one technique that can be used to overcome the implicit cluster size limit is to define a super cluster or "cluster-of-clusters" topology. A super cluster is a hierarchical cluster that you can think of as a generalization of the classic WebSphere Application Server cluster.

Figure 5. Super cluster hierarchical cluster

At a certain level of abstraction in a clustered topology, some sort of router component forwards the client requests to the application that is deployed in a cluster. Consider the idea of deploying the application to more than one cluster, such that each cluster:

  • Belongs to its own core group.
  • Contains a reasonable number of members.

If a router can then be configured to forward client requests to the members of each cluster, then you have effectively addressed the cluster size limitation issue. Ideally, you would want the router to follow an appropriate load balancing strategy and also maintain the required server affinity. Thus, the quintessence of super clustering is to:

  • Deploy an application to multiple clusters (a cluster of clusters).
  • Use an appropriate router to distribute the client requests so that, from the viewpoint of the client, the two-level hierarchical cluster appears to be a flattened, single-level, traditional WebSphere Application Server cluster.

As you might imagine, super clusters do have some associated limitations:

  • At present, the super clustering technique can only be applied to the HTTP protocol, and not to other protocols, such as IIOP or SIP.
  • For the HTTP protocol, some routers can automatically forward requests to applications deployed in multiple clusters, while other routers require manual modification of the routing data to support the distribution of client requests across applications deployed in multiple clusters.
  • Unlike application deployment in a traditional cluster, application deployment in a super cluster is not necessarily a one step process, and can be a multi-step process.

As an example of how you might apply a super cluster, suppose an application needs to be run in a 120 member cluster in order to handle the required client load. For this example, you could conceivably create three new core groups, create a cluster of 40 members in each of these core groups, and deploy the application in these three clusters. Assuming the use of the familiar HTTP plug-in router, the resulting super cluster topology would look something like Figure 6.
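
If you prefer scripting to the administrative console, the creation of the core groups and clusters can be sketched in wsadmin (Jython) roughly as shown in Listing 2 for one of the three clusters. The core group, cluster, node, and member names are illustrative assumptions, and the exact parameters of createCoreGroup, createCluster, createClusterMember, and moveClusterToCoreGroup should be verified against the Information Center for your release.

Listing 2. Creating one cluster in its own core group (sketch)

# wsadmin -lang jython sketch; all names (CoreGroup2, Cluster1, node01) are placeholders.

AdminTask.createCoreGroup('[-coreGroupName CoreGroup2]')

AdminTask.createCluster('[-clusterConfig [-clusterName Cluster1 -preferLocal true]]')
for i in range(1, 41):
    AdminTask.createClusterMember('[-clusterName Cluster1 '
        '-memberConfig [-memberNode node01 -memberName member%d]]' % i)

# New clusters are created in the default core group; move the cluster into its
# own core group while all of its members are stopped.
AdminTask.moveClusterToCoreGroup(
    '[-source DefaultCoreGroup -target CoreGroup2 -clusterName Cluster1]')

AdminConfig.save()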

Figure 6. Super cluster with 120 members

As mentioned earlier, various kinds of routers can be used in a super cluster topology. The next sections offer specific details on how to configure and use different kinds of routers, along with their associated limitations and restrictions. All examples are based on WebSphere Application Server V7.


HTTP plug-in

This section explains how you can set up and configure a super cluster when the router used is the WebSphere Application Server HTTP plug-in. For the purpose of this discussion, refer to the simple two cluster example shown in Figure 7.

Figure 7. Super cluster with HTTP plug-in

To keep the sample data within a reasonable size, the sample topology is intentionally limited. However, the techniques described can easily be extended for use in much larger topologies.

The basic steps involved in configuring a super cluster topology with the HTTP plug-in router are:

  1. Create a WebSphere Application Server cluster.
  2. Install the application to the cluster.
  3. Test and verify the application.
  4. Create additional WebSphere Application Server clusters.
  5. Map application modules to each cluster.
  6. Generate the HTTP plug-in file (plugin-cfg.xml).
  7. Edit the plugin-cfg.xml file to route requests across multiple clusters.
  8. Copy the modified plugin-cfg.xml file to the appropriate Web server location.

Most of these steps are standard WebSphere Application Server administration tasks (create cluster, install application, generate plug-in, and so on) and instructions for performing these tasks are well documented in the WebSphere Application Server Information Center. However, a couple of these steps -- namely steps 5 and 7 -- might require some explanation. Let’s take a closer look, then, at these "less familiar" steps and see what’s involved for this simple example.

Mapping application modules to multiple clusters

Consider two WebSphere Application Server clusters, called Cluster1 and Cluster2:

  • Each cluster contains two members (Figure 8).
    Figure 8. Two cluster topology
  • "Ping" (Figure 9) is a simple Java EE application deployed in Cluster1.
    Figure 9. Ping application
  • The goal is to map the application modules from the Ping application to both Cluster1 and Cluster2. This is accomplished by using the administrative console to access the configuration page for the Ping application. To administer the application modules, select the Manage Modules link from the Ping application configuration page (Figure 10).
    Figure 10. Ping application configuration
  • From the Manage Modules configuration panel (Figure 11), you can map application modules to some or all of the configured clusters and servers. To map the application modules to multiple targets:
    1. Select the desired module(s).
    2. Select one or more clusters or servers (hint: hold the CTRL key to select multiple targets).
    3. Click the Apply button.
    4. Review the mappings, and if satisfied click OK.
    5. Save and synchronize the changes.
    Figure 11. Ping application – manage modules

Be aware that in addition to the two clusters, the Web module of the application is mapped to a Web server. This is because, in the example, the Web server is configured as an unmanaged node and WebSphere Application Server is used to administer the Web server.

Figure 11 shows the modules of the Ping application mapped to both configured clusters, Cluster1 and Cluster2. With the application modules mapped to multiple clusters, you can continue the process by generating and editing the plugin-cfg.xml file, covered next. If you prefer scripting, the same module mapping can also be done with wsadmin, as shown in Listing 3 below.
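
Listing 3 is a scripted equivalent of the Manage Modules panel, using the AdminApp.edit command with the MapModulesToServers option. The application name, module name, module URI, and target object names are assumptions for this example; AdminApp.view('Ping', ['-MapModulesToServers']) displays the exact module and URI strings to use for your own application.

Listing 3. Mapping the Ping application modules to both clusters (sketch)

# wsadmin -lang jython sketch: map the Ping Web module to Cluster1, Cluster2,
# and the Web server. Module name, URI, and target names below are placeholders.

targets = 'WebSphere:cell=myCell,cluster=Cluster1+' \
          'WebSphere:cell=myCell,cluster=Cluster2+' \
          'WebSphere:cell=myCell,node=webNode,server=webserver1'

AdminApp.edit('Ping', ['-MapModulesToServers',
    [['Ping Web Module', 'ping.war,WEB-INF/web.xml', targets]]])

AdminConfig.save()
# Remember to synchronize the nodes before regenerating the plug-in configuration.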

Editing the plugin-cfg.xml file

Once you have your application modules mapped to multiple clusters, the next step is to generate the plug-in configuration file (plugin-cfg.xml), which is used by the HTTP plug-in code for routing HTTP requests. Generating the HTTP plug-in configuration file is probably a familiar process if you have used the HTTP plug-in. In any event, it is a well documented process and so will not be covered in detail here. Essentially, there are options in the WebSphere Application Server administrative console to generate this file based on the configured topology. Figure 12 depicts a relevant segment of the plugin-cfg.xml generated from the application deployment topology where the Ping application is mapped to both the configured clusters. Notice that the plugin-cfg.xml file contains the definition of two clusters, each with two members.

Figure 12. plugin-cfg.xml with ServerCluster definition

The HTTP plug-in router will make routing decisions based on the client request (URL) and the information contained in the plugin-cfg.xml file. The client request supplies the HTTP plug-in router with this information:

  • Hostname
  • Port
  • URI

The HTTP plug-in implementation parses the contents of the plugin-cfg.xml file, looking for an appropriate target that can properly handle the client request. The HTTP router will send the request to the first matching target that it finds. For a cluster, the HTTP router will send the client requests to all of the Server elements embedded within the ServerCluster element. The router does not check whether the Server(s) are indeed from a configured WebSphere Application Server cluster. When routing requests, the HTTP router will use the specified LoadBalance strategy for load distribution while maintaining session affinity.

The main purpose of editing the plugin-cfg.xml file is to make the HTTP plug-in router believe that all the members of the hierarchical (super) cluster belong to a traditional (flattened) cluster. Thus, the editing involves these steps:

  1. Determine which cluster will be used by default for your application requests.
  2. Copy the server entries for the other cluster(s) over to the cluster in the plugin-cfg.xml file that was handling the client requests by default.

The easiest way to determine which cluster is being targeted is to simply test your application. Even though your application modules are mapped to multiple clusters, only one of the clusters will be used to service application requests by default. Typically, this will be the last cluster defined in plugin-cfg.xml. Once you know which cluster is the default target, you need to copy the server entries for the other clusters over to the default cluster.

  1. Copy and paste the Server elements from all the other ServerCluster elements into the corresponding ServerCluster element of the default target cluster.
  2. Copy and paste the Server entries (the Server elements that reference members by name) from all of the other PrimaryServers elements into the corresponding PrimaryServers element of the default target cluster.

A relevant segment of the updated plugin-cfg.xml file is shown in Figure 13, which shows a merge of the members from Cluster1 and Cluster2 to create a super cluster.

Figure 13. plugin-cfg.xml - Super Cluster – Cluster2

In the example above, the default target cluster was Cluster2. Hence, the Server entries from Cluster1 (both the full Server definitions in the ServerCluster element and the name references in the PrimaryServers element) are copied into Cluster2. With the modified plugin-cfg.xml file in the proper place, the HTTP plug-in router will distribute requests across all four members of the super cluster.

For uniform load balancing across all the members of the super cluster, you can change the LoadBalanceWeight attributes of the super cluster members. In this particular example -- assuming all the super cluster members to be of similar capacity -- you might use 1000, 999, 998, and 997 as the values of the LoadBalanceWeight attribute for round robin load distribution.
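
Because this merge has to be repeated every time the plug-in configuration is regenerated, you might want to script it. Listing 4 is a minimal standalone Python sketch that folds the Cluster1 entries into Cluster2 and renumbers the LoadBalanceWeight values as described above; the file names and cluster names are the ones assumed in this example, and the output should be reviewed before it is copied to the Web server.

Listing 4. Merging ServerCluster entries in plugin-cfg.xml (sketch)

# merge_plugin_cfg.py -- sketch that merges the members of one ServerCluster
# into the default target cluster of a generated plugin-cfg.xml file.
import copy
import xml.etree.ElementTree as ET

SOURCE_CLUSTER = 'Cluster1'   # cluster whose members are copied
TARGET_CLUSTER = 'Cluster2'   # default target cluster that becomes the super cluster

tree = ET.parse('plugin-cfg.xml')
clusters = dict((sc.get('Name'), sc) for sc in tree.getroot().iter('ServerCluster'))
src, dst = clusters[SOURCE_CLUSTER], clusters[TARGET_CLUSTER]

# Copy the full Server definitions, keeping them ahead of the PrimaryServers element.
primary = dst.find('PrimaryServers')
insert_at = list(dst).index(primary)
for server in src.findall('Server'):
    dst.insert(insert_at, copy.deepcopy(server))
    insert_at += 1

# Copy the name-only Server references held in the PrimaryServers element.
for ref in src.find('PrimaryServers').findall('Server'):
    primary.append(copy.deepcopy(ref))

# Assign descending weights (1000, 999, ...) for near round-robin distribution.
for weight, server in enumerate(dst.findall('Server')):
    server.set('LoadBalanceWeight', str(1000 - weight))

tree.write('plugin-cfg-merged.xml', xml_declaration=True, encoding='UTF-8')
print('Wrote plugin-cfg-merged.xml; review it, then copy it to the Web server.')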

Limitations

There are restrictions and limitations with any super cluster topology. When using the HTTP plug-in as the router you should be aware of these limitations:

  • The present WebSphere Application Server implementation supports only the HTTP protocol for a super cluster.
  • The scheme provides scalability but no session failover:
    • No session replication support.
    • You can use WebSphere eXtreme Scale or session persistence for session failover.
  • The technique requires manual administration of the plugin-cfg.xml file:
    • You must manually edit the plugin-cfg.xml file.
    • You must disable automatic plug-in file generation and propagation.
    • You must manually keep plugin-cfg.xml file in sync with topology changes.

The next section looks at the same sample scenario, but uses the WebSphere Proxy Server to distribute requests across a super cluster rather than the HTTP plug-in router.


Proxy server

Let’s look at how to set up and configure a super cluster when the router used is the WebSphere Proxy Server, using the same simple two cluster example, shown in Figure 14.

Figure 14. Super cluster with a proxy server

Along with other important functionalities, proxy servers forward HTTP or SIP requests to WebSphere Application Servers. A proxy server is just one of several server types that can be created using the WebSphere Application Server administrative console.

Figure 15. Proxy server

The steps involved in configuring a super cluster topology with a WebSphere Proxy Server are:

  1. Create a WebSphere Application Server cluster.
  2. Install the application to the cluster.
  3. Test and verify the application.
  4. Create additional WebSphere Application Server clusters.
  5. Map application modules to each cluster.
  6. Bridge all of the core groups containing the clusters and the core group containing the proxy server.

You’ll notice in these steps that there is no mention of generating or manually editing any type of router configuration file. This is because the proxy server’s routing information is obtained and updated dynamically. For super clustering, the advantage of using a proxy server in this configuration instead of the HTTP plug-in router is that there is no need to manually edit any static routing information. The trade-off is that all of the core groups containing the proxy server and any clusters where requests will be sent must be bridged together.
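
When the topology does grow to multiple core groups (step 6 above), moving the proxy server and the application clusters into their intended core groups can also be scripted; Listing 5 shows the relevant wsadmin (Jython) commands with placeholder core group, node, server, and cluster names. The core group bridge access points themselves are typically configured from the administrative console (see the core group bridge service topics referenced in Resources).

Listing 5. Placing the proxy server and clusters in core groups (sketch)

# wsadmin -lang jython sketch; all names below are placeholders, and the servers
# being moved must be stopped before they are moved between core groups.

AdminTask.moveServerToCoreGroup(
    '[-source DefaultCoreGroup -target ProxyCoreGroup -nodeName proxyNode -serverName proxy1]')

AdminTask.moveClusterToCoreGroup(
    '[-source DefaultCoreGroup -target CoreGroup1 -clusterName Cluster1]')
AdminTask.moveClusterToCoreGroup(
    '[-source DefaultCoreGroup -target CoreGroup2 -clusterName Cluster2]')

AdminConfig.save()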

Request routing

The steps to map application modules to multiple clusters were covered in detail in the earlier HTTP plug-in section, so they are not repeated here. For this example, once that task is complete, you are ready to distribute client requests across the super cluster. There is no additional configuration required because this simple example has only a single core group. Congratulations, you now have a proxy server distributing requests across a super cluster!

The proxy server will automatically route HTTP requests to all members of the super cluster, while maintaining session affinity. There is no need to manually edit any routing file. The proxy server obtains routing information dynamically via the high availability manager bulletin board service. The routing from a proxy server can be thought of as "application scoped," as opposed to the "cluster scoped" routing of the HTTP plug-in: the proxy server will route to any WebSphere Application Server in the cell where the application is deployed, including both cluster members and non-clustered application servers.

Core group bridge service

With this simple example, you did not have to deal with any core group bridge configuration. In a more realistic scenario, you would likely have multiple core groups and would be required to configure core group bridges to connect the various core groups together. This enables dynamic routing information to flow from the various clusters out to the proxy server where the routing decisions are made. If the proxy server and the target clusters are located in different core groups, then core group bridge configuration will be necessary (see Resources).

Limitations

Again, there are restrictions and limitations with any super cluster topology. When using the proxy server as the router, this is what you should be aware of:

  • For a super cluster, the proxy server can only route the HTTP protocol.
  • The scheme provides no session failover:
    • No session replication support.
    • You can use WebSphere eXtreme Scale or session persistence for session failover.
  • The technique might require core group bridge service (CGBS) configuration; CGBS is required if the proxy server and clusters (application deployment targets) are located in different core groups.

HTTP plug-in and proxy server

So far, you have seen how to configure a super cluster topology when the router is either the HTTP plug-in or the proxy server. Now, let’s examine some reasons why you might want to use both types of routers together in your super cluster topology.

Security

Using a proxy server with a super cluster topology is convenient because there is no need to manually edit and administer a static route configuration file. For security reasons, however, some users might not want to place proxy servers out in the DMZ (demilitarized zone). To alleviate this concern, it is possible to configure Web servers (with the traditional HTTP plug-in router) in the DMZ that can forward requests to the proxy server(s) placed inside the protocol firewall (Figure 16).

Figure 16. Web server in DMZ routing to proxy servers

With this configuration, Web servers in the DMZ will forward requests to the proxy cluster members using the traditional HTTP plug-in router. By default, the request distribution to the proxy servers is strictly round robin, and the Web servers do not have to consider HTTP session affinity. The members of the proxy cluster forward requests to the members of the super cluster. The proxy server maintains the HTTP session affinity when needed. There is an extra network hop involved (from the Web server to the proxy server), but in most cases this should not be a matter of great concern. However, one area of special consideration for such a topology is the generation of the HTTP plug-in configuration file.

Generating the plugin-cfg.xml file

To support routing to proxy servers, a special HTTP plug-in configuration file is required. The traditional plug-in configuration file contains the cell and application topology. In this scenario, the plug-in configuration file only needs to understand how to route to the proxy servers. The proxy server itself contains mechanisms for plug-in file generation and automatic distribution. To configure the generation of the plug-in file, you must access the Proxy settings link from the proxy server configuration page, shown in Figure 17.

Figure 17. Proxy configuration

The Proxy settings link shown here is for a single proxy server configuration. If you instead had multiple proxy servers contained in a proxy cluster, you would need to go to the proxy cluster configuration page to find this same Proxy settings link.

For this discussion, the settings you are concerned with are those that deal with plug-in configuration file generation and propagation. For these options, scroll down through the proxy server settings page until you reach the section called Proxy Plugin Configuration Policy (Figure 18). Here you will find two settings that deal with the HTTP plug-in configuration file:

  • Generate Plugin Configuration

    This value controls whether you want the proxy server to generate a plugin-cfg.xml file and the scope in which the generation occurs.

    Figure 18. Proxy Plugin Configuration Policy

    For the scope of the plug-in configuration generation, you can specify None, All, Cell, Node, or Server. You can find the generated plug-in configuration file in the <profile home>/etc directory.

  • Plugin config change script

    In the Plugin config change script field, you can enter the name of a script file that you want run once the generation is complete. You could use this option to automate the task of plug-in file propagation to the desired Web server(s). Figure 19 shows the same section of the proxy settings panel, with the flyover help text for this field.

    Figure 19. Proxy Plugin Configuration Policy

The main point of this discussion is that in order to configure Web servers to route to proxy servers via the HTTP plug-in, you must configure the proxy server to generate the plug-in configuration file.
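
As an illustration of what a Plugin config change script might do, Listing 6 is a small Python sketch that simply copies the freshly generated file to the Web server's plug-in directory. Both paths are assumptions for this example; in practice, the script could just as easily invoke scp or your own distribution tooling.

Listing 6. Example Plugin config change script (sketch)

# copy_plugin_cfg.py -- example change script: copy the generated plugin-cfg.xml
# to the Web server plug-in directory. Both paths are placeholders.
import shutil

GENERATED = '/opt/IBM/WebSphere/AppServer/profiles/proxyProfile/etc/plugin-cfg.xml'
WEBSERVER_COPY = '/opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml'

shutil.copyfile(GENERATED, WEBSERVER_COPY)
print('Copied %s to %s' % (GENERATED, WEBSERVER_COPY))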

Additional considerations

Presently, the generated plugin-cfg.xml only contains the proxy servers that are up and running at the time of generation. If you have proxy servers configured but not running, they will not appear in the generated plugin-cfg.xml file. Therefore, you want to be sure you use a file created after all of your proxy servers have been started.

When using a proxy server for plug-in configuration file generation, the plugin-cfg.xml file is automatically regenerated on any change in the environment that could affect its contents. Examples of such changes include:

  • Starting a new proxy server.
  • Installing or uninstalling an application.
  • Modifying virtual host definitions.
  • Modifying a context root.

In the case of recovery or similar actions during plug-in routing file generation, you might see two versions of the routing file in the <profile home>/etc directory: plugin-cfg.xml and plugin-cfg-<member name>.xml. The latter is a transient file and typically should not be used for actual routing. If you find more than one version of the routing file, delete the plugin-cfg-<member name>.xml file and regenerate the plug-in configuration.

Proxy cluster

If you require multiple proxy servers for high availability and scalability, it is recommended that you create a proxy cluster. The proxy cluster was originally conceived for ease of configuration: create a proxy server containing all the necessary artifacts (such as redirection rules) and then create additional proxy servers (cluster members) from that template. This saves you from having to reconfigure the same rules in multiple proxy servers.


Conclusion

Part 1 of this two-part article described the challenges you might face when scaling Java EE applications by deploying them in large WebSphere Application Server clusters. The general principle of the super cluster, a two-level hierarchical WebSphere Application Server cluster, was discussed, along with how certain classes of applications serving HTTP clients can be deployed in a super cluster to attain extreme application scalability.

The principle of the super cluster depends heavily on the request forwarding capabilities of various routers involved in the topologies. Topologies involving the traditional Web Server Plug-in, the WebSphere Application Server classic proxy server, and a combination topology using both types of routers were presented as examples. In some cases, you have to manually alter the automatically generated static routing information. The technique of application deployment in multiple clusters was also explained.

The need for a super cluster does not occur in everyday practice, but should it happen for HTTP clients, it can be addressed. All super cluster options have some limitations, but the restrictions might be appropriate depending on the given scenario.

A super cluster might be considered when:

  • You presently need a cluster of more than 100 members, or
  • Your deployment architecture needs to be flexible enough to accommodate more than 100 application servers in a future cluster.

Though there is no theoretical limit on the number of WebSphere Application Server instances a super cluster can contain, super clustering should not be considered a panacea for scalability. Resources used by specific Java EE applications can impose restrictions on the scalability of those applications. For example, for a database-related application, the maximum number of database connections that the underlying database server can serve may limit the number of WebSphere Application Server instances in which the application can be simultaneously deployed, and, hence, the scalability of the application.

Part 2 will continue to explore the super cluster technique by bringing the DMZ Secure Proxy Server for WebSphere Application Server, the WebSphere Virtual Enterprise on demand router, and WebSphere eXtreme Scale product into the discussion.


Acknowledgements

The authors are grateful to the following persons who contributed toward either the preparation or review of this article for its technical contents and accuracy: Bob Westland, the WebSphere Application Server Network Deployment workload manager architect; Peter Van Sickel, IBM Software Services for WebSphere consultant; Benjamin Parees, WebSphere Virtual Enterprise development; Keith Smith, on demand router architect; and Utpal Dave, IBM Software Services for WebSphere consultant.

Resources
