A previous article discussed the reasons why administrators and developers should be interested in IBM WebSphere Extended Deployment (XD), heralding its unprecedented value in monitoring, availability, system visualization, and partitioning. What is truly revolutionary about WebSphere XD, though, is the amazing set of new options it provides to network administrators when designing a topology that must meet the goals of all the various constituencies that need to be kept happy. These include:
- Application developers and business users who want their applications to run at the top of their form, all the time, and to have service level agreements (SLAs) that are consistently and reliably fulfilled.
- Corporate accountants who want to make sure that hardware is only purchased when it is absolutely needed, and that the company gets the best value for its dollars (or yen, or euros, or yuan).
- Network administrators who need to successfully manage the topologies and applications that they've been given.
To understand how WebSphere XD can help a topology designer accomplish some of these ends, we will first review some features of a WebSphere Application Server Network Deployment (hereafter referred to as Network Deployment) topology, so that we can then understand how the new features in WebSphere XD are implemented, and why they can be so useful in meeting these goals.
Reviewing Network Deployment topologies
Let's begin by examining something that should be familiar to most WebSphere developers and topology designers: a high availability topology that takes advantage of the capabilities of Network Deployment.
Figure 1. Network Deployment topology
In Figure 1, you see that requests are initially directed to an IP sprayer, either the WebSphere Network Dispatcher or some other IP sprayer. The requests are then forwarded to the layer of Web servers, where the WebSphere plug-in evaluates each request and determines whether it should be forwarded on to the application server layer.
This topology has been pretty consistent ever since the introduction of WebSphere Application Server V3.5 Advanced Edition, and it has formed the basis of the scalability and high availability story for WebSphere ever since. However, there are some limitations with how the plug-in works that we need to understand so that we can, in turn, understand why WebSphere XD does things differently.
The information used by the plug-in to determine how to route requests to application servers is static. That is, the mapping between request URLs and the set of application servers that can service those URLs is provided by an XML file (called plugin-cfg.xml in WebSphere Application Server V5.x and later) that is generated by the deployment manager and must be manually copied by the administrator to the Web server machines. When the URLs change, or when a new server is added, the plug-in file must be regenerated and replaced. (In WebSphere Application Server V6, the plugin-cfg.xml file can be pushed to the Web server, but this approach has a number of difficulties which frequently make this method inappropriate.)
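To make the static nature of this routing concrete, here is an abbreviated sketch of what a generated plugin-cfg.xml looks like; the cluster, member, and host names are placeholders. Any change to the topology means regenerating and redistributing this file:

```xml
<!-- Abbreviated sketch of a generated plugin-cfg.xml; all names are placeholders -->
<Config>
  <ServerCluster Name="cluster1" LoadBalance="Round Robin">
    <Server Name="member1" LoadBalanceWeight="2">
      <Transport Hostname="appsrv1.example.com" Port="9080" Protocol="http"/>
    </Server>
    <Server Name="member2" LoadBalanceWeight="2">
      <Transport Hostname="appsrv2.example.com" Port="9080" Protocol="http"/>
    </Server>
  </ServerCluster>
  <UriGroup Name="default_URIs">
    <Uri Name="/myapp/*"/>
  </UriGroup>
  <!-- Ties the URI group to the cluster; adding a server or URI means regenerating -->
  <Route ServerCluster="cluster1" UriGroup="default_URIs"/>
</Config>
```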
There is no communication path back from the application server machines to the Web server plug-in. As a result, the plug-in does not know how busy a server is; often, the plug-in's first indication that a server is overloaded is when the server stops responding to requests.
The Web server plug-in is, by necessity, a small, portable piece of code. Since it must be compatible with all of the Web servers that Network Deployment supports, and since it also runs inside the DMZ (in an unsecured zone), the plug-in itself must remain small and relatively feature-poor to meet these needs.
As you can see, there are some consequences of the current arrangement. Changing the arrangement of a topology is a manual operation, since the plug-in file must be regenerated and replaced, and the plug-in is limited in the amount of information it can gather about the state of the application servers. Admittedly, these consequences are relatively minor if you view them only in terms of obvious problems. However, it is better to think more expansively about what powerful features we would like an intelligent load balancer to provide. This is one of the key innovations in WebSphere XD and is embodied in the On Demand Router.
Introducing the On Demand Router
The On Demand Router (ODR) is an intelligent, Java™-based routing engine that forms the core of WebSphere XD topologies. The ODR embodies many of the features of both the Network Deployment Web server plug-in and edge-of-network routers, like the WebSphere Edge Server Network Dispatcher. However, the ODR goes significantly beyond those features to support the on demand features of WebSphere XD.
The ODR:
- Listens for requests that it proxies (currently only on HTTP, but other protocols such as RMI-IIOP and JFAP (WebSphere embedded messaging) are planned future extensions).
- Classifies the requests according to the configuration of the WebSphere (or other) application servers for which the requests are destined.
- Prioritizes the requests and uses its knowledge of the status of the application servers to determine which application server each request should be routed to.
- Queues and issues the requests to the application servers that are servicing them according to the weight assigned to the requests.
- Communicates with other parts of the XD system to coordinate its activities with other components and drive features, like dynamic placement.
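The classify-prioritize-queue flow described above can be imagined as a small sketch. This is illustrative only: the real ODR classifies requests against administrator-defined work classes, while the URI patterns and priority tiers below are invented.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative sketch of the ODR's classify-prioritize-queue flow.
// The URI patterns and priority values are invented for this example.
public class OdrSketch {
    static class Request {
        final String uri;
        final int priority; // lower number = dispatched sooner
        Request(String uri, int priority) { this.uri = uri; this.priority = priority; }
    }

    // Map an incoming URI to a priority tier (a stand-in for classification).
    static int classify(String uri) {
        if (uri.startsWith("/gold/")) return 1;
        if (uri.startsWith("/silver/")) return 2;
        return 3; // best effort
    }

    public static void main(String[] args) {
        PriorityQueue<Request> queue =
            new PriorityQueue<>(Comparator.comparingInt(r -> r.priority));
        queue.add(new Request("/bronze/report", classify("/bronze/report")));
        queue.add(new Request("/gold/trade", classify("/gold/trade")));
        // When resources are constrained, higher-priority work leaves the queue first.
        System.out.println(queue.poll().uri); // prints /gold/trade
    }
}
```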
A revised topology, including the ODR, could look something like Figure 2.
Figure 2. Revised WebSphere XD topology
There are a number of advantages that this topology holds over the standard Network Deployment topology:
Here, the routing information used by the ODR is dynamic; as applications are deployed (or undeployed), or as application servers are added to (or removed from) the topology, the ODR routing tables are automatically updated. This eliminates the need for a plugin-cfg.xml file, since the communication between the deployment manager and the ODR is direct.
Since the ODR is an active application server instance in its own right, it can use additional information from the application servers to make dynamic determinations about request weighting.
However, these advantages are just the beginning: making the routing dynamic and creating a communications path between the application servers (and deployment manager) and the ODR have opened up a number of other exciting possibilities that we will see in the next section.
As you look at Figure 2, you may wonder why we placed a proxy server in front of the ODR servers. The reason is to maintain the integrity of the DMZ security. There are three fundamental principles of a DMZ that we are applying here:
Inbound network traffic from outside must be terminated in the DMZ. A network transparent load balancer, such as Network Dispatcher, does not meet that requirement alone (recall that in Figure 1 we placed a Web server in the DMZ).
The type of traffic and number of open ports from the DMZ to the intranet must be limited.
Components running in the DMZ must be hardened and follow the principle of least function and low complexity.
The ODR is essentially a Java application server with added specialized functions. As a result, it includes a large amount of complex (and useful) code. Unfortunately, something this complex with so many open ports is not appropriate for a DMZ. Furthermore, for maximum benefit, the ODR must be able to communicate with the application servers and participate in the cell management infrastructure. That results in a lot of communication. If the ODR were in the DMZ, that would result in numerous openings in the inner firewall. For more on the steps to harden a WebSphere Application Server infrastructure, refer to WebSphere Application Server V5 advanced security and system hardening.
For these reasons, we must place a simpler component in the DMZ that terminates network connections. We chose a proxy server in Figure 2, but other components are also appropriate, such as an SSL accelerator that terminates inbound SSL, an authenticating proxy server (like Tivoli® Access Manager WebSEAL), or even a simple Web server that forwards requests to the ODR "application server" using a rather simple plugin-cfg.xml file.
One moderately unfortunate implication of this topology is that we have added another hop to each user request. That is, requests must now pass through two components (the proxy and the ODR) before reaching the application server, which does increase latency. However, for many environments with high demand requirements, this price is worth paying because the ODR improves load balancing and provides a multitude of other useful features. In the following sections, we will discuss some of those features.
Introducing service policies and routing rules
A basic feature of WebSphere XD that underlies many of its more advanced features is the concept of a service policy. A service policy is a named entity that defines a performance goal and a priority level, and relates to a set of transaction classes. You apply that goal to application URLs by connecting the transaction class to a work class (that maps to a set of URLs) through the creation of a routing rule.
That may sound a little complicated; essentially, this enables the creation of a flexible arrangement for determining which URLs should be held to which performance goals, and under which circumstances.
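To make the chain of concepts concrete, here is a toy sketch of the wiring: URIs group into a work class, a routing rule binds the work class to a transaction class, and the transaction class belongs to a service policy. Every name and goal below is invented for illustration; in the real product these objects are defined through the XD administrative tooling, not in code.

```java
import java.util.Map;

// Toy model of the service-policy wiring described above.
// All names and goals are invented for illustration.
public class PolicySketch {
    // A work class groups a set of URI patterns.
    static final Map<String, String> URI_TO_WORK_CLASS =
        Map.of("/trade/*", "TradeWork", "/report/*", "ReportWork");
    // A routing rule binds a work class to a transaction class.
    static final Map<String, String> WORK_TO_TRANSACTION_CLASS =
        Map.of("TradeWork", "TC_Gold", "ReportWork", "TC_Bronze");
    // Each transaction class belongs to a service policy with a goal and priority.
    static final Map<String, String> TRANSACTION_CLASS_TO_POLICY =
        Map.of("TC_Gold", "Gold: 95% of responses under 500ms",
               "TC_Bronze", "Bronze: best effort");

    // Walk the chain from URI pattern to governing service policy.
    static String policyFor(String uriPattern) {
        String work = URI_TO_WORK_CLASS.get(uriPattern);
        String tc = WORK_TO_TRANSACTION_CLASS.get(work);
        return TRANSACTION_CLASS_TO_POLICY.get(tc);
    }
}
```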
These concepts are key to understanding how the WebSphere XD ODR operates. Once an administrator has classified an application's URLs and connected them to service policies, then the ODR will begin examining incoming URLs and compare their performance characteristics to the goals defined in the corresponding service policies. The ODR can accomplish this through several mechanisms:
The ODR can use its knowledge of the relative priorities of different service policies to determine how long requests will wait in its own internal queues; higher-priority requests will receive service before lower-priority requests when resources are constrained.
The ODR can use its knowledge of response time goals and the relative availability of resources (CPU and memory) in different cluster members to perform dynamic workload management.
The ODR can work with other parts of the system to achieve dynamic placement within a dynamic cluster. Dynamic placement is a facility in which the XD autonomic managers determine how many copies of an application server are appropriate within a cluster. For example, if one application is performing well above its performance goals, while another higher- or equal-priority application is underperforming because the machines running clustered copies of that application are overtaxed, then the autonomic manager can decide to adjust the number of application servers running each application, decreasing the former and increasing the latter to improve the performance of the overwhelmed application.
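The dynamic workload management idea in the second item can be illustrated with a toy sketch. The single free-CPU metric and the greedy selection are our simplifying assumptions, not XD's actual algorithm, which weighs several signals.

```java
// Toy illustration of weight-aware routing: send the next request to the
// cluster member with the most spare capacity. The single free-CPU metric
// and greedy choice are simplifying assumptions, not XD's real algorithm.
public class WeightedRouter {
    static int pickMember(double[] freeCpuFraction) {
        int best = 0;
        for (int i = 1; i < freeCpuFraction.length; i++) {
            if (freeCpuFraction[i] > freeCpuFraction[best]) {
                best = i; // prefer the least-loaded member
            }
        }
        return best;
    }
}
```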
Some of these facilities (such as the classification and queuing of work) are entirely the function of the ODR, and will work well with many different types of workloads. Others (such as dynamic placement) are more specialized and work in concert with other parts of the XD system. All require the use of the ODR, which is the fundamental architectural linchpin of XD, and which possesses still more features.
Caching and other advanced features
Before we look at caching in XD, let's first take a broad look at the caching opportunities by tier in a typical WebSphere Application Server Network Deployment V6 architecture. Figure 3 shows the major components of such a topology, with caching areas at each tier, or "zone," which we will consider to be a logical collection of servers that share similar attributes and perform similar functions. (Of course, the Network Deployment application server cache is much more sophisticated than this diagram shows; for example, the cache can be distributed to multiple application servers in a cluster.)
Figure 3. Caching options
The caching available in a Network Deployment environment consists of:
- ESI edge caching of servlet/JSP pages at Edge devices (such as a proxy server) and at the Web server plug-in.
- Servlet/JSP fragment caching at the WebSphere Application Server Web container.
- Web services SOAP request/response caching.
- API-based caching in the application server, namely the Object/DistributedMap cache, which caches objects and returns them by reference, and the command cache, which caches the results of command pattern executions and returns copies of them.
The ODR in WebSphere XD V6 is derived from the WebSphere Application Server Network Deployment V6.02 proxy server, and inherits its caching capabilities as well, so let's look first at some of the new capabilities introduced by this new proxy server.
The Network Deployment V6.02 proxy server (and thus the ODR) has the ability to cache static and (in some cases) dynamic content by caching responses according to the RFC 2616 (section 13) rules, which rely on the presence (and applicability) of the HTTP 1.1 caching headers. By default, WebSphere Application Server V6 automatically sets the appropriate HTTP 1.1 headers for caching of static content. On each response containing static content, WebSphere Application Server will set the appropriate Cache-Control and Expires headers: it subtracts the last-modified date of the static content in the WAR file from the current date and time, multiplies the result by a factor called the last-modified factor (0.1 by default), and adds that product to the current time to obtain the date and time at which the content becomes stale. Upon the next request for that particular URL, the content cache will use this calculated value to determine whether it can return the cached value of the response, or whether it should return to the application server for a fresher version of the response.
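The expiration arithmetic can be written out as a small sketch. The method name is ours; only the default factor of 0.1 comes from the product behavior described above.

```java
// Sketch of the last-modified-factor freshness heuristic described above:
// content that has gone unchanged for a long time is assumed to stay fresh
// proportionally longer. Method name is ours; the 0.1 default is the documented one.
public class StaticCacheExpiry {
    static long expiresAtMillis(long nowMillis, long lastModifiedMillis,
                                double lastModifiedFactor) {
        long age = nowMillis - lastModifiedMillis; // how long since the content changed
        return nowMillis + (long) (age * lastModifiedFactor);
    }
}
```

For example, content last modified 10 days ago is considered fresh for one more day under the default 0.1 factor, which is why a freshly rebuilt WAR file (age near zero) causes frequent revalidation at first.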
In many cases, this is a good assumption, since content that has not recently changed on the application server can be assumed to change slowly. This can cause a problem in one particular case, though, if a WAR file is newly rebuilt and deployed on a server; the last-modified date of all the static content in the WAR file will reflect the date that the WAR file was rebuilt. The cache will refresh itself more often right after an application is deployed or redeployed than it will after it has been deployed for some period of time. To rectify this, you can, at any time, create a static cache rule that enables you to match a particular URL pattern with a new (and probably larger) last-modified factor. You can also use a static cache rule to disable the caching of static content that matches a particular URL pattern; for example, if that content were sensitive or private.
The proxy caching capability is built on the object cache infrastructure of WebSphere Application Server V6. One thing this provides is the ability to use a distributed cache to share information between ODR instances. By default, each proxy server (or ODR) is set up with an independent object cache instance. To connect the instances together, you must first create a replication domain for the object cache, and then set the object cache instance in each ODR that will be replicated in that replication domain to refer to the new domain. This ensures that as information is updated in the cache, the changes will be propagated to the other members of the replication domain.
The WebSphere Application Server V6 proxy cache server also has some capabilities for Edge Side Includes (ESI), in particular, enabling page caching of assembled pages and processing of ESI rules to determine how variants of pages (with different ESI-included content) can be cached. We will revisit this topic a little later; if you want to take advantage of ESI caching, this will have a bearing on the arrangement of the topology of your network.
Finally, Network Deployment V6 adds the powerful and useful API-based Object/DistributedMap cache for applications that need the ability to cache information within the application server. This capability was formerly available only in Enterprise editions of WebSphere Application Server V5, but is included in all V6 editions.
All of these cache technologies are available in WebSphere XD V6, with even more additional options, such as the ObjectGrid and WebSphere Partitioning Framework technologies.
WebSphere XD ObjectGrid
ObjectGrid is an extensible, transactional object caching framework for quick and easy data sharing that improves application scalability and performance. It can be used with Java 2 Platform, Standard Edition (J2SE) 1.4.2 and higher, and with J2EE 1.4 applications, to retrieve, store, delete, update, and invalidate objects in its cached grid of objects, using either application APIs or XML-based facilities. Like the DistributedMap cache in Network Deployment V6 mentioned above, it is built on the Java Map interface, but it is more powerful than the earlier technology, adding sophisticated features like configurable cache eviction strategies and preloading that are not available for the DistributedMap interface. Since ObjectGrid is an API-based cache, using it requires changes to your program to call the cache API specifically, but the changes are minimal in those cases where your program already uses a Java Map-based API.
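The Map-style idiom that ObjectGrid builds on can be sketched with a plain JDK map; ObjectGrid's own API layers transactions, pluggable eviction, and preloading on top of this same get/put/invalidate pattern, so code already written against a Map adapts with minimal change.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the Map-style caching idiom ObjectGrid builds on, using a plain
// JDK map as a stand-in. ObjectGrid's real API adds transactions, configurable
// eviction strategies, and preloading on top of this pattern.
public class GridIdiom {
    static final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<>();

    static Object lookup(String key)            { return cache.get(key); }
    static void store(String key, Object value) { cache.put(key, value); }
    static void invalidate(String key)          { cache.remove(key); }
}
```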
WebSphere Partitioning Framework
While ObjectGrid provides the ability to cache key and value pairs in transactional fashion, the WebSphere Partitioning Framework (WPF) provides context-based routing according to object characteristics. The WPF enables the partitioning of applications and data, improving database as well as in-memory caching and workload management, dramatically decreasing contention for shared data sources.
The WPF lets you divide your caches logically across multiple JVMs so that you are not limited by the size of a single heap. A cache can be highly available so that when information is lost, it is quickly recovered. The cache can also have an innate "grouping" concept so that you can optimally organize the entries in the cache and keep related things together.
To maintain ObjectGrid consistency and integrity, you can use the partitioning facility to spread a large ObjectGrid out into many partitioned ObjectGrids; the partitioning facility's context-based routing directs requests according to ObjectGrid keys. For example, suppose you need ObjectGrid to handle a large number of objects that cannot fit into a single JVM heap. You can use the partitioning facility to load data into different servers, and its context-based routing finds the right ObjectGrid for each request.
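The essence of context-based routing is that a key maps deterministically to one of N partitioned grids, so every request for that key lands on the same JVM. The modulo scheme below is an illustrative assumption, not WPF's actual routing implementation.

```java
// Illustrative sketch of key-based partition routing: each key deterministically
// maps to one of partitionCount grids, so related requests land on the same JVM.
// The hash-modulo scheme is an assumption for illustration, not WPF's algorithm.
public class PartitionRouter {
    static int partitionFor(String key, int partitionCount) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}
```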
Ramifications for network designers
As you can see, the capabilities of the ODR provide numerous options to a topology designer. In fact, there are so many new choices that it's probably best to think of how the ODR affects your topology design by splitting the problem of routing into smaller pieces. Next, we will take the general idea of zones, introduced earlier, and consider subdividing the trusted zone even further. There are four sub-zone types that the trusted zone of a particular WebSphere XD network might contain. They are:
- Legacy WebSphere ND Zone: Network Deployment applications are deployed just as they always were. For applications running in this zone, no changes are necessary to the application servers or to the application deployment process.
- Dynamic Operations Zone: WebSphere XD is deployed on the application servers themselves. This installation enables most of the new features of WebSphere XD (health monitoring and dynamic placement) to come into play.
- Non-WebSphere Zone: This zone showcases perhaps the most exciting new capability. The ODR can intelligently route not only to HTTP servers (as shown in Figure 2) but to other application server types as well, including other J2EE servers and even .NET servers.
- Request Routing Zone: This zone contains the ODR(s) and any security proxies that are necessary to pre-process requests before they are directed to their final destinations.
Figure 4. Zone types
Let's consider the ramifications, advantages and considerations of dividing the trusted zone into these different sub-zones.
Request Routing Zone
For a moment, assume that you see the value of adding the ODR into your topology. Your first decision in this zone will be to determine how many ODRs you will need. In any production system, we recommend you have at least two ODRs so that there will not be a single point of failure. Beyond that, the number of ODRs needed will be determined by the amount of load you expect the network to experience. A rule of thumb is that you can begin with the number of HTTP servers that you currently have routing to your application servers. Currently, the performance of the ODR is within a few percentage points of the routing performance of the Web server plug-in in WebSphere Application Server V6, so starting from this point is a good approach. However, you will then need to consider running performance tests where you vary the number of ODRs to determine what the optimal number would be.
A second major decision is to decide what caching options you will enable on the ODRs. For instance, you must determine if you are going to cache in the ODR at all. Most customers should enable caching of static content, especially if the content will be served from within a WAR file on the application servers. In this case, the caching capabilities of the ODR that we examined earlier will provide you with a performance boost over using the file serving servlet alone without an upstream content cache.
Next, you need to decide whether you are going to replicate the cache. If your content is made up of a small number of particularly large pages that are infrequently updated, then it might not be worthwhile to replicate the cache since time will be wasted in transferring the very large pages between the caches. However, in cases where the content consists of a large number of somewhat small pages accessed very frequently, then cache replication would be a better way to populate the caches quickly and improve the overall response time for serving all static content.
Legacy WebSphere Zone
One of the key considerations in introducing this zone into your network design is that it gives you time to consider the advantages of the different capabilities of WebSphere XD without having to put them all into place at once. Placing the ODR in front of an existing Network Deployment network enables you to move application servers to WebSphere XD in a piecemeal fashion if you want: first move an application server to the latest version of Network Deployment, then move it to WebSphere XD while retaining static clustering from Network Deployment, and finally transition to full XD capabilities as servers move out of this zone. However, even on back-level servers with static clusters, simply putting an ODR in front of your existing Network Deployment clusters still gives you classification and queuing through service policies and routing rules, just as it does for non-WebSphere Application Server URIs.
Dynamic Operations Zone
The servers in this zone can participate in all the advantages that WebSphere XD provides. The key consideration here is that all of the application servers must not only be at the prerequisite version of Network Deployment (V6.02), but they must also have WebSphere XD installed on them as well. This has licensing ramifications; since WebSphere XD has a per-application server license cost, you must ensure that you have an adequate number of XD licenses in this zone before you move applications here. Given that only WebSphere XD V6 application servers can participate in this zone, it is a best practice to migrate your applications to Network Deployment V6.02 prior to moving them into this zone. The benefits of taking these steps, however, are substantial. Not only can you define node groups (which are shared pools of nodes in which dynamic placement operates) in this zone, but you also gain health policies and edition management in this zone.
Non-WebSphere Zone
Perhaps the most radical feature of WebSphere XD (when compared to Network Deployment) provided by the ODR is the ability to route to destinations that are not even WebSphere application servers. These other destinations could be as simple as an HTTP server (as we showed in Figure 2), but could just as easily be other J2EE application servers (like an Apache Geronimo server), or even an application server running Microsoft® .NET. You define these destinations by defining a generic server cluster, which is a set of HTTP destinations that are not managed by WebSphere, but which can be routed to from the ODR.
In fact, the ODR even lets you use service policies and gain classification and queuing with non-WebSphere destinations, just as you would with an application hosted on WebSphere. But even that is not all: if you install a piece of software known as the XD Remote Agent on each non-WebSphere machine, you can also gain weighted routing to members of the node group based on the capabilities and the available CPU and memory resources on that server. The addition of this level of feedback enables the ODR to go beyond what other commercially available routers can do in routing traffic where it can best be handled.
A final consideration here is where you will put your HTTP servers in this new topology. If you use ESI and the page assembly is done by an edge server, like an Akamai server, then you can put the HTTP server in the Non-WebSphere Zone, as shown in Figure 2. However, if you need to use ESI and are not using an edge server for page assembly, then the page assembly capabilities of the WebSphere Web server plug-in may dictate that you put the HTTP server in front of the ODR. In this case, your topology would look somewhat like that in Figure 1, but instead of routing directly to the application servers, the plug-in would instead only route to the ODRs, which would then route to the application servers. This is shown in Figure 5.
Figure 5. XD topology with plug-in routing
If your topology does not take edge page assembly into account, then page assembly will be done by the application server (if you use <jesi> tags or the cachespec.xml ESI-enablement option), which may defeat the purpose of using the ESI tags. However, this is not the only reason to choose the topology variation shown in Figure 5. Other considerations, such as the use of third-party security plug-ins to the Web server that must execute prior to the processing of requests by the WebSphere plug-in may also lead you to choose this topology. In this case, you will lose the ability to cache static content from the Web server in the ODR, but you will still be able to cache static content served from the application server in this arrangement.
In this article we have begun the journey of exploring the ramifications the new capabilities of WebSphere XD can have on your network topology. There are many more issues we will need to explore to make our journey complete (such as application design for ObjectGrid and WebSphere Partitioning Framework) -- but that will have to wait for another article.
- WebSphere Application Server V5 advanced security and system hardening, IBM WebSphere Developer Technical Journal, June 2004
- Comment lines from Kyle Brown: Why you need WebSphere Extended Deployment, IBM WebSphere Developer Technical Journal, April 2004
- Download WebSphere Application Server trial version