HTTP session management is a function offered by Java Enterprise Edition (JEE) application servers. It allows JEE applications to store state information about a given user across many HTTP requests. The JEE specification provides an API to store and retrieve information specific to the given user. WebSphere Application Servers and other competing products provide session-replication functionality, normally either by writing to a database or through memory-to-memory replication technology.
These approaches, while extensively used today, have operational challenges and important cost implications. HTTP session replication enables seamless session recovery in the event of an application failure, but currently, no reliable, predictable, and cost-effective solution exists to handle session recovery in the event of a data-center failure. Until now, session replication across cells and data centers has been discouraged, primarily due to cost and performance impacts. With the introduction of WebSphere eXtreme Scale’s zone technology, you can now recover sessions in the event of a data-center failure.
This paper discusses a common usage scenario and demonstrates the implementation of WebSphere eXtreme Scale (hereafter referred to as eXtreme Scale) as a separate in-memory data grid to store HTTP sessions. It also delineates past and existing technology in this context, and introduces eXtreme Scale as a technology that differentiates itself by addressing scalability challenges in a cost-effective manner. Finally, we show how eXtreme Scale simplifies the technical implementation of a grid, making it a compelling yet simple solution for storing HTTP sessions.
But first, we need to define some session-related concepts:
- Session management: The process of keeping track of a user's activity across sessions of interaction.
- Session replication: The process of replicating or copying the session data or object, either to another process (in memory, see Figure 1) or to a database (Figure 2).
- Session storage: Persistence of (session) data to an external store (or in-process memory) to provide an efficient way to share session-state information across multiple machines running the same application.
Figure 1: Memory-to-memory session replication
Figure 2: Database session replication
Challenges of storing HTTP sessions
Many enterprise applications today require HTTP session persistence. This requirement stems primarily from end-user expectations, regulatory constraints, service level agreement (SLA) commitments, or a combination thereof. Session persistence provides the application with features such as handling session-state information, high availability of end-user sessions, and application performance enhancements. The WebSphere Application Server application infrastructure ensures efficient session persistence management and configuration. For instance, WebSphere Application Server provides two distinct mechanisms to persist sessions: memory-to-memory replication, and storing session objects in a shared database. While both mechanisms persist sessions, the validity of session state across clustered application servers must still be ensured.
Best practices to ensure validity of session state include:
- Session affinity: A mechanism to pin a user session to a particular server.
- Session invalidation: A mechanism that must be performed across all session-persistent storage entities (JVMs or databases).
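Session affinity can be pictured as a deterministic routing function from session ID to server: the same ID always lands on the same server as long as the cluster membership is unchanged. The following toy sketch (plain Java; the class and method names are illustrative, not WebSphere's actual plug-in logic) shows the idea:

```java
import java.util.List;

/** Toy sketch of session affinity: route a session ID to the same server every time. */
public class AffinityRouter {

    static String route(String sessionId, List<String> servers) {
        // A deterministic hash means the same session is always "pinned"
        // to the same server, as long as the server list is unchanged.
        int idx = Math.floorMod(sessionId.hashCode(), servers.size());
        return servers.get(idx);
    }

    public static void main(String[] args) {
        List<String> cluster = List.of("server1", "server2", "server3");
        String first = route("JSESSIONID-abc123", cluster);
        String second = route("JSESSIONID-abc123", cluster);
        System.out.println(first.equals(second)); // true: the session is pinned
    }
}
```

Real affinity implementations (such as the WebSphere HTTP server plug-in) carry the routing hint in the session cookie rather than recomputing a hash, but the invariant is the same: one session, one preferred server.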
While these options exist to store HTTP session and state beyond the life of an application server or JVM (which facilitates high availability) these approaches have challenges of their own. The growth and changing dynamic of Web applications have forced application infrastructure to rethink session-persistence strategies. Some of these growing challenges include:
- Inefficient use of runtime real estate; that is, using the application server's JVM heap to store sessions wastes resources the application itself needs.
- Higher administrative and maintenance costs due to increase in user base, and subsequent linear growth in application hosting infrastructure.
- Performance considerations for storing, replicating, and managing the state of session objects.
- Performance considerations due to large session objects in some specific use cases.
The challenges for session storage revolve around performance costs due to serialization of the session object to a database, or to other application servers or JVMs across the network. This performance cost, coupled with the management of session state such as replication, updates and invalidation, add significant performance bottlenecks to the application-hosting platforms.
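To see where the serialization cost comes from, the following self-contained sketch measures what a session manager would write out for a small versus a bulky session attribute map. The class name, attribute names, and sizes are illustrative; only the `java.io` serialization machinery is real:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class SessionSerializationCost {

    /** Serialize an object the way a session manager would before persisting it. */
    static byte[] serialize(Serializable obj) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) {
        // A hypothetical session attribute map: one small attribute vs. a bulky one.
        HashMap<String, Serializable> small = new HashMap<>();
        small.put("userId", "jdoe");

        HashMap<String, Serializable> large = new HashMap<>();
        large.put("userId", "jdoe");
        large.put("cart", new char[50_000]); // e.g. a bulky cached object

        // Every request that dirties the session pays this cost again,
        // plus the network or database round trip to persist the bytes.
        System.out.println("small session: " + serialize(small).length + " bytes");
        System.out.println("large session: " + serialize(large).length + " bytes");
    }
}
```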
Why use eXtreme Scale?
WebSphere eXtreme Scale addresses these performance considerations by introducing a new technological platform for improved session management and replication, thus addressing the challenges imposed by conventional HTTP session persistence mechanisms. Note that while eXtreme Scale also employs memory-to-memory replication of HTTP sessions, it significantly differs from the mechanism employed by WebSphere Network Deployment alone. eXtreme Scale provides features such as quality of service, replication reliability, and linear scalability. Furthermore, eXtreme Scale distinguishes itself from traditional approaches in these ways:
- Enabling a grid of JVMs with the sole purpose of storing HTTP session (or any Java) objects.
- Isolating the application runtime from the grid runtime, thereby freeing up the JVM heap for application use.
- Allowing linear scalability to accommodate growth, in sessions or the size of session objects.
- Providing implicit replication and management of session objects in the grid.
- Using zone support to enable the storage of session objects across geographically dispersed data centers (and cells), overcoming the traditional limitation of single-cell replication.
- Improving quality of service and replication reliability compared with WebSphere ND, which does not guarantee consistent replication; eXtreme Scale, by contrast, offers a reliable replication mechanism.
Figure 3 illustrates the application cluster separated, or isolated, from the HTTP session grid. Notice that the session filter acts as an intercept prior to forwarding the request to the JSP or servlet: the filter checks for the session in the session grid.
Figure 3. Example eXtreme Scale-enabled grid for session persistence (grid JVMs isolated from application JVMs)
WebSphere eXtreme Scale provides non-invasive integration for HTTP session management, so that you need not change the application logic. The two progressive approaches include:
- Configuring a servlet filter to an existing Web application with an in-memory data grid as the back end.
- Optionally, co-locating the application with the in-memory data grid in a client-server context.
The next sections discuss these approaches in further detail.
HTTP servlet filter, a standard part of the servlet specification
As Figure 4 shows, the HTTP servlet filter intercepts every request prior to forwarding it to a servlet or JSP. The servlet filter wraps the HTTPServletRequest and HTTPServletResponse objects that the application developer uses to access the request’s session state. The eXtreme Scale HTTPSession object overrides the object normally provided by the default session manager. Therefore there is no data duplication between the two session managers, and the eXtreme Scale session manager overrides the WebSphere session manager.
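Conceptually, the filter behaves like the simplified sketch below: intercept the request, load the session state from the grid, let the application run against it, then write changes back. The `Handler` interface and the in-process grid map here are stand-ins for the real servlet API and the remote grid; none of this is eXtreme Scale code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy model of a session filter: intercept, pull session state out of a
 * "grid" keyed by session ID, run the application, write changes back.
 */
public class SessionFilterSketch {

    // Stand-in for the in-memory data grid (session ID -> attributes).
    static final Map<String, Map<String, Object>> grid = new ConcurrentHashMap<>();

    /** Stand-in for the servlet the filter delegates to. */
    interface Handler { void handle(String sessionId, Map<String, Object> session); }

    /** Intercept, load, delegate, then synchronize back to the grid. */
    static void doFilter(String sessionId, Handler application) {
        Map<String, Object> session =
            grid.getOrDefault(sessionId, new HashMap<>()); // load from grid
        application.handle(sessionId, session);            // run the servlet/JSP
        grid.put(sessionId, session);                      // write changes back
    }

    public static void main(String[] args) {
        // First request: the "application" stores an attribute.
        doFilter("abc123", (id, s) -> s.put("userId", "jdoe"));
        // Second request: the attribute survives, served out of the grid.
        doFilter("abc123", (id, s) -> System.out.println(s.get("userId")));
    }
}
```

The real filter wraps `HttpServletRequest` so the application keeps calling the standard session API unchanged; only the backing store moves into the grid.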
Figure 4. Topology of eXtreme Scale co-located HTTP session persistence (grid and application are in the same JVM)
The filter ensures that the session data is synchronized with the grid. On each request, if HTTP session attributes have changed, they will be written back into the grid. The eXtreme Scale HTTP filter presents a choice of storing the sessions in the same JVMs as the servlet containers (co-located) or in a separate grid tier of remote JVMs. Each of these options has its own advantages and limitations, and should be selected with infrastructure design considerations in mind.
The synchronous interaction of the filter and the grid is also an important design consideration, because it ensures that any change to the HTTP session is immediately replicated to the grid. While this might be a desirable option in some cases, it does have a performance impact; eXtreme Scale allows a configuration (in the splicer.properties file) for this interaction to be asynchronous, where the changes are buffered and flushed to the grid at defined intervals. Note that this synchronous and asynchronous replication between the filter and the grid, while similar in concept, is entirely different from the replication mechanism configured between the grid servers.
Follow these steps to use an HTTP servlet filter with eXtreme Scale:
- Create the following files and package them in the WAR module, in the META-INF directory:
- objectGrid.xml: contains the definition of the grid itself, including the maps, locking behavior, plug-ins, and other specifications.
- objectGridDeployment.xml: contains a description of the grid’s deployment, such as how many partitions and zones, replication strategy, and other options.
- splicer.properties: provides the values used by addObjectGridFilter, and is one of the required input parameters for the script that splices the Web module with filter declarations. This file contains servlet context initialization parameters such as catalog server host and port, affinity, persistence mechanism, replication type and interval, and other context-related settings. This file is not a required artifact in the WAR module, but is included for information only.
- Run the addObjectGridFilter script on the WAR to add the filter to the Web module's deployment descriptor. The addObjectGridFilter script is usually located at:
<eXtreme Scale_HOME>/session/bin/addObjectGridFilter.bat | .sh
addSessionObjectGridFilter <location of WAR/EAR file> <location of properties file>
Example 1. Running the filter script
addSessionObjectGridFilter MyWebModule.war splicer.properties
- When using eXtreme Scale in a WebSphere Network Deployment (ND) managed environment, you would need to augment the managed nodes with eXtreme Scale. Then you would perform Steps 1 and 2 above, deploying the WAR to an eXtreme Scale augmented cluster.
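For reference, a splicer.properties file typically carries settings like the following. The key names shown are typical of this file as documented for eXtreme Scale, but treat the exact values (grid name, hosts, ports, intervals) as placeholders to be verified against your product documentation and environment:

```properties
# Name of the ObjectGrid that stores the sessions
objectGridName = session
# REMOTE: sessions live in a separate grid tier; EMBEDDED: co-located with the app
objectGridType = REMOTE
# Catalog service endpoints the session manager connects to (host:port)
catalogHostPort = cataloghost1:2809,cataloghost2:2809
# Buffer session updates and flush to the grid every N seconds (asynchronous mode)
replicationInterval = 10
```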
(Optional): Co-locating the application with the in-memory data grid in a client-server context
Using eXtreme Scale client-server topology for embedded applications is a deployment variation of the above techniques, in which the application is the client to the eXtreme Scale grid. The natural progression after adding an eXtreme Scale HTTP session filter is to decide upon the client-server topology for the Web application that is now embedded within the eXtreme Scale HTTP session filter. There are, in effect, two choices: either co-locate the application with the grid, or keep them isolated from each other. The choice of topology depends upon the application design objectives. Although this section explores the deployment topology within the context of the eXtreme Scale HTTP session filter, this discussion applies to any Web applications that employ session-management capabilities.
Co-locating the application with the grid (in the same JVM runtime)
This option means that the eXtreme Scale servers are co-located in the same processes where the servlets run. The eXtreme Scale session manager can communicate directly with the local ObjectGrid instance, since it is co-located within the same server process.
When using WebSphere Application Server as a runtime and grid container, simply place the supplied XML files (objectGrid.xml and objectGridDeployment.xml) into your WAR files’ META-INF directories. At that point, when the application starts, eXtreme Scale will automatically detect the presence of these files and launch the containers in the same process as the session manager. You can modify objectGridDeployment.xml to configure which type of replication (synchronous or asynchronous) and how many replicas you want. This step assumes that you have augmented the application server profile or instance with eXtreme Scale.
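As an illustration, the replication settings in objectGridDeployment.xml might look like the following. The element and attribute names follow the ObjectGrid deployment policy schema; the grid name, map set name, map reference, and partition count are illustrative and must match your grid definition:

```xml
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">
    <objectgridDeployment objectgridName="session">
        <!-- One synchronous replica in-zone, one asynchronous replica
             that zone rules can place in a remote zone -->
        <mapSet name="sessionMapSet" numberOfPartitions="13"
                minSyncReplicas="0" maxSyncReplicas="1" maxAsyncReplicas="1">
            <map ref="objectgridSessionAttributes"/>
        </mapSet>
    </objectgridDeployment>
</deploymentPolicy>
```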
Isolating the application from the grid (application and grid in separate JVMs)
This technique is beneficial for applications with either voluminous HTTP session traffic or relatively large HTTP session objects. In this case the eXtreme Scale session manager, which resides in the application server process, communicates with remote eXtreme Scale server processes. In effect, the eXtreme Scale session manager becomes the client to the eXtreme Scale servers, which host the grid containers.
To use a remote ObjectGrid, you need to configure the eXtreme Scale session manager to communicate with the catalog servers and grid servers. The session manager then uses a client connection to communicate with the catalog server and the container servers, wherever they may reside. If you want to start the ObjectGrid servers in independent, stand-alone J2SE processes, then launch the ObjectGrid containers using the XML files supplied in the session manager samples directory. This differentiation indicates that the HTTP session store is standalone and isolated. The core functionality of the files remains the same: objectGridStandalone.xml defines the grid for sessions, and objectGridDeploymentStandalone.xml defines the mechanics of grid deployment.
The eXtreme Scale zone-based replication allows for rules-based data placement, enabling high availability of the grid due to redundant placement of data across physical locations. This implementation is particularly appealing to enterprise environments that need data replication and availability across geographically dispersed data centers. WebSphere eXtreme Scale introduced zone-based support in V184.108.40.206. Zone support provides much needed control of shard placement in the grid, and allows for rules-based shard placement.
In the past, these enterprise computing environments were limited, due to the performance constraints imposed by networks and real-time data replication requirements. With the inclusion of zone support, WebSphere eXtreme Scale offers better scalability by decoupling the grid environment. With zone support, only the metadata shared by the catalog servers is synchronously replicated, while the data objects are copied asynchronously across networks. This technique not only enables better location awareness and access of objects, but also imposes fewer burdens on enterprise networks, by eliminating the requirement of real-time replication.
As long as catalog servers see zones being registered (as the zoned grid servers come alive), the primary and replica shards are striped across these zones. Additionally, the zone rules described in the objectGridDeployment.xml file dictate the placement of sync or async replica shards in the respective zones. Figure 5 shows two geographically dispersed zones storing HTTP sessions across two data centers. Note that the catalog servers are hosted in separate JVMs from the grid containers.
Figure 5. Geographically dispersed zones to store HTTP sessions across data centers
As a general practice, we recommend that you place only sync replicas in the same zone, and async replicas in a different zone for optimal replication performance. This placement would also be optimal for scaling across geographies or data centers.
Since core groups do not span zones, place one or two catalog servers per data center or zone, so that the catalog servers synchronize their object and shard routing information. A catalog service (one or more catalog servers) must be clustered for high availability in every zone. The catalog service retains topology information for all of the containers in the ObjectGrid and controls balancing and routing for all clients. Since the individual catalog servers maintain the catalog service and client routing, you need to understand the concept of a catalog service quorum: the minimum number of active catalog server members required for the grid system to operate correctly (to accept membership registrations and changes to membership, ensuring proper routing and shard placement).
Example 2. Zone meta-data from ObjectGridDeployment.xml file
<zoneMetadata>
    <shardMapping shard="P" zoneRuleRef="stripeZone"/>
    <shardMapping shard="S" zoneRuleRef="stripeZone"/>
    <zoneRule name="stripeZone" exclusivePlacement="true">
        <zone name="ZoneA" />
        <zone name="ZoneB" />
        <zone name="ZoneC" />
    </zoneRule>
</zoneMetadata>
Registration and consistency of grid servers can be ensured only when a quorum is established among the catalog servers. Writes to the catalog service state are committed only when a majority of the catalog servers participate in the transaction. Containers that are changing state cannot receive any commands unless the catalog service transaction commits first. If the catalog service is hosted in a minority partition, meaning that no quorum has been established, it still accepts “liveness” messages. The catalog servers cannot, however, accept server registrations or membership changes, because the state is essentially frozen until the catalog-service quorum is re-established.
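The majority rule behind quorum is simple arithmetic, sketched below in plain Java (an illustration of the concept, not an eXtreme Scale API). With two catalog servers in each of two data centers, losing one data center leaves only half the members, which is below a majority, so the surviving catalog servers freeze placement changes:

```java
public class CatalogQuorum {

    /** Majority quorum: strictly more than half of the configured catalog servers. */
    static int quorumSize(int configuredServers) {
        return configuredServers / 2 + 1;
    }

    static boolean hasQuorum(int activeServers, int configuredServers) {
        return activeServers >= quorumSize(configuredServers);
    }

    public static void main(String[] args) {
        // Two catalog servers per zone across two zones = 4 configured.
        int configured = 4;
        System.out.println("quorum size: " + quorumSize(configured));
        // A whole data center (2 servers) is lost; 2 remain: no quorum,
        // so registrations and membership changes are frozen.
        System.out.println("quorum after zone loss: " + hasQuorum(2, configured));
    }
}
```

This is why an odd number of catalog servers, or a tie-breaking procedure, is often preferred when quorum must survive the loss of an entire site.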
One of the main design concerns of an application and its infrastructure is session persistence and the handling of session state beyond the life of application servers or JVMs. Traditional persistence mechanisms raise performance and scalability concerns as the business, and its requirements, grow. Growth in business usually results in increased traffic, more user sessions, and proportionally larger storage requirements. Historically, IT organizations have attempted to battle this problem with innovative and complex solutions, such as employing various caching layers at the edge for static content and a separate cache for dynamic content, or by selectively preserving some HTTP session data and recreating the rest.
Such strategies, while effective in some application environments, have reached the limits of their growth. Increasingly complex and fragmented application frameworks have driven the “in-memory data grid” paradigm as a scalable answer to the growth of session storage, in terms of both size and volume. The introduction of an in-memory grid relieves the constant battle with scalability and provides an isolated layer whose sole purpose is to store HTTP session (or any Java) objects.
The eXtreme Scale technology enables this model, which is essentially a grid of interconnected JVMs acting as a single cohesive unit, much like a database, without the management and performance overheads. Using eXtreme Scale for HTTP session persistence is a non-invasive change to application architecture, as seen in Figure 3, where you introduce a servlet filter in the application. This filter intercepts requests and is connected to the grid without any significant change to the application architecture.
The eXtreme Scale-enabled grid is policy driven and self-managed, and can easily absorb growth by including additional JVMs to the existing grid. Thus, eXtreme Scale makes a compelling solution for HTTP session persistence in environments where performance and scalability challenges arise, due to growth in business application traffic.
With that in mind, eXtreme Scale not only addresses fundamental session-storage concerns, but also adds the capability of session storage across geographically dispersed data centers, independent of the WebSphere infrastructure’s cell-based management boundaries. This capability alone empowers organizations to persist and handle session state across data centers. Such a capability was neither recommended nor technically feasible with existing technologies and solutions; eXtreme Scale’s zone support sets it apart from competing caching technologies.
I want to thank Joshua T. Dettinger and Billy Newport for their help in technical content and editing of this paper.