Just the FAQs
I get a lot of questions from customers about HTTP session persistence. With Web 2.0 technologies taking off, more and more Web applications are being redesigned, and as they change, these same questions come up even more frequently. If you are involved in redesigning a Web application for Web 2.0, here are answers to some of the questions you might soon be asking.
- If I don’t need session persistence, can I turn it off?
Yes, you can configure session management to run with in-memory sessions, which is actually the default. You can still use HttpSessions in this mode, but if a failure occurs, the data stored in the HttpSessions will be lost.
- What are the session persistence options with WebSphere® Application Server Network Deployment, and what are the advantages and disadvantages of each?
The two main options for HttpSession persistence are database persistence and memory to memory replication. SipSession replication can only use memory to memory replication. In IBM® WebSphere Application Server Network Deployment, memory to memory replication uses what is called the data replication service (DRS). Additionally, the ObjectGrid (OG) feature of IBM WebSphere eXtreme Scale also offers memory to memory replication for HttpSession.
Database persistence is the most widely used option. Database persistence also performs better than memory to memory replication, if you do not count the extra hardware the database runs on. Another advantage is that this solution can handle cascading failures of application servers, which a memory to memory configuration can handle only with more than one replica. The disadvantage of database persistence is the cost of the database, particularly if the session data stored in the database must itself be highly available.
The DRS memory to memory solution is a good solution for many deployments. DRS is "best effort" memory to memory replication, which performs slightly worse than database persistence, depending on data size and number of sessions. The primary advantage of DRS is that you avoid the cost of the database. The disadvantage of DRS is that it is less reliable than the other options. Although we say that DRS is "best effort" and sessions might be lost in the event of a failure, the chances of session loss are minimal in a properly tuned system.
The third option is the OG memory to memory solution. The advantage of this solution is that it provides everything from asynchronous replication that is not best effort to guaranteed synchronous transactional replication. It is also less expensive than most database solutions. The primary disadvantage would be the added cost above WebSphere Application Server Network Deployment, even though this is less than most database license costs.
- If I need to come up with a one-size-fits-all general architecture, which do you recommend?
I would start with database persistence and investigate OG memory to memory persistence to reduce database costs. Database persistence is the most commonly used architecture, and it performs the best.
- Under what conditions can data be lost?
The most common scenario for session loss is when the write frequency is set to something like time-based writes. Any changes made to the session between the last write and a server failure can be lost. This applies to all of the session persistence options.
As I pointed out above, DRS has only a slight chance of losing a session because it is best effort. We say that DRS is best effort because it has no acknowledgements to ensure the data is received by the backup system; in a system with congestion issues, a session might not get replicated at all. When the high availability management system in WebSphere Application Server gets very busy, a failure to send the session might occur. DRS will retry the send, but it can eventually give up if the congestion persists through each retry. The number of retries and the retry interval are configurable. In the end, you can lose a session if a heavy workload causes these congestion failures for each initial try and for the retries, and if the server holding that session then fails. In that case, the session is lost and the application needs to handle the scenario.
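The retry behavior described above can be sketched as a bounded retry loop. This is an illustrative toy, not actual DRS code; the class name, the `BooleanSupplier` stand-in for the replication send, and the parameter names are all assumptions made for the example.

```java
import java.util.function.BooleanSupplier;

/** Illustrative sketch (not actual DRS code) of a bounded retry loop with a
 *  configurable retry count and interval, similar in spirit to how DRS
 *  retries a failed session send under congestion. */
public final class BoundedRetry {
    /**
     * Attempts the send up to (1 + maxRetries) times, sleeping
     * retryIntervalMillis between attempts. Returns true if any attempt
     * succeeds; false if every attempt fails, in which case the session
     * update would be lost if the server then failed.
     */
    public static boolean sendWithRetry(BooleanSupplier send,
                                        int maxRetries,
                                        long retryIntervalMillis) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (send.getAsBoolean()) {
                return true;                       // send succeeded; done
            }
            if (attempt < maxRetries) {
                try {
                    Thread.sleep(retryIntervalMillis); // wait before retrying
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;    // congestion persisted through every retry
    }
}
```

A caller would pass the actual replication attempt as the supplier; when this returns false for a session that is never successfully re-sent, that is the "best effort" loss scenario described above.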
The important thing to understand about DRS is that under non-failure conditions, the session will always be there. When a failure does occur, data that has not been written out before the failure occurred might be lost, regardless of whether the server is using database or DRS persistence. If this is a problem for a specific application, you might consider using something like the ObjectGrid cache instead of the session to store those objects. ObjectGrid provides transactional semantics around the object to ensure that if a failure occurs, you know when the change has been committed so that, if the error occurred before the change is committed, you can roll back.
- Will a loss of session affect any of the other WebSphere Application Server components?
Some parts of the application server itself depend on the session to store state, such as JavaServer™ Faces (JSF) widgets. When a session is lost, those pieces are able to recover, but with some noticeable side effects. For example, if a tree JSF widget was expanded to a certain section and a failure occurs, the next refreshed view of that tree might no longer be expanded.
- What are other recommendations for using sessions that will be persisted?
- Keep the session state small, preferably less than 4K overall.
- Use transient variables when possible, particularly when caching items in the session. A transient variable lets the application keep a copy in local memory, but rebuild the object when the backup copy is used after a failover.
- Consider using an object cache like WebSphere eXtreme Scale rather than using the session for caching. While sessions might be convenient, the APIs are not purposed as a cache and do not meet many of the needs you could encounter with an object cache.
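The transient-variable tip above can be shown with a small example. The class and field names here are hypothetical; the round-trip helper stands in for what the session manager does when it persists an attribute via Java serialization.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

/** Hypothetical session attribute illustrating the transient-variable tip:
 *  the derived value lives in a transient field, so it is not written to
 *  the persistence store and is rebuilt on demand after a failover. */
public class CartSummary implements Serializable {
    private static final long serialVersionUID = 1L;

    private final int[] itemPrices;          // small, persisted state
    private transient Integer cachedTotal;   // not serialized; rebuilt lazily

    public CartSummary(int... itemPrices) {
        this.itemPrices = itemPrices;
    }

    public int total() {
        if (cachedTotal == null) {           // null after deserialization
            int sum = 0;
            for (int p : itemPrices) sum += p;
            cachedTotal = sum;               // rebuild the local cache
        }
        return cachedTotal;
    }

    /** Round-trips the object through Java serialization, as the session
     *  manager would when persisting and restoring the attribute. */
    public static CartSummary roundTrip(CartSummary c) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new ObjectOutputStream(bos).writeObject(c);
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (CartSummary) in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because `cachedTotal` is transient, it also never adds to the serialized size of the attribute, which helps with the first tip about keeping session state small.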
- Should I use a single row or multi-row schema with database persistence?
A single row schema issues fewer database queries overall and pushes more data to the database with each update. However, a multi-row schema can be more efficient -- or even necessary -- when there are large session attributes or very few changes to the attributes. Larger amounts of data can be stored with a multi-row schema, since each attribute is stored in its own row in the database. However, performance can be worse with a multi-row schema, since gathering the attributes out of the session might require multiple queries.
- What tuning parameters are available for HttpSession in WebSphere Application Server Network Deployment?
The major session tuning options are outlined in the WebSphere Application Server Information Center. For HttpSession, a good place to begin tuning is the write frequency: how often the session manager writes to the database or to the peer server’s memory. The best performing choices write less frequently, via the time-based options. The worst performing choices are writing at the end of the servlet service method (when the servlet returns from whatever method it was called through) and the manual update (where the servlet itself calls a method on the IBMSession object to cause the write to happen). Manual update can be the most efficient means of session persistence if the attributes are infrequently updated, but it introduces the risk of the application failing to call the sync method on the IBMSession after updating an attribute.
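The difference between end-of-service and time-based writes can be made concrete with a toy model. This is not WebSphere code and not how the session manager is implemented; it just counts how many persistence writes each policy would issue for the same request stream, under assumed names and a simplified timing rule.

```java
/** Toy model (not WebSphere code) contrasting two write-frequency policies:
 *  end-of-service writes once per request, while a time-based policy writes
 *  at most once per interval, batching the requests in between. */
public final class WriteFrequencyModel {
    /** End-of-service: one persistence write per servlet service() call. */
    public static int endOfServiceWrites(int requests) {
        return requests;
    }

    /** Time-based: write only when the interval has elapsed since the last
     *  write. requestTimesMillis must be in ascending order. */
    public static int timeBasedWrites(long[] requestTimesMillis,
                                      long intervalMillis) {
        int writes = 0;
        boolean written = false;
        long lastWrite = 0;
        for (long t : requestTimesMillis) {
            if (!written || t - lastWrite >= intervalMillis) {
                writes++;            // interval expired: flush to the store
                lastWrite = t;
                written = true;
            }                        // otherwise: change is batched, no write
        }
        return writes;
    }
}
```

For ten requests arriving one second apart with a ten-second write interval, the end-of-service policy issues ten writes while the time-based policy issues one; that reduction is the performance advantage, and the batched-but-unwritten changes are exactly the data at risk if the server fails before the next flush.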
The next major piece to change is the "write contents" setting, which controls what the session manager writes out: either all of the session attributes, or only the updated attributes. Writing all of the attributes out each time should almost never be done. Aside from performing worse than writing only the updated attributes, it can also hurt portability to other platforms, because some servers support only the basic requirement, which is to persist an attribute when the setAttribute method is called.
With these answers, you should be able to decide on a session persistence strategy that best suits the requirements from both the business and application perspective. It is generally a good idea to simulate a failover under load in your test environment so you can see how not only the application server, but the application itself, reacts to the session moving from one server to another. This is the best way to catch basic issues, such as forgetting to make one of the attributes stored in the session serializable, which could cause larger problems in a production environment.
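One of those basic issues, a non-serializable attribute, can be caught in a unit test rather than during a production failover. The following is an illustrative sketch, not a WebSphere API; the class and method names are invented for the example.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

/** Pre-deployment check (an illustrative sketch, not a WebSphere API):
 *  verifies that a would-be session attribute survives Java serialization,
 *  so a NotSerializableException surfaces in test, not during failover. */
public final class SessionAttributeChecker {
    /** Returns null if the attribute serializes cleanly, or a failure
     *  message if it does not (for example, a non-serializable field). */
    public static String check(String name, Object attribute) {
        try {
            new ObjectOutputStream(new ByteArrayOutputStream())
                    .writeObject(attribute);
            return null;                         // safe to persist
        } catch (IOException e) {
            return "attribute '" + name + "' failed: " + e;
        }
    }
}
```

Running a check like this over every object your application puts in the session, as part of the failover simulation suggested above, turns a production-only failure into an ordinary test failure.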
The author extends special thanks to Bobby Goff and Rohit Kelapure for their comments and input when reviewing this article.
- Session management tuning
- WebSphere eXtreme Scale product information
- Configuring for database session persistence
- Memory-to-memory replication
- Configuring write frequency
- Redbook: WebSphere Application Server Network Deployment V6: High Availability Solutions
- IBM developerWorks WebSphere Application Server zone