IBM WebSphere Developer Technical Journal: Performance considerations for custom portal code

Performance is often a trade-off between functionality, maintainability, program execution time, and memory consumption. This document offers programming advice for maximizing the performance of portal application components, such as portlets, themes, and skins, with IBM® WebSphere® Portal.


Rainer Dzierzon, WebSphere Portal Performance Team Lead, IBM

Rainer Dzierzon is team lead of the WebSphere Portal performance team located in the IBM Development Laboratory in Boeblingen, Germany. Since 1990, he has worked on numerous software projects for several companies in the area of database performance tools, text search, SyncML, online banking, and financial services architecture and standards. He and his team work as part of the WebSphere Portal development community to provide developers with the necessary insights into performance behavior, and to consult customers and solve their performance issues. He holds a diploma in Computer Engineering from the University of Cooperative Education, Stuttgart.

Klaus Nossek, WebSphere Portal Performance Analyst, IBM

Klaus Nossek is a performance analyst on the WebSphere Portal team at the IBM Development Laboratory in Boeblingen, Germany. He received a diploma in Computer Science from the University of Stuttgart in 1996. After working for several years on various software projects, he joined the IBM WebSphere Portal division in 2003. Currently, he belongs to the WebSphere Portal performance team. His main working areas are developing performance scenarios for WebSphere Portal and Java code profiling for detailed performance analysis.

Michael Menze, WebSphere Portal Performance Analyst, IBM

Michael Menze joined IBM development in 2002 after studying with IBM for three years. He works in the Boeblingen development team for WebSphere Portal and is responsible for WebSphere Portal performance analysis, and the introduction of performance improvements.

17 August 2005


From the IBM WebSphere Developer Technical Journal.


This article provides general guidance for creating well performing custom code for IBM WebSphere Portal. Custom code refers not only to portlets (although they are the most common programming model for portals), but also to code for WebSphere Portal themes and skins. Since these are implemented using the same basic technologies as portlets, many performance considerations apply in the same way.

Regarding portlets, this article focuses on standardized portlets following the Java™ Portlet Specification, JSR 168, and the corresponding implementation in WebSphere Portal. The basis for this article is WebSphere Portal V5.1 or higher, although most guidelines and recommendations presented here will apply regardless of the version of WebSphere Portal you are running.

This article will explain how to set up and exploit the deployment parameters of a portlet application to optimize portal and portlet performance, since this is the last step in creating custom portal code. In contrast, general tuning of WebSphere Portal (that is, administrative actions that are performed after custom code has already been created and deployed) will not be covered here. Another document is available that covers WebSphere Portal performance tuning. That document combined with this one will provide an excellent resource on portals and performance.

This article is intended for programmers, designers, and architects who are involved in building custom portal applications and who want to improve their understanding of potential performance issues with regard to custom code.

WebSphere Portal environment overview

IBM WebSphere Portal is built upon the IBM WebSphere Application Server product. As a result, the programming environment for custom portal code is threefold, with important corresponding implications:

  • WebSphere Portal and all its components are Java-based programs.

    Thus, in general, best practices with regard to programming high-performance Java code should be followed.

  • WebSphere Portal is a J2EE application running on top of an application server platform.

    J2EE embraces multi-threading; J2EE containers typically take a thread-per-request approach to handle the request burden. Any implementation or performance considerations inherent in using this mechanism should likewise be taken into account.

  • WebSphere Portal provides APIs to expand portal functionality.

    Many tasks can be programmed in many different ways. Differences that affect performance should be addressed as a priority.

The next sections introduce some general performance considerations for different parts of the portal programming environment.


Clearly, this section is not intended as a complete treatment of Java performance. Instead, we present here the items we have found most useful when dealing with WebSphere Portal development, and suggest some resources that can help provide an in-depth understanding of Java performance (see Resources).

Basic Java performance

In this section, we will cover some general performance items that will apply to most Java programs. Although these recommendations may not yield order of magnitude performance improvements, they can make you aware of the importance of low-level program execution performance while in the development phase.

  • Use java.lang.StringBuffers instead of java.lang.String instances when the modification of strings is required.

    String objects in Java are immutable, whereas StringBuffer objects are mutable. Whenever text is to be appended to or deleted from a String, a new object is created under the covers and the old object is discarded. Hence we prefer this:

    StringBuffer sb = new StringBuffer("Hello ");
    sb.append(var).append(" World");

    over this string concatenation:

    String s = "Hello " + var + " World";

    You can sometimes further improve performance by setting the initial capacity of a StringBuffer; the class is designed to automatically grow when it can no longer hold all its data. However, there is a performance penalty here, since a StringBuffer must work to transparently increase its size and shift the data around. For example, if a StringBuffer is used as a collecting parameter (meaning that more and more data is added to it), you should compute the appropriate buffer size before instantiating it so that it never needs to grow in size.
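The pre-sizing described above can be sketched as follows (the buildList method and its separator handling are illustrative, not portal code; the point is that the capacity is computed before the StringBuffer is instantiated):

```java
public class BufferSizing {
    // Builds a comma-separated list; the exact capacity is computed up
    // front so the StringBuffer never has to grow and copy its data.
    static String buildList(String[] items) {
        int capacity = 0;
        for (int i = 0; i < items.length; i++) {
            capacity += items[i].length() + 2; // item plus ", " separator
        }
        StringBuffer sb = new StringBuffer(capacity);
        for (int i = 0; i < items.length; i++) {
            if (i > 0) sb.append(", ");
            sb.append(items[i]);
        }
        return sb.toString();
    }
}
```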

  • Avoid costly I/O operations in server-side programs.

    At a minimum, the executing thread is blocked during the I/O operation; if other threads must also wait for the disk, then system response times can increase easily and rapidly. Unless logging is being performed (for example, of exceptions or site access information), WebSphere Portal does not cause any disk access on its own. We will discuss I/O more later.

  • Minimize the number and length of synchronized code blocks.

    The synchronized keyword lets only one thread enter a code block at a time. The longer a synchronized code block requires for execution, the longer other threads wait to enter that block. We will discuss synchronization more later.

  • Avoid expensive calculations and method calls.

    For example, retrieving the current time information using System.currentTimeMillis() is rather expensive. If you need time information in your code, verify whether you need the exact time in every case, or (for example) if accuracy to the nearest second would be sufficient. If you have many get-time calls in your code path, but millisecond-accuracy is not mandatory, an alternative could be to determine the time at the beginning of a request and simply reuse this information throughout the request.
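A minimal sketch of this per-request approach (RequestTimer is a hypothetical helper, not a portal API; a real implementation would be created once per request, for example in the request-handling entry point):

```java
public class RequestTimer {
    private final long requestStart;

    public RequestTimer() {
        // one System.currentTimeMillis() call at the start of the request
        requestStart = System.currentTimeMillis();
    }

    // reused wherever millisecond accuracy is not mandatory
    public long now() {
        return requestStart;
    }
}
```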

  • Restrict your use of exceptions.

    In general, exceptions in Java should be used to indicate a faulty situation. Do not use exceptions to indicate the success of an operation, especially because creating the exception stack traces is a time-consuming effort for the JVM, and because the traces can be very deep in a WebSphere Portal system.

  • Take care when using the Java Reflection API.

    This API adds powerful options to dynamic code execution, but also imposes severe performance penalties in terms of method execution times in exchange for this flexibility. In general, try to avoid the use of the Java Reflection API in portal code. However, if it is necessary to have reflection calls, try to have them in init methods so that they are not executed during every request.
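For example, a reflective Method object can be resolved once and reused on every request (a sketch; the target class and method here are arbitrary stand-ins):

```java
import java.lang.reflect.Method;

public class ReflectionOnce {
    private final Method lengthMethod;

    public ReflectionOnce() throws NoSuchMethodException {
        // resolved once, as one would do in a portlet's init method
        lengthMethod = String.class.getMethod("length", new Class[0]);
    }

    public int invokeOn(String s) throws Exception {
        // the per-request path reuses the cached Method object instead
        // of repeating the (expensive) getMethod() lookup
        return ((Integer) lengthMethod.invoke(s, new Object[0])).intValue();
    }
}
```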

Memory consumption and garbage creation

While memory often is not a predominant performance issue for Java client software, it is a major concern for J2EE applications, mainly because enterprise applications are typically accessed by many users at the same time. For an application server to be efficient, the available resources, including memory, CPU and bandwidth, are shared among the clients' requests. There are three major memory issues we want to mention:

  • Keep the amount of temporary objects to a minimum.

    That means, try to reuse objects as often as possible, and do not create new object instances too often. The more objects that are created, the more frequently the JVM garbage collector has to reclaim memory and, at least partially, interrupt request handling at that time. Creating many objects also tends to increase heap fragmentation, which leads to even more garbage collection cycles. For example, do not create objects prematurely:

    String logData = "Parameter 1: " + param1;
    if (logger.isLogging(DEBUG)) {
    	logger.log(DEBUG, logData);
    }

    In this example, logData should only be created after evaluating the condition. Caching and pooling can be helpful techniques to reduce temporary object creation. To identify the parts of your code that contribute most to memory allocation issues, see Tools.

  • Keep your permanent memory footprint low.

    Do not read too much information into memory; rather, use caches to hold important information. Sometimes it is possible to change the data type for a piece of information. Date information, for example, can be held within a java.util.Date object or in a long variable. Objects typically are larger and somewhat slower to process than primitive data types; which data type is preferable might depend on the surrounding APIs and data structures. In general, a higher memory footprint leads to higher garbage collection rates and additional pause times during request processing.

  • Check your application for memory leaks.

    When leaks occur, they typically occur within Java collection classes. For example, you have a java.util.Map and, under certain conditions, data is added to the map but never removed from it. Memory leaks lead to more and more consumption of memory that is reserved by the Java heap, and the garbage collector will be able to free up less memory over time. Thus, garbage collection will occur more frequently, and, finally, the portal system will become unresponsive. To complicate matters, memory leaks are often uncovered by long-running tests only, but there are tools around that can assist you with this analysis (see Tools).

Code design for performance and scalability

There is much to remember when designing and developing code for scalability. Three techniques of particular importance are caching, pooling, and information pre-fetching:

  • Caches store already computed results.

    For example, you can retrieve information from a backend system, but rather than copy every possible object from the store to memory, just load small pieces and put them into a cache. This way, the information is available for later reference, possibly in another request, or even for another user.

    Caches almost always take the form of object maps with an upper size limit. A cache also has to have a way of knowing when something is unlikely to be asked for again so that it can be removed from the cache when appropriate. Such evictions are typically determined by a "time-to-live" (TTL) or "least-recently-used" (LRU) algorithm. Furthermore, the client using a cache cannot be confident that it will successfully retrieve an object from the cache; it must check for its existence, and then create the object if it is not found:

    Mail mail = myCache.get("myMail");
    if (mail == null) {
    	mail = readMailInformation();
    	myCache.put("myMail", mail);
    }

    (In some cases application-specific caches can be designed in such a way that they lookup the required data from some data source that is transparent to the client.)
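A cache of this kind, with an upper size limit and LRU eviction, can be sketched on top of java.util.LinkedHashMap (the raw types and the size limit are illustrative; production caches typically also need thread safety and TTL handling):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache extends LinkedHashMap {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // true = access-order iteration, needed for LRU
        this.maxEntries = maxEntries;
    }

    // called by LinkedHashMap after every put; returning true evicts
    // the least-recently-used entry
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > maxEntries;
    }
}
```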

  • Object pools are used to restrict the number of instances of a certain class.

    Often, a request requires an instance of a certain class, but this object does not (and should not) need to be recreated in every request. This is especially true in cases where object creation and initialization are expensive. Rather than accepting the performance hit, clients can request objects from a pool and then return them to the pool after finishing their use case.

    PooledObject po = myPool.get();
    try {
    	// use the PooledObject
    } finally {
    	myPool.release(po); // return the object to the pool
    }
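A minimal thread-safe pool along these lines might look like the following (SimplePool and its Factory interface are illustrative names, not a WebSphere API; a real pool would typically also enforce an upper bound on idle instances):

```java
import java.util.ArrayList;
import java.util.List;

public class SimplePool {
    public interface Factory {
        Object create();
    }

    private final List idle = new ArrayList();
    private final Factory factory;

    public SimplePool(Factory factory) {
        this.factory = factory;
    }

    // hand out an idle instance if one exists, otherwise create one;
    // this avoids repeating expensive creation and initialization
    public synchronized Object get() {
        return idle.isEmpty() ? factory.create()
                              : idle.remove(idle.size() - 1);
    }

    // clients must return instances after finishing their use case
    public synchronized void release(Object obj) {
        idle.add(obj);
    }
}
```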
  • A simple form of pool is to canonicalize an object.

    This means that all different instances of an object are created during the program initialization phase and reused and referenced later on. The class java.lang.Boolean provides an example of an object that is canonicalized. There need to exist only two different Boolean objects, preferably accessible as constants. The same can be done with other objects with fixed sets of read-only internal state.
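A sketch of a canonicalized class in the style of java.lang.Boolean (Severity is an invented example; the fixed set of instances is created once and the private constructor prevents any further instances):

```java
public final class Severity {
    // the complete, fixed set of instances, created once at class load
    public static final Severity INFO  = new Severity("INFO");
    public static final Severity ERROR = new Severity("ERROR");

    private final String name;

    // private constructor: clients can only reference the constants
    private Severity(String name) {
        this.name = name;
    }

    public String toString() {
        return name;
    }
}
```

Because every client shares the same constants, instances can be compared with == and no garbage is created per request.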

  • Do not fetch more data than you currently need to process.

    For example, in your portlet you could provide a list of e-mails; the portlet displays the subject, date, sender, and other important information. When the user selects a particular e-mail, the body of the e-mail displays. The body is not needed until the specific item is selected in the portlet, so retrieving it any earlier would be a waste of execution time and memory resources. This pattern applies to many situations. The general rule is to compute and retrieve only those pieces of information that have direct significance for the current request and response.


IBM WebSphere Application Server is the J2EE implementation upon which WebSphere Portal is built. Many of the performance considerations in the subsequent section apply due to the J2EE runtime context. The following sections describe performance items that are only applicable to IBM WebSphere Application Server. The items listed below are described here at a high level, and will be explained in more detail later in this article. For a more general discussion see Resources.

J2EE standard

The J2EE standard specification contains a number of items with performance implications:

  • init methods, available for many J2EE resources -- and also for portlets -- should be used to pre-calculate everything that will be used later on and that will not change. For example, JNDI lookups for common resources such as a data source should be performed only once at initialization time. Also, reading data from certain read-only files should be done just once during initialization of a portlet. You can scan your portlet service methods for any code that is executed in the same manner on every request and move it to the init method to reduce the run time cost for the service methods.

  • EJB components and sessions are important and powerful concepts within J2EE, but either of these can have severe performance implications if not used wisely. For example, applications should not put too much data into sessions to reduce the memory footprint of the server and to make session persistence easier and faster. Regarding EJB components, you should become familiar with the different persistence types associated with remote and local invocation, and so on. Some features available for EJBs come with a high performance penalty.

WebSphere Application Server

WebSphere Application Server products provide features to assist developers and architects in designing high-performance systems. (See the Information Centers for WebSphere Application Server and WebSphere Business Integration Server Foundation in Resources.)

  • As mentioned earlier, database connections are expensive to create. As defined in the J2EE standard, application servers can provide a pooling facility so the connections do not need to be recreated with every incoming request. WebSphere Application Server provides such a pooling facility together with some additional performance helpers, like a statement cache for frequently executed SQL statements. However, failing to return the connection immediately after completing the database interaction results in making the connection unavailable to other requests for significant periods of time. Using the WebSphere Application Server administrative console, you can control connection pools as properties of data sources to a JDBC database, and can define, for example, a minimum and maximum number of connections for the pool. (See the WebSphere Application Server Information Center for more information.)

    In the following example, a portlet makes use of JDBC connection pooling and leverages the prepared statement cache as provided by WebSphere Application Server:

    . . . 
    public class IDontCare extends GenericPortlet {
       private javax.sql.DataSource ds;
       public void init() throws javax.portlet.PortletException {
          try {
             Hashtable env = new Hashtable();
             env.put( Context.INITIAL_CONTEXT_FACTORY,
                "" );
             Context ctx = new InitialContext( env );
             ds = (javax.sql.DataSource)ctx.lookup( "jdbc/MYSHOES" );
          } catch (Exception any) {
             // handle exceptions here
             . . . 
       . . . 
       public void processAction ( 
          ActionRequest request, 
          ActionResponse response
       ) throws PortletException, IOException {
          . . . 
          try {
             Connection con = null;
             ResultSet rs = null;
             PreparedStatement pStmt = null;
             con = ds.getConnection ( dbuser, dbpasswd );
             pStmt = con.prepareStatement(
                "select * from myscheme.size_of_shoes");
             rs = pStmt.executeQuery(); 
             . . . 
             // release the resources when they are no longer used
             if (rs    != null) rs.close();
             if (pStmt != null) pStmt.close();
             if (con   != null) con.close();
          } catch (Exception any) {
             // handle exception here
             . . . 
  • WebSphere Application Server also supports the general concept of object pools, with each pool consisting of a pool manager providing access to pools for different class types. Such pools can be queried for an instance of the class type, as given in the previous example about pools. See the WebSphere Business Integration Server Foundation Information Center for more information.

  • WebSphere Application Server also provides a "general purpose" cache. In the admin console, you can define cache instances that your applications can use to store, retrieve, and share data. Unlike the default shared dynamic cache, which is used by the portal to cache objects, a cache instance is only accessible to those applications that know its JNDI name. The DistributedMap interface is the programming interface that your applications work with, enabling the application to get and put objects into the cache instance, as well as invalidate them. See the WebSphere Business Integration Server Foundation Information Center for more information.

    If portlets make use of a caching implementation, they should look up or instantiate a cache instance in their initialization phase and keep a reference to that cache so that cache entries have a lifetime that is potentially longer than a single request. While processing the action and the render phase of a portlet, entries can be put into and retrieved from a cache. The portlet implementation needs to make sure that there is proper backend access and cache update handling if the cache does not return the data when queried with a certain key. Also, be aware that scoping of keys into the cache (for example, per user session) might be required for proper function of the intended design. A cache is typically a self-managed unit that can evict or invalidate entries dependent on the cache implementation. Note that, for the same reason, a cache is not suitable to communicate information back and forth between several pieces of code. A cache should also maintain a reasonable upper size limit to avoid memory over-utilization in custom code.

Portal APIs

WebSphere Portal supports two different portlet APIs:

  • IBM Portlet API, which extends servlets.
  • JSR 168 Portlet API, which is defined by the Java Community Process (JCP).

In this article, we focus on the JSR 168 Portlet API.

WebSphere Portal provides a variety of interfaces for integrating your portlets into the WebSphere Portal environment. As such, portlets should be carefully designed to take advantage of portal features. Be sure to apply best practices (such as those listed in Resources) so that your application of WebSphere Portal APIs is appropriate.

Common implementation considerations

In this section, we look at performance topics that are relevant for theme and skin programming, as well as portlet development.


JavaServer Pages (JSPs) are one of the cornerstones of portlet programming. In most portlets, using Model View Controller (MVC) terminology, JSPs function as the view component. JSPs are composed of a mixture of HTML (or other markup languages) and Java code; their processing output is also a markup language, in most cases HTML. In their purest form, JSPs do not contain any Java code, but only custom tags that are called to perform non-HTML operations. (Conversely, it is also possible to have virtually no HTML content in a JSP file.)

  • Upon the very first access to a JSP file, the file is parsed and translated into a regular Java servlet source file, which is then compiled into byte code. Hence, the very first request to a JSP is typically slow due to the two subsequent conversions (from JSP to Java source to byte code), but then it works like any other servlet for all future requests.

    This is different from another approach to generating HTML content: XML and XSLT. In that case, upon every request the XML has to be parsed and style sheet transformations have to be applied; only caching the results and not re-running the transformations upon every request can preserve acceptable performance. Hence, from a performance point of view, JSPs should be preferred over XML/XSLT. Furthermore, the portal infrastructure is optimized around JSPs, enabling easy expansion into other markups, languages, and browser support.

  • Application servers execute JSPs similarly to how they execute regular servlets. Nonetheless, servlets resulting from JSP compilation contain generated code, which is, in general, less optimized for performance than handcrafted code. If performance is very important for a certain JSP and you cannot achieve your goals with generated code, consider writing the markup into the output stream by yourself.

  • Java code fragments in JSPs are called scriptlets. Since JSPs are converted to Java source code anyway, there is no real performance penalty associated with using scriptlets. Some optimizations in the latest version of WebSphere Application Server apply to JSP files in cases where a JSP file does not contain any scriptlets. In general, you should not put scriptlet code into your JSPs, and instead use tags for those tasks.

  • JSPs can include other JSPs. That means that a single JSP does not have to answer the complete request; you can split the response into multiple JSPs and include others from a parent JSP. There are two different forms of inclusion, static and dynamic:

    • Static JSP includes are resolved at compile time. The JSP compiler includes the referenced file instead of the include statement. This option is generally very fast and adds no run time overhead at all.

      <%@ include file="include2.jsp" %>
    • Dynamic JSP includes are resolved at run time, which is not an inexpensive undertaking. Resolving the correct JSP to which to dispatch is quite expensive in terms of garbage creation and execution time. For example (inside a JSP):

      <jsp:include page="include2.jsp" flush="true" />

      Dynamic inclusion in JSPs is similar to using javax.servlet.RequestDispatcher when including other files from servlet code. Therefore, wherever possible, you should use static includes. Dynamic includes offer the highest flexibility, but come with a significant performance overhead if used too often.

EJB usage

Enterprise JavaBeans (EJB) define a component-based architecture for building scalable, transactional, distributed, and multi-user business applications. EJB components are designed to encapsulate the business logic while hiding all complexity behind the bean and built-in EJB container services.

The support for the variety of functions frequently required by enterprise applications does not come for free, as there is a certain amount of performance overhead that needs to be taken into account when using EJBs.

  • A portlet can obtain EJB references through JNDI lookups, which tend to be expensive with regard to performance. For example, if a portlet does not cache the reference to the EJB home interface somewhere, then every logical call to the EJB requires two remote calls: one to the naming service, and one to the actual object. To rectify this situation, use caching techniques to reduce or eliminate repeated lookups of EJB home references.

  • EJB components expose remote and local interfaces. EJBs that are location-independent use a remote interface. Method parameters and return values are serialized over RMI-IIOP and returned by value. Remote methods must be designed to satisfy data needs according to the usage pattern of the API. Use a granularity of methods and data types in the API that fits well with the use cases of the interface to minimize serialization costs.

  • Minimize the number of remote calls to reduce the overhead imposed by remote calls in your code path. Use session beans acting as remote facades to wrap complex interactions and to reduce remote calls between portlets and domain objects. A portlet accessing a remote entity bean directly usually results in several remote method calls. If you use entity beans in this context, avoid giving them a remote interface. Instead, session beans acting as facades can access entity beans via their local interfaces, gather data from them, and then return this information to the calling application.

    The concept of local interfaces works when the calling client (such as a session facade) shares the same container as the called EJB. The use of local interfaces reduces inter-process communication costs by eliminating the overhead of a distributed object protocol. Local calls do not go through the communication layer and any objects can be passed by reference.

  • Transaction management supported by the EJB container can also affect performance. After developing an EJB, the programmer must set deployment descriptors that define characteristics, such as transaction support and isolation levels for the EJB. Set the transaction type to NotSupported if no transaction is required.

  • The transaction isolation level is the degree to which the underlying database exposes changed but uncommitted data to other transactions. For the best performance, use a liberal isolation level. Letting uncommitted data be seen by other transactions, however, can result in unexpected side effects, such as clashing updates and inconsistent reads. For instructions on how to set isolation levels, see the WebSphere Application Server V5.1.x Information Center.

See the IBM white paper WebSphere Application Server Development Best Practices for Performance and Scalability and the IBM Redbook IBM WebSphere V5.1 Performance, Scalability, and High Availability WebSphere Handbook Series for additional recommendations, as well as the justification for each recommendation.

Markup size

Markup size refers to the number of bytes to be transferred from the portal server to the client for a completely rendered portal page. From a portal server point of view, the most important part is the size of the HTML page containing the resulting markup. Other files, like stylesheets, images, or JavaScript, must be transferred to the client as well. Since the static files are typically stored outside the portal system on HTTP servers or proxy caches, we will concentrate here on the "real" HTML markup size.

Why do we bother at all about markup size? Within company intranets, there are probably fewer issues with network bandwidth, but if users are connected via modem or other low-bandwidth network connections, lengthy download times for large HTML responses can be very annoying.

Here is a short calculation. Let's assume that a server or cluster is processing up to 100 requests per second, and that the average HTML page size is 100 KB, which may seem like a lot, but can easily be reached if you have a complex theme and several portlets on the page. For the server, this means that it has to put about 10 MB/sec on the wire (100 KB * 100 pages/sec). This is about the maximum traffic a 100 Mbit/sec network can handle; you cannot expect an Ethernet to support 100% of its potential capacity, and the inbound traffic is typically not negligible. For users connected to the portal via 56K modem, the download time for one page would be in the area of 15 seconds!

How large is too large? This is difficult to answer in general. However, more than 100 KB per HTML page is probably too much. Also, keep in mind that smaller devices have restrictions on the markup size they can handle per request.

The main contributors to markup size are the theme and the portlet output. Since all portal JSPs are customizable, you can influence how compact your markup is at the end. Here is what you can do to limit the size of your markup:

  • Use JSP comments instead of HTML comments inside your JSPs.

    Comments of the form <%-- ... --%> will be deleted by the JSP compiler, while comments of the form <!-- ... --> are kept and transported over the wire.

  • Try to reduce the amount of white space, tabs and line breaks inside your JSP source files since these are retained by the JSP compiler.

    This may reduce legibility of the code. It can be helpful to have nicely laid out code in development, and then have it processed and stripped of its formatting with a tool before the JSP files are applied to your production environment.

  • Try to avoid sending the same information to the client several times.

    For example, style definitions should go into separate CSS files. The same is true for JavaScript code. Furthermore, these separate files, as they typically do not change, can be cached in browser or proxy caches, reducing network traffic even further.

  • If your environment is set up for it, you can also send compressed markup to the client using HTTP compression.

    Refer to your Web server's and your clients' documentation for more information.

Logging, tracing, and I/O

Logs usually end up on the hard disk at some point. From a performance perspective, anything interacting frequently with the disk presents a potentially expensive operation, so it is best to minimize the use of the Java I/O library in the production environment. Since the I/O is usually provided through the use of some native library layered beneath Java programming, there is some default overhead incurred. An operation like System.out.println synchronizes processing during file I/O, which significantly impacts performance.

In development and test, you may want all logging and debugging to be active, since it can be essential for finding errors. When you deploy your application to a production environment, however, leaving extensive logging enabled is not a viable option. It is good practice to guard log statements so that they run only in error and debugging situations. You can do this with a final boolean constant which, when set to false, lets the compiler optimize away both the check and the logging code as dead code:

static final boolean LOGGING = false;
if (LOGGING) {...}

The Java language provides two types of streams: readers/writers and input/output streams:

  • Readers and writers are high-level interfaces that support Unicode characters in I/O operations.
  • Input and output streams provide data access at a much lower level: the byte level.

There is a performance overhead involved with readers and writers because they are intended for character streams and encode characters to bytes under the covers. Use input/output streams whenever you manipulate binary data.

To maximize I/O performance, buffer your read and write operations. If you write large amounts of data from a portlet, it is usually better to flush the buffered data in parts rather than all at once in a single operation. On the other hand, do not flush the buffer too often, since every flush forces a write to the underlying stream.
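As an illustration of buffered writes with occasional flushes, the following sketch writes chunks through a BufferedOutputStream and flushes only after roughly 32 KB have accumulated; the buffer and flush sizes are arbitrary values chosen for the example:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class BufferedWriteDemo {

    public static byte[] writeChunks(byte[][] chunks) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Buffer writes so small chunks do not each hit the underlying stream
        BufferedOutputStream out = new BufferedOutputStream(sink, 8 * 1024);
        int bytesSinceFlush = 0;
        for (byte[] chunk : chunks) {
            out.write(chunk);
            bytesSinceFlush += chunk.length;
            // Flush in larger parts, not after every single write
            if (bytesSinceFlush >= 32 * 1024) {
                out.flush();
                bytesSinceFlush = 0;
            }
        }
        out.flush(); // final flush pushes any remaining buffered bytes
        return sink.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[][] chunks = { "hello ".getBytes(), "world".getBytes() };
        System.out.println(new String(writeChunks(chunks))); // prints "hello world"
    }
}
```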

Synchronization and multi-threading

The Java mechanism used to coordinate access to shared objects is called synchronization. The synchronized statement enables only one thread at a time to enter a code block.

  • During the lifetime of a portlet, the container sends service requests in different threads to a single portlet instance. Avoid synchronization within a portlet because it has a significant performance impact: synchronization reduces concurrency, since only one thread is allowed to run at a time in a synchronized block of code and all the concurrent threads are queued. There is also performance overhead caused by managing the monitors that the Java virtual machine uses to support synchronization. Besides the performance impact, there is also the possibility of deadlocks that can potentially freeze the single portlet or -- even worse -- the whole portal. Deadlock prevention is the programmer's responsibility, because the monitors do not support any deadlock resolution.

  • In cases where synchronization is necessary, the synchronized code block should be minimized. It is crucial to accurately identify which code block truly needs to be synchronized and to synchronize as little as possible. If it is not small enough, you should analyze your code and refactor it in such a way that anything that could run asynchronously is located outside of the synchronized block.

  • Some J2SE functionality indirectly uses synchronization. The original Java collection classes, like Vector and Hashtable, are fully synchronized; programs pay the cost of thread synchronization even when these classes are used in a single-threaded environment. Newer collections introduced in Java 1.2, like ArrayList, are not synchronized, which provides faster access to the data. In situations where you know you need thread safety, use a thread-safe view. Thread-safe views are wrapper classes that synchronize the methods of the standard collection. The factory methods of the Collections class return a thread-safe collection backed by an instance of the specified collection type:

    List list = Collections.synchronizedList(new ArrayList());
  • Another example of indirect synchronization is the Java I/O libraries. Minimize the use of Java I/O library methods (for example, System.out.println()) to reduce unnecessary performance overhead.

  • Do not spawn unmanaged threads from portlets. The J2EE specification strongly discourages spawning new threads in the container. In fact, the programming restrictions in section 6.2.1 of the J2EE specification state:

    "If application components contain the same functionality provided by J2EE system infrastructure, there are clashes and mis-management of the functionality. For example, ... to manage threads, ..."

    A practical reason for not trying to spawn new threads is that new threads do not have full access to the J2EE context. Further, newly created unmanaged threads undermine the goal of WebSphere Portal to achieve a stable, optimized and scalable run time environment. Therefore, use the asynchronous bean feature (See WebSphere Application Server Enterprise V5 and Programming Model Extensions WebSphere Handbook Series) in WebSphere Application Server. An asynchronous bean is a Java object or enterprise bean that has the ability to submit code to be run on a separate thread -- and asynchronously -- using the J2EE context.
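To illustrate the second point above, minimizing the synchronized block, here is a small sketch in which the expensive computation runs outside the lock and only the shared-state update is synchronized (the class and field names are illustrative):

```java
public class MinimalSyncDemo {
    private final Object lock = new Object();
    private long total;

    public void add(int[] values) {
        long localSum = 0;
        for (int v : values) {
            localSum += v;        // pure local computation; no lock held
        }
        synchronized (lock) {      // critical section kept as small as possible
            total += localSum;
        }
    }

    public long total() {
        synchronized (lock) {
            return total;
        }
    }

    public static void main(String[] args) {
        MinimalSyncDemo demo = new MinimalSyncDemo();
        demo.add(new int[] { 1, 2, 3 });
        System.out.println(demo.total()); // prints 6
    }
}
```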


The portlet programming model enables developers to create Web applications that become part of an aggregated view of several such applications in a client browser. In WebSphere Portal, such applications can not only coexist on a page (that is, the aggregated view), they can also communicate with each other while the page is constructed. Thus, the implementation of a portlet can influence the overall perceived performance of a page. If a specific "critical" portlet resides on a page, it may be worthwhile to invest some effort in optimizing that portlet. You should also take a closer look at portlets that reside on performance-critical pages.

Backend access

Fully self-contained portlets are rare in real-world portals. Such portlets are typically used as an addition to or helper tool for a Web site; for example, a pocket calculator. These portlets can be optimized only in their local code execution path and should not put much burden on a running portal system.

A more typical usage of portlets is to offer application functionality that requires access to other data sources or transaction systems that also require execution resources apart from the system where the portlets are originally running. Data is potentially retrieved from and stored on other backend systems over a network. The transaction length, isolation level, and data locking that may occur on the backend system needs to be considered in the overall system design.

Be aware that a single portlet is probably not the only client of a backend system. In fact, there are many clients to such a system in the real world -- and even a single portlet can access the same backend system several times simultaneously. A portlet might execute its code in separate application server threads for different user requests. Therefore, it is worth investigating the access patterns and how transactions and locks acquired by a portlet or other clients can potentially influence the average response time of such a backend system.

If a portlet requires intensive backend system access during the action or render phase, the response time (for finishing these phases) more and more depends on the responsiveness of the backend system. (Waiting for responses from outside the portal server to satisfy incoming requests will introduce latency that cannot be recovered by optimizing the execution path of the portlet code.) A good design for communicating with the backend and an understanding of transactional behavior is often more promising.

To avoid situations where a portlet -- and the page on which it appears -- fails to respond because of a collapsed backend system, it can be good practice to incorporate timeouts into your code; be aware, though, that managing and tracking time stamps introduces some processing overhead. If the parallel portlet rendering feature in WebSphere Portal is used (discussed later), timeouts are configurable for the parallel render threads.

It is also a good programming practice to reduce the interaction and data traffic to such external backend systems, where possible. To achieve this, portlets can cache information if the freshness criteria of the information permits you to do so. This may reduce the roundtrips for fetching the same data over and over again for each incoming request to WebSphere Portal. It also helps to lower the load on the backend system, since it is not involved in providing the same information so many times. Also, if the data does not need to be transferred over the network, the portlet can potentially be rendered more quickly.

Another way of avoiding roundtrips to the backend system is to retrieve more data than is actually required to fulfill the current request, but that is known to be needed by a request that would otherwise follow. With this approach, however, we still advise against general prefetching if it is not known whether the prefetched data will actually be needed in subsequent requests. A good understanding of typical user interactions with the portlet application is needed for a proper design of this nature. Keep in mind that retrieving data in advance has an impact on the memory usage of the portal JVM. (See Code design for performance and scalability.) Such design approaches might require an interface change in the backend system, but they can save enough processing time to make such a change worthwhile.
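A batched-retrieval design of this kind might look like the following sketch; the NewsBackend interface, the batch size of five, and all names are hypothetical stand-ins for a real backend API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PrefetchDemo {

    // Hypothetical backend interface: one call fetches a whole batch of items
    public interface NewsBackend {
        List<String> fetchHeadlines(int offset, int count);
    }

    private final NewsBackend backend;
    private final Map<Integer, String> cache = new HashMap<>();

    public PrefetchDemo(NewsBackend backend) {
        this.backend = backend;
    }

    // Fetch the requested headline plus the next few the user is likely to page to
    public String headline(int index) {
        if (!cache.containsKey(index)) {
            List<String> batch = backend.fetchHeadlines(index, 5); // one roundtrip, 5 items
            for (int i = 0; i < batch.size(); i++) {
                cache.put(index + i, batch.get(i));
            }
        }
        return cache.get(index);
    }

    public static void main(String[] args) {
        PrefetchDemo demo = new PrefetchDemo((offset, count) -> {
            List<String> batch = new ArrayList<>();
            for (int i = offset; i < offset + count; i++) {
                batch.add("headline-" + i);
            }
            return batch;
        });
        System.out.println(demo.headline(0)); // triggers one batched backend call
        System.out.println(demo.headline(1)); // served from the local cache
    }
}
```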

For caching, WebSphere Application Server offers a dynamic caching feature with its DistributedMap interface to portlets. (See WebSphere Application Server 5.1 Information Center for more information.)

The session and other data stores

Keeping and maintaining data for a portlet that should have a lifetime longer than a single request is a typical portlet programming task. Using PortletSession is often the first approach that is considered. While PortletSession is convenient to use from a programmer's perspective, it requires resources for managing sessions from an application server perspective. The problem can get worse if the session contains more and more data and thus requires more memory.

If the session is configured to be persistently stored in a database, or is configured for memory-to-memory replication (that is, WebSphere Portal is configured for failover in a clustered environment), then the session may become serialized any time its contents change.

The time it takes to serialize and deserialize session data when it is written to a remote copy can become considerable. In rare cases, some objects stored in a session can be marked transient. This reduces the serialized size of a session, but not the in-memory size, which also affects how efficiently an application server can handle sessions.

Large session objects decrease the JVM memory available for creation and execution of application objects. As a result, performance can degrade as the decreased available heap memory leads to more frequent garbage collection.

Another factor is that the in-memory lifetime is always longer than the required usage lifetime, and thus the number of sessions occupying space in the Java heap is usually greater than the number of active users. A session expiration time is configurable in WebSphere Application Server and is indeed required to avoid the case where a user has to log in again after only a few seconds of inactivity. The release of a session is the responsibility of WebSphere Application Server and the portlet container.

The serialized session size should be smaller than 4 KB because WebSphere Application Server can store such sessions with an acceptable database performance overhead, and it takes less time to transfer such sessions over the network. If the session size grows beyond 32 KB, the database must use table cells configured for binary large objects, which require physical disk access (for most supported databases) if such a session is retrieved or written to the database.
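You can check where a session attribute falls relative to these thresholds by serializing it, just as the session manager would; here is a minimal sketch using standard Java serialization (the sample cart contents are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class SessionSizeCheck {

    // Measures how many bytes a value contributes to a serialized session
    public static int serializedSize(Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(value);
        out.close();
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        ArrayList<String> cart = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            cart.add("item-" + i);
        }
        // Compare the result against the 4 KB and 32 KB thresholds discussed above
        System.out.println("serialized size: " + serializedSize(cart) + " bytes");
    }
}
```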

As a first consequence, the creation of sessions should be avoided wherever possible from an application point of view. On most public and unauthenticated pages, sessions are not usually required. Interacting with a portal on such a page is possible via so-called render links which, by definition, do not change the server side state. Render parameters are maintained by the portal for each portlet for all subsequent requests to that page. To avoid having a JSP create a session by default, the page session directive in the JSP should be set to false:

<%@ page session="false" %>

Otherwise, this JSP will create a session if one does not exist.

The following Java code fragment shows how you can make sure that an incoming request joins an existing session, rather than unconditionally creating a new one:

PortletSession session = request.getPortletSession(false);

With the parameter value of false, a session is not created if none existed before. If a session did not exist before, it is probably not appropriate to create one in a portlet just for the purpose of storing data in it.

As a second consequence, the session should not be misused as an all-purpose data store mechanism. Remember that the goal is to keep the session size as small as possible. If keeping some data in memory is advantageous due to the design of a portlet, then a cache might be the right answer. Cache entries can be scoped with the session ID to keep a relationship between the session and the data that is to be kept in memory. Keep in mind that this kind of cache will not be cluster aware in case of a failover; this is sometimes an acceptable trade-off. If the data is recreatable from other data available to the portlet, then the session scope requirement of cached entries is questionable.

In many cases, storing large objects in the session can be circumvented by just storing a key in the session and using this key as a reference to lookup a larger object in some other data structure. Another option would be to use a more compact representation of the same information and put that object into the session.
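The key-in-session pattern can be sketched as follows; the Map stands in for the PortletSession attributes, and REPORT_CACHE is a hypothetical application-scoped cache (in WebSphere Application Server this role could be played by a DistributedMap):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionKeyDemo {

    // Hypothetical application-scoped cache holding the large objects
    static final Map<String, byte[]> REPORT_CACHE = new ConcurrentHashMap<>();

    // The session carries only a small key; the large object is looked up elsewhere
    public static byte[] lookup(Map<String, Object> session) {
        String key = (String) session.get("reportKey");
        return REPORT_CACHE.get(key);
    }

    public static void main(String[] args) {
        Map<String, Object> session = new HashMap<>(); // stands in for PortletSession
        byte[] largeReport = new byte[512 * 1024];     // stays out of the session
        REPORT_CACHE.put("report-4711", largeReport);
        session.put("reportKey", "report-4711");       // only the key is session data
        System.out.println(lookup(session).length);    // prints 524288
    }
}
```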

Furthermore, the portlet design needs to carefully consider what is actually stored in a session. The session is generally intended only for storing the conversational state of the user interaction with the portal application (for example, the contents of a shopping cart in a Web shop portlet). This kind of data is user specific and cannot be recreated by any other means. In WebSphere Portal, this type of data handling is called the session state.

If session state is not really required, there are other data storage options available for portlets:

  • During the action phase of a portlet, render parameters can be set for the portlet's subsequent render phase. Render parameters are used by a portlet to render its view specific to a specific set of values. Render parameters are maintained by the container from request to request, even if interaction occurs with another portlet. In WebSphere Portal, this type of data handling is called the navigational state.

  • The PortletPreferences API can be used for storing data for a portlet if such data will be kept across user sessions. Keep in mind that this API is not intended to replace general purpose databases. In WebSphere Portal, this data handling concept is called persistent state.

  • The PortletConfig API lets a portlet read its configuration, which is provided by the developer using the portlet deployment descriptor; this is valid for all users of the portlet.

  • The PortletContext API enables the storing of attributes that other portlets in the same application can also access.

Consider other choices than the session for storing data created and used by a portlet. Avoid replicating data into the session that can be recreated from sources other than through user interaction.

Render links and action links

There are advantages to using render parameters other than just to address a specific portlet view.

If an action parameter for a portlet is detected by WebSphere Portal, then special action phase handling must be invoked, making it advantageous to avoid using action parameters. However, be aware that processing render links must not change the server side state of a portlet. The only sanctioned way to change the server side state is to use action links -- and for a transactional type of request, action links are the best choice.

There are many instances where render links can be used instead of action links. For example, consider a newspaper portlet that can show specific pages with the use of Previous and Next buttons. Stepping through the newspaper pages does not necessarily change the server side state, which in this case is the entire information contained in the newspaper. To address the next page of the newspaper, it would be sufficient to encode the next page number into the render link for the shown button. The portlet can decide which page to render based on the page number given in a render parameter.
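The page-selection logic of such a newspaper portlet can be sketched as a plain helper method; it mirrors what a doView() method would do with a render parameter named "page" (the parameter name and the default of 1 are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class NewspaperPager {

    // Decides which newspaper page to render from the render parameters;
    // the server-side state (the newspaper content) is never changed
    public static int pageToRender(Map<String, String[]> renderParams) {
        String[] value = renderParams.get("page");
        return (value == null) ? 1 : Integer.parseInt(value[0]);
    }

    public static void main(String[] args) {
        Map<String, String[]> params = new HashMap<>();
        params.put("page", new String[] { "7" }); // the Next button encoded page 7
        System.out.println(pageToRender(params)); // prints 7
    }
}
```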

Also, using render links over action links enhances the chance of leveraging a cache infrastructure, be it a browser cache or a proxy cache, since each rendered view is addressed by a separate URL. The URL is the only key used to access a specific generated view in such a cache infrastructure.

Portlet features

The next sections discuss some of the portlet tuning features available with WebSphere Portal that should be considered by developers, and could influence the implementation technique chosen. Some required settings need to be provided with the portlet's deployment descriptor, and since these items are also supplied by a portlet developer, they are therefore considered custom code.

Enabling portlets for parallel rendering

WebSphere Portal offers the option to have portlets on a page rendered in parallel. This feature is not completely "for free" because computing resources are required to maintain and manage the different threads that are used to render each single portlet.

Parallel portlet rendering may be advantageous in cases where many backend systems are involved that each produce their own latency while rendering a single page. For example, consider a portal page that contains a number of portlets, each of which accesses a different backend system. In serial rendering mode, the overall latency for retrieving the required data from all the backend systems would have to be calculated as the sum of the individual latency times. In parallel rendering mode, the latency would be the maximum of all individual latency times.

If portlets do not use a backend system too often, the overhead for enabling parallel portlet rendering can become greater than the benefits gained from this feature. If portlets on a page can be rendered independently of a backend system, they only need CPU resources local to the portal server machine. In this case, the page render response time will not be improved.

Parallel portlet rendering can be enabled for each portlet separately through the graphical UI, the deployment descriptor, or WebSphere Portal's XML access interface. In addition, a global property turns parallel portlet rendering on and off for the whole portal.

To properly answer the question of whether or not to enable parallel portlet rendering for a portal, there are several things to consider; for example, the number of backend systems involved for rendering a page, typical page structure, the average number of portlets on a page that exploit parallel portlet rendering, and so on. Such questions may not necessarily be answerable in advance by a portlet developer, but certainly the developer can make sure that a portlet is enabled for parallel portlet rendering if that makes sense up front.

Caching in the portlet container

Portlet-based Web pages are aggregated dynamically because of their ability to deliver dynamic content in a personalized manner. This flexibility comes at a cost. Web site response time increases because of the additional work that has to be done to generate these pages upon request.

New caching technologies improve the performance of dynamic page generation and reduce system load. WebSphere Portal supports fragment caching (also known as servlet caching) using the WebSphere Application Server dynamic cache to keep portlet output in the cache. Requests for a cached portlet retrieve the content from the cache instead of the portlet. The invalidation of the fragment cache can be accomplished by specifying the expiration time in the deployment descriptor. Further, the fragment cache entries are invalidated during the action phase of the portlet.

There is no time-consuming installation and integration work needed to activate fragment caching. The cache is enabled and disabled using simple XML deployment descriptor files and using the WebSphere Application Server administrative console. (See the WebSphere Portal Information Center for details on enabling the servlet caching in WebSphere Application Server.)

To make use of expiration-based caching, portlets must define the duration of the expiration cache in the deployment descriptor portlet.xml (for standardized portlets following the JSR 168 specification):

  • A positive number defines the number of seconds a cache entry exists in the cache.

  • A value of -1 indicates that the portlet cache never expires.

  • A value of 0 indicates that caching is disabled for the portlet.
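In portlet.xml, the expiration value is declared with the JSR 168 <expiration-cache> element; a minimal sketch follows (the portlet name is hypothetical, and other required elements are elided):

```xml
<portlet>
   <portlet-name>NewsPortlet</portlet-name>
   <!-- Cache the rendered output for 60 seconds per user -->
   <expiration-cache>60</expiration-cache>
   <!-- further portlet elements ... -->
</portlet>
```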

Cached entries are not shared across users of the same portlet; this caching is per portlet, per user.

For a JSR 168 portlet that has defined an expiration cache in its deployment descriptor, the portlet window can modify the expiration time at run time by setting the EXPIRATION_CACHE property in the RenderResponse, as follows (the value of 300 seconds is illustrative):

renderResponse.setProperty( RenderResponse.EXPIRATION_CACHE, "300" );
This approach works well for complex portlets that spend significant computation time calculating their response, or that request data from a backend such as EJB components or a database. For simple portlets, fragment caching should not be enabled: WebSphere Portal uses extra execution resources to calculate the internal cache key for the fragment cache, and performance can regress when the cache key calculation becomes more expensive than simply recalculating the portlet response.

Fragment caching is not useful for portlets that are truly dynamic in nature; for example, real-time-based portlets that need to collect current data from other data sources on each request, or portlets that change their response markup on every request. This would result in a high number of cache invalidations and hence there would be no gain in performance. Therefore, the portlet should be enabled for caching only if the output of the portlet will be valid for some period of time before it is updated.

Caching in remote caches

With the unique adaptive page caching feature, WebSphere Portal offers the possibility of dynamically caching generated pages in caches external to a portal server (so-called remote caches), if all the page components indicate that they are cacheable. If completely rendered pages are served from remote caches, a roundtrip to the portal server can be avoided and response times for such pages can potentially be as fast as if they were served from static Web sites.

For more complete details on remote caching see the article Develop high performance Web sites with both static and dynamic content using WebSphere Portal V5.1.

Portlets (as well as themes) can contribute their specific remote cache information to the overall remote cache information for a completely rendered page. The remote cache information is a data structure consisting of the information about the cache scope (whether it is cacheable in a shared or non-shared cache) and the expiration time (how long the content is considered valid). The remote cache information of a portlet can be provided via the deployment descriptor or via the WebSphere Portal GUI. Beyond that, a portlet is also able to provide remote cache information at render time for each portlet window, as illustrated in the following code:

. . .
import javax.portlet.RenderResponse;
. . .
/* Do rendering */
public void doView(RenderRequest renderRequest, RenderResponse renderResponse)
   throws PortletException, IOException {
   /* Some code might happen here */
   . . .
   /* Publish a dynamic expiration time during rendering */
   . . .
   /* Publish a cache scope value of "shared" during rendering */
   renderResponse.setProperty(
      RemoteCacheInfo.KEY_SCOPE, RemoteCacheInfo.Scope.SHARED_STRING );
   /* Some other code might happen here */
   . . .
}

How you set remote cache information depends on the "freshness" requirements and the scope of the rendered view. Keep in mind that if rendered pages are served from caches, the request might not even get to the portal server.

Custom portlet developers should consider exploiting remote caching if caching is available in the infrastructure.

Themes and skins

In portal terminology, themes are sets of JSPs that determine the look and feel of your portal application. Since themes are made of JSPs, the tips offered in the JSP section also apply here. This section details some possible performance pitfalls with the set of JSP files that get combined into a theme.

Typically, a theme consists of many different JSP files, each delivering the content for a certain area of the screen. While it is possible to dynamically include JSPs, it is common -- and generally recommended -- for JSPs to be statically included in other JSPs.

Since many JSPs might be included into others at compile time, the resulting Java source and servlet byte code files can potentially be very large. In general, there is no performance problem with having a large class file, but it is possible that compiling the JSP sources to a class can fail due to size restrictions incorporated in the Java programming language. For example, methods in Java cannot be larger than 64 KB. Large and complex themes can easily reach this limit and will then no longer compile. In this situation, you have three options:

  • Substitute dynamic includes for some (but not all) static ones.

    As mentioned in the JSP section, this trades performance for being able to compile the JSPs at all. From a performance standpoint, this is the least preferred resolution, though it is the easiest to implement.

  • Try to restrict the usage of scriptlets in the JSPs.

    WebSphere Application Server can apply optimizations to code that only calls tag handlers, which can help you stay under the 64 KB limit.

  • Clean up your JSP code.

    Very often these files contain more code than is actually necessary. Even removing HTML comment lines or white spaces, or moving JavaScript code into separate files can save sufficient space.

Themes sometimes take over complex tasks within the application. However, you need to be careful here. Remember that the theme will be rendered upon every single request to your portal, so do not introduce expensive computations that put a very high burden on the system.

Be especially careful with mimicking portal functionality. For example, themes could iterate over large numbers of pages in the portal application, and then filter these and only display a navigation structure to the user that includes just a few of the pages the theme requested from the portal APIs. In this case, much of the processing that happens inside the portal will be lost since the results are discarded afterwards. Filtering based on portal access control or personalization rules would be more efficient here.

Furthermore, try to limit the number of links to portal resources from your portal pages. Each URL that the portal has to generate puts additional load on the system. If you need application themes with huge numbers of links, try to cache the contents of some of these pages so that it is not necessary to recalculate all the links on every request.

Themes are also part of the remote caching infrastructure in WebSphere Portal. The remote cache information of a theme is a set of specifically named meta data that can be set via XML access, as shown in the following example:

<!-- Theme "shared" scope and 40 seconds cache expiration -->
<theme action="update" active="true" objectid="xmplTheme" uniquename="wps.theme.example">
   <parameter name="remote-cache-scope" type="string" update="set">SHARED</parameter>
   <parameter name="remote-cache-expiry" type="string" update="set">40</parameter>
</theme>
A theme cannot provide any render time remote cache information.

WebSphere Portal supports the notion of high-performance skins. These skins are special insofar as they are not generated based on JSPs; their output is created from precompiled Java classes. Of course, this type of skin is less customizable; you can only modify the stylesheet information and included images. However, if performance is the most critical factor for you, you should think about enabling high-performance skins for certain elements on your page or for certain portlets. (See the Information Centers in Resources for more information, including various hints that can help you program high-speed skins and themes.)


Tools are available to assist you in all stages of WebSphere Portal application development and verification. This section describes different categories of tools you can use during different development cycles, and provides a few examples to help you get started with developing and analyzing your custom code.

Development environment

Technically, you can use any kind of text editor to write portlets, themes, and skins, but it is much easier to use an integrated development environment, like IBM Rational® Application Developer together with the IBM Portal Toolkit. Portlet code samples and basic portal code fragments are also available to help you achieve your first results quickly; the development environment integrates with a portal server so that you can immediately deploy and test your code.

Performance analysis tools

When your code is ready for deployment, you need to understand its performance implications in detail. There are several steps you can take, and they are summarized below, but there is one general rule that always applies to performance: In most programs, about 80% of the execution time is spent in 20% of the application code. This 20% of code is on the "critical path" and it is these areas that are worth optimizing for performance. For example, the render method of a portlet is much more performance-critical than its init method, since it is called upon in every request.

  • Code profiling should be done in the early stages of development, or as a first performance test after development. Profiling means that execution time information is collected at the method level, often using the JVMPI interface. Profiler results help you identify the critical path of your application; that is, the code that is executed most of the time. Profilers also often give information on object creation rates and memory consumption.

  • Once your portlet has been deployed into a portal, you should test for the behavior of the portlet under load. Stress or load generators, like Rational Performance Tester, Rational Robot, Apache JMeter (as a cost-effective alternative), and others, are load testing solutions to help you accurately simulate the system behavior under production load. These tools collect much information to help you determine whether your system is in good performance shape, including data on request response times, processor utilization, and more.

  • During load testing, you should monitor several performance parameters in your portal environment. IBM Tivoli® Performance Viewer (which is shipped with WebSphere Application Server) can be helpful to monitor resource utilization within the application server.

  • Many problems with portal environments are memory-related. JVM implementations provide two types of information for tools to analyze for performance:

    • Output from the garbage collector, enabled with the -verbose:gc JVM option.
    • Heap dumps, which are helpful when hunting for memory leaks.

    Check IBM alphaWorks for tools that analyze the garbage collector output; heapRoots is a powerful aid for analyzing heap dumps. The IBM Java Diagnostics Guides also provide helpful information for dealing with portal-related performance issues. See Resources for links to these references.
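As a small sketch of how garbage collector output can be mined, the following parses a GC line for heap occupancy after collection. The line format shown is a made-up simplification for illustration only; real verbose GC output differs between JVM vendors and versions, and the class and method names are ours:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLineParser {
    // Matches lines like "[GC 65536K->12345K(262144K)]": used -> used-after (total)
    private static final Pattern GC_LINE =
        Pattern.compile("\\[GC (\\d+)K->(\\d+)K\\((\\d+)K\\)\\]");

    // Heap occupancy after collection as a whole-number percentage, or -1.
    static int occupancyAfterGc(String line) {
        Matcher m = GC_LINE.matcher(line);
        if (!m.find()) {
            return -1;
        }
        long after = Long.parseLong(m.group(2));
        long total = Long.parseLong(m.group(3));
        return (int) (100 * after / total);
    }

    public static void main(String[] args) {
        System.out.println(occupancyAfterGc("[GC 65536K->12345K(262144K)]")); // prints 4
    }
}
```

Tracking this percentage over a load run shows whether heap occupancy climbs steadily after each collection, a typical symptom of a memory leak.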

You will often not need this complete set of tools when developing code for WebSphere Portal, but for larger rollouts into your production environment it is essential to understand your portal code from a performance point of view.
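To illustrate the load-generation idea above on a very small scale, here is a toy Java harness that fires concurrent requests at a method and reports the average latency. It is a sketch only (all names are ours, and the request is a local stand-in), in no way a substitute for tools such as JMeter or Rational Performance Tester, which drive real HTTP traffic and collect far richer data:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {
    // Stand-in for a portlet request; a real test would hit the portal over HTTP.
    static String handleRequest() {
        return "ok";
    }

    static double averageLatencyMicros(int threads, int requests) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> timings = new ArrayList<>();
        Callable<Long> timedRequest = () -> {
            long start = System.nanoTime();
            handleRequest();
            return System.nanoTime() - start;
        };
        for (int i = 0; i < requests; i++) {
            timings.add(pool.submit(timedRequest));
        }
        long totalNanos = 0;
        try {
            for (Future<Long> f : timings) {
                totalNanos += f.get();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return totalNanos / 1_000.0 / requests;
    }

    public static void main(String[] args) {
        System.out.printf("average latency: %.1f microseconds%n",
                          averageLatencyMicros(4, 100));
    }
}
```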


When creating custom portal code, there are a number of areas that the developer must consider to ensure that portal performance is optimized. To summarize:

  • Focus on improving the critical code path: code that takes a long time to process or is executed very frequently. Find out which methods of which classes are on that path; optimizations outside it yield little benefit.

  • Consider both execution performance and memory allocation.

  • Use appropriate tools to measure and profile your code for the most typical user interactions.

  • Alternative solutions to the same coding problem can differ by orders of magnitude in performance.

  • Understand in detail any specific implementation you adopt to solve a discovered performance problem.

  • Consider the backend access pattern while designing your custom code.

  • Do not misuse the session as an all-purpose data store for a portlet. There are better ways to handle data for the various implementation requirements.

  • Consider exploiting the special features provided by WebSphere Application Server and WebSphere Portal to optimize portlet performance, provided the target environment is also exploiting the same feature(s).
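As one hypothetical illustration of the session point above: instead of copying a large backend result into every user's session, the data can be kept once in a shared, bounded cache, with only the lookup key stored per session. All names here are ours, and the sketch assumes the cached results can safely be shared across users:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SharedResultCache {
    private static final int MAX_ENTRIES = 100;

    // One LRU cache shared by all sessions; the eldest entry is evicted when full.
    private final Map<String, String> cache =
        new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > MAX_ENTRIES;
            }
        };

    public synchronized String get(String key) {
        return cache.get(key);
    }

    public synchronized void put(String key, String value) {
        cache.put(key, value);
    }
}
```

Each session then holds only a small key (for example, a query string) instead of the full result. A production implementation would also need an invalidation strategy and, in a clustered portal, a distributed caching facility rather than a single in-memory map.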


