This month we ask IBM® WebSphere® performance expert Ruth Willenborg to answer your questions about WebSphere Application Server performance, including Version 5 performance features and best practices for performance tuning. WebSphere Application Server is a Java-based Web application server, built on open standards, that helps you deploy and manage applications ranging from simple Web sites to powerful e-business solutions. Ruth Willenborg is manager of the WebSphere Application Server Performance team. She is co-author of Performance Analysis for Java Web Sites, Addison-Wesley, 2002. For more information about WebSphere Application Server, see the WebSphere Application Server zone.
Ruth would like to thank the WebSphere Performance team for their help in preparing this article.
Question: What is your best suggestion to monitor a production environment? Has the Resource Analyzer completely stopped? What product replaces the Resource Analyzer? (submitted by rsbohra)
Answer: Resource Analyzer is still available with WebSphere V5, under a new name: Tivoli Performance Viewer (TPV). The Tivoli Performance Viewer is free and included in the WebSphere installation. See "Monitoring performance with Tivoli Performance Viewer (formerly Resource Analyzer)" in the InfoCenter.
For a complete 7x24 production monitoring solution, including capabilities such as threshold alerts and historical reporting, I recommend looking at Tivoli and our partner monitoring solutions.
Question: What kind of performance advantage does the in-memory (JMS-based) session replication provide over the older database-based sessions? Does IBM have any numbers that compare JMS vs. database-based sessions? (no name submitted)
Answer: WebSphere V5 adds an option for memory-to-memory HTTP session replication. This is in addition to the database session replication capabilities. The memory-to-memory session replication capability was requested to provide a high availability deployment alternative without requiring a highly available database. We did not expect significant performance differences between memory-to-memory and database.
In our lab studies to date, the performance of the two alternatives is virtually identical. Both solutions require the same resources to serialize the session object and then write it out. Regardless of whether your deployment choice is memory-to-memory or database, the most effective technique for improving session replication performance is to keep your HTTP session objects small!
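The advice to keep HTTP session objects small can be made concrete with a quick check. The sketch below is illustrative only (it uses plain Java serialization, not any WebSphere API, and the attribute names are hypothetical): both replication modes must serialize the full session state on every write, so its serialized size drives the cost.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

// Sketch: measure how large a session attribute map is once serialized.
public class SessionSizeCheck {

    static int serializedSize(Serializable obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);   // same work both replication modes must do
        }
        return bytes.size();
    }

    public static void main(String[] args) throws Exception {
        // Lean session: just the keys needed to rebuild state on demand.
        HashMap<String, Serializable> lean = new HashMap<>();
        lean.put("customerId", 42L);

        // Bulky session: caching large data in the session inflates every write.
        HashMap<String, Serializable> bulky = new HashMap<>(lean);
        bulky.put("cachedCatalog", new byte[64 * 1024]);

        System.out.println("lean=" + serializedSize(lean)
                + " bytes, bulky=" + serializedSize(bulky) + " bytes");
    }
}
```

Running a check like this against your own session attributes during testing shows quickly which attributes dominate replication cost.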
Question: We have a Web application based on HTML/JSP/JavaBeans, and all the HTML pages include JSP content. Since the components are built with the J2EE architecture and deployed on WebSphere 4.0.6, what is the best strategy for separating static and dynamic Web content when deploying on a remote HTTP Web server and application server in the production environment? (no name submitted)
Answer: In a WebSphere 4.0 environment, there are multiple strategies for separating static and dynamic content. The best strategy for you depends on the trade-offs you want to make, including performance and administrative convenience.
Two alternatives include:
- Serving all content from WebSphere (using the file serving servlet).
- Separating the static content and deploying it in the HTTP server.
From a performance perspective, serving the static content directly from the HTTP server should be significantly faster. If you are using the IBM HTTP server, also look at the Fast Response Cache Accelerator feature available for Windows® and AIX® platforms.
These two alternatives, as well as using a caching proxy, are discussed in the article, Handling Static Content in WebSphere Application Server.
When you move to Version 5, there is an additional capability to consider. WebSphere caching capabilities were extended to the Web server plug-in, with an additional link provided into the IHS accelerator cache. In the Version 5 scenario, consider setting up WebSphere to handle the first request for static content through the file-serving servlet, and then place the content into the accelerator cache. Depending on the amount and size of static content, this alternative may perform as well as manually splitting the content, without the administrative effort to split the content.
If you do separate static and dynamic Web Content in Version 5, see "Hints and Tips" in WebSphere Application Server V5: Separating Static and Dynamic Content.
Question: How can I deploy a Web application based on Java 1.4 over WAS 5.0? (no name submitted)
Answer: WebSphere V5.02 was shipped in July 2003 with support for Java 1.4.1 on the client. However, WebSphere V5 does not support Java 1.4 on the server. Stay tuned for more on Java 1.4 - we currently have a limited beta underway.
Question: Is there still a container optimization for remote interfaces that are called in process? We want to implement the remote interface only to maintain our distribution options, but don't want the performance hit. WebSphere used to optimize this prior to local interfaces. (no name submitted)
Answer: Yes, WebSphere V5 continues to support this optimization:
- You can enable it through the Administrative Console page under Servers -> Application Servers -> ORB Service.
- Enable the Pass by Reference setting.
- From command-line scripting, the equivalent ORB property can be set.
WebSphere V5 also supports J2EE 1.3 Local Interfaces. In our lab performance tests, we see a significant boost from using the Pass by Reference option, with another smaller boost from explicitly coding to local interfaces. If you want to maintain your distribution options, using the Pass by Reference flag is an excellent solution.
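The semantic difference behind that boost can be sketched in plain Java. This is not WebSphere code; it simulates the default remote (pass-by-value) behavior by copying the argument through serialization, which is roughly the work the ORB avoids when Pass by Reference is enabled for in-process calls. The class and method names are illustrative.

```java
import java.io.*;

public class PassSemantics {

    static class Order implements Serializable {
        String status = "NEW";
    }

    // Pass-by-value: the callee works on a serialized copy (the copying is
    // the cost), so the caller's object is untouched.
    static Order passByValue(Order o) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(o);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            Order copy = (Order) in.readObject();
            copy.status = "SHIPPED";          // mutates the copy only
            return copy;
        }
    }

    // Pass-by-reference: no copy is made, so the caller sees the mutation.
    static Order passByReference(Order o) {
        o.status = "SHIPPED";
        return o;
    }

    public static void main(String[] args) throws Exception {
        Order a = new Order();
        passByValue(a);
        System.out.println("after by-value:     " + a.status);  // still NEW
        Order b = new Order();
        passByReference(b);
        System.out.println("after by-reference: " + b.status);  // SHIPPED
    }
}
```

The caveat, of course, is that with Pass by Reference enabled, a bean that mutates its arguments changes the caller's objects, so verify your application does not depend on copy semantics before enabling it.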
Question: If I develop my own administration program to monitor WAS ND servers and MBeans, how does it affect the performance of WebSphere processes? My Admin client uses SOAP/RMI to connect to the deployment manager. (no name submitted)
Answer: The performance impact of your administration program depends on how frequently you monitor, and how much data you retrieve. For example, the Tivoli Performance Viewer (included with WebSphere) uses SOAP/RMI to connect to the deployment manager. In our lab tests, the combined overhead of both enabling the PMI data and accessing the data through SOAP/RMI is only about 3%. This is with the PMI standard level and running TPV to monitor PMI data every 10 seconds.
If you are accessing PMI performance data, our lab tests show retrieving all the information in one call performs better than using multiple remote JMX calls. For example, if you want to get the PMI data for all the servlets, call PerfMBean once for all servlets, instead of making separate calls for each servlet.
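The same batching pattern exists in the standard JMX API, which makes for a self-contained illustration. The sketch below uses the JVM's own platform MBean server rather than WebSphere's PerfMBean (so the object name and attributes differ from a real WAS deployment), but the principle is identical: one getAttributes() call replaces several per-attribute round trips.

```java
import java.lang.management.ManagementFactory;
import javax.management.AttributeList;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class BatchedJmxRead {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName memory = new ObjectName("java.lang:type=Memory");

        // Preferred: fetch several attributes in a single call -- one
        // remote round trip when the MBean server is remote.
        AttributeList batch = server.getAttributes(
                memory, new String[] {"HeapMemoryUsage", "NonHeapMemoryUsage"});
        System.out.println("attributes fetched in one call: " + batch.size());

        // Slower pattern: a separate call (and round trip) per attribute.
        Object heap = server.getAttribute(memory, "HeapMemoryUsage");
        Object nonHeap = server.getAttribute(memory, "NonHeapMemoryUsage");
        System.out.println("per-attribute calls also succeeded: "
                + (heap != null && nonHeap != null));
    }
}
```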
Question: Why are there two different performance advisors? (no name submitted)
Answer: WebSphere 5.02 introduces two performance advisors: a Performance Advisor embedded in the Tivoli Performance Viewer, and the Runtime Performance Advisor. How and when you use these two advisors differ.
The Runtime Advisor runs in the application server process and issues recommendations as part of the standard WebSphere warning messages. The recommendations are for tuning WebSphere resources for better performance. You can use this advisor during stress testing or in production. The administrator checks for tuning warning messages and then makes the recommended changes.
The Tivoli Performance Viewer Advisor is designed for use on request, in conjunction with TPV. It is best suited for testing and tuning your application. You can explicitly request tuning advice at different points during your test. You can also capture a log and replay it, running the advisor at different points. The Tivoli Performance Viewer Advisor provides a superset of the advice offered by the Runtime Advisor. Both advisors provide tuning advice for WebSphere resources, such as pools and caches, but advice that is expensive to calculate (and therefore not appropriate to run regularly in the runtime) is only available through the TPV Advisor.
Question: What PMI metrics are the best to use for looking at performance?
Answer: PMI has over 100 different performance metrics. When looking at performance, I like to start with four key areas: system performance, application performance, flow of work through WebSphere, and JVM performance. My "Top 10" counters for this are:
System performance (you can get these from operating system utilities)
- Memory (paging)
Application performance
- Servlet/EJB response time
- Servlet/EJB requests
- Live HTTP sessions
Flow of work through WebSphere
- Web server threads (you need to get this from the HTTP server, not PMI)
- Web container thread pool, ORB thread pool
- Database connection pool
JVM performance
- JVM heap (for potential memory leaks and garbage collection dynamics)
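For the JVM heap counter, a minimal stdlib sketch shows the kind of data involved (PMI exposes richer, WebSphere-specific statistics; this uses only the standard java.lang.management API). Sampling the used and committed heap over time surfaces leak trends and garbage collection dynamics.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapSample {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long committedMb = heap.getCommitted() / (1024 * 1024);
        // A steadily rising "used" floor after each GC is the classic leak signal.
        System.out.println("heap used=" + usedMb + "MB committed=" + committedMb + "MB");
    }
}
```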
Question: What are the best practices for measuring performance? (no name submitted)
Answer: The best practices for measuring performance include:
- Take your measurements during steady-state (do not include ramp-up/ramp-down times).
- Only make one change at a time and repeat tests after making changes.
- Make sure your results are repeatable. Do not rely on data from a single run. You will typically want to do at least three runs to make sure you are getting consistent results.
- Your runs need to be long enough to get repeatable results (typically 10-15 minutes). Investigate large variances between runs and try to keep the run-to-run variance below 4%.
- Test on an isolated network.
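The run-to-run variance check above is easy to automate. The sketch below (an illustrative helper, not part of any WebSphere tool) computes the spread of repeated throughput measurements relative to their mean and flags anything above the 4% threshold.

```java
public class RunVariance {
    // Percent spread of the runs relative to their mean: (max - min) / mean.
    static double variancePercent(double[] runs) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE, sum = 0;
        for (double r : runs) {
            min = Math.min(min, r);
            max = Math.max(max, r);
            sum += r;
        }
        double mean = sum / runs.length;
        return (max - min) / mean * 100.0;
    }

    public static void main(String[] args) {
        // Example data: requests/second from three steady-state runs.
        double[] throughput = {412.0, 405.5, 409.8};
        double spread = variancePercent(throughput);
        System.out.printf("run-to-run variance: %.2f%%%n", spread);
        System.out.println(spread < 4.0 ? "repeatable" : "investigate variance");
    }
}
```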
Our book, Performance Analysis for Java Web Sites, includes details on performance test best practices and tools. In particular, see Chapters 7, 8, and 11, and the checklists in the Appendix.
Question: Is the Runtime Performance monitor supported for WAS 5.02 Network Deployment yet? (no name submitted)
About Meet the experts
Meet the experts is a feature on the developerWorks WebSphere site. We give you access to the best minds in IBM WebSphere: product experts and executives who are waiting to answer your questions. You submit the questions, and we post the answers to the most popular questions.
Resources
- Performance Analysis for Java Web Sites, Addison-Wesley, 2002, ISBN 0201844540.
- WebSphere Application Server Performance Web site
- Redbook: Monitoring WebSphere Application Performance on z/OS