Optimizing N-tier J2EE applications on UNIX operating systems

Analyze, optimize, and employ best practices for performance in Web-based applications

Multi-tier Web applications provide a more flexible and scalable environment for business-critical applications and their 24 x 7 availability requirements. Today's increasingly sophisticated deployments introduce additional complexity in system, infrastructure, and application interaction. Operating system and hardware selection, best practices for application design and coding, and performance and application monitoring can help maximize application performance today and scalability for tomorrow.


William von Hagen (wvh@vonhagen.org), Systems Administrator, Writer, WordSmiths

William von Hagen has been a UNIX systems administrator for more than 20 years and a Linux advocate since 1993. Bill is the author or co-author of books on subjects such as Ubuntu Linux, Xen Virtualization, the GNU Compiler Collection (GCC), SuSE Linux, Mac OS X, Linux file systems, and SGML. He has also written numerous articles for Linux and Mac OS X publications and Web sites. You can reach Bill at wvh@vonhagen.org.



20 January 2009


UNIX® and UNIX-like operating systems host the majority of the Web servers and Web-based applications on the Internet. Although these operating systems were designed for multi-processing, networking, and performance, today's distributed, n-tier Web applications introduce many other factors that can affect application performance.

Frequently used acronyms

  • API: Application programming interface
  • FTP: File Transfer Protocol
  • HTML: Hypertext Markup Language
  • I/O: Input/output
  • IT: Information technology
  • SDK: Software development kit
  • SQL: Structured Query Language
  • XML: Extensible Markup Language

In the early days of the Web, Web servers delivered static data from HTML and graphics files that were local to a given Web server. That model works fine for content that rarely changes, but business applications need access to dynamic content such as customer information, inventory and pricing information, and other frequently changing data that is stored somewhere other than the Web server's content directory, typically in an external resource such as a database.

The 2-tier Web architecture addressed these requirements by giving Web servers direct access to other data resources (such as a database) that provide dynamic content. A 2-tier Web application is a standard client-server system in which direct communication between a Web server and an external data source is facilitated by data-interface protocols such as Common Gateway Interface (CGI) and various Web scripting languages. All the connection and business logic for the Web application and associated data sources is encoded in the Web application, and that application has direct access to the data it requires, regardless of whether the data is stored locally or remotely. UNIX and UNIX-like operating systems are natural choices for 2-tier Web applications, because they were designed for multi-processing, support large amounts of physical memory, and include high-performance virtual memory capabilities for use when application requirements exceed physical memory capacity.

Regardless of the power of the underlying operating system and hardware, however, 2-tier applications can be inefficient when handling large numbers of requests or maintaining state information about large numbers of Web clients, because all parts of the application are typically running on the same machine. Using databases and other data sources that are local to the Web server makes replication and backups difficult and also increases the load on the Web server. Similarly, storing the database or database-access information on the machine on which the Web server is running can introduce security problems if that system is penetrated and compromised.

The n-tier Web application architecture addresses these concerns by running different services on different machines and using middleware known as an application server to act as an intermediary between Web application logic and remote data requirements. N-tier architectures provide numerous advantages over simpler architectures, including:

  • Increased performance for high-traffic Web servers, because the data that Web applications access is stored on other systems. Not running database (or other data) servers on the same system as the Web or application server reduces the overall load on those systems, increasing the resources that are available to those servers. Similarly, administrative tasks such as backups of remote databases, content-management systems (CMS), and other data sources do not affect the performance of the Web or application server.
  • Better server and data resource management through scalability. Web and application servers can easily be replicated for load balancing purposes. The 3-tier architecture can increase Web application availability by implementing automatic failover among multiple Web and application servers. Connections to remote resources can be managed intelligently.
  • Increased security for remote data, because it is always stored on a system other than the one on which the Web server—and typically the application server—is running. Penetrating and compromising a Web server does not directly expose proprietary and business-critical data.

N-tier architectures rely on the interoperation of multiple computer systems, each of which has its own performance considerations depending on the types of services that it provides. Interaction between these computer systems often requires additional investment in IT infrastructure because of the increased network load and application responsiveness requirements, including additional traffic related to mirroring and synchronizing multiple systems to support redundancy and failover and network-related traffic associated with centralized Network Attached Storage (NAS) and Storage Area Network (SAN) data-storage resources.

UNIX and UNIX-like operating systems have always pioneered the high-performance networking, distributed application, and distributed storage requirements that are the underpinnings of robust, high-performance n-tier Web applications. However, optimizations within n-tier applications themselves can make substantial contributions to the performance of those applications in addition to making them easier to understand, more maintainable, and more scalable.

Best practices coding standards for n-tier applications

Good application design and coding practices are the cornerstones of high-performance, maintainable applications. Adopting a modular design for the components of your application and understanding the interaction between them simplifies debugging, testing, tracking down performance problems, and application updates.

The next few sections highlight some standard best design and coding practices for n-tier applications. These "five commandments" of good n-tier application design provide a firm foundation for your Web-based applications but may not be useful when working with existing n-tier applications that are experiencing performance or reliability problems. The section, Analyzing and optimizing n-tier applications, discusses ways of examining and improving the performance of existing applications.

Use an MVC approach whenever possible

The Model-View-Controller (MVC) framework for application design, first developed in the 1980s, is as relevant and useful today as ever. Separating an application's presentation (the View) from its business logic (the Model) and from the coordination of the two (the Controller), and then adhering to that separation, simplifies design, debugging, testing, and maintenance.
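As a minimal sketch (the class, method, and JSP names here are hypothetical), a servlet can act as the Controller, a plain Java class as the Model, and a JSP page as the View:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Controller: translates the request, delegates to the Model, selects the View
public class OrderController extends HttpServlet {
    private final OrderModel model = new OrderModel();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String customerId = req.getParameter("customerId");
        req.setAttribute("orders", model.findOrders(customerId));
        // View: /WEB-INF/orders.jsp renders the data and contains no business logic
        req.getRequestDispatcher("/WEB-INF/orders.jsp").forward(req, resp);
    }
}

// Model: business and data-access logic only; no presentation code
class OrderModel {
    java.util.List<String> findOrders(String customerId) {
        return java.util.Collections.singletonList("order-1 for " + customerId);
    }
}

Because each responsibility lives in one place, the Model can be unit tested without a Web container, and the View can change without touching business logic.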

Design for testability

High-level design principles such as MVC and modular application design localize functionality into code that is more easily tested, because each class and method serves a specific purpose. You don't need to commit yourself to a methodology such as agile programming, extreme programming, or test-first design to see the advantages of planning for testing when you are writing your code. It is easiest and most rewarding to include test points, or hooks, while you are writing an application, because doing so makes them a robust part of your design process rather than an afterthought that you have to splice into an existing application.
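For example, a class that receives its collaborators through its constructor gives a unit test a natural hook for substituting a stub. The names in this sketch are hypothetical:

// The rate source is injected, so a test can substitute a stub
// instead of calling a live remote service.
public class PriceCalculator {

    public interface RateSource {          // test point: a seam for a stub or mock
        double taxRateFor(String region);
    }

    private final RateSource rates;

    public PriceCalculator(RateSource rates) {
        this.rates = rates;
    }

    public double priceWithTax(double base, String region) {
        return base * (1.0 + rates.taxRateFor(region));
    }
}

// In a unit test, a fixed-rate stub stands in for the production implementation:
//   PriceCalculator calc = new PriceCalculator(new PriceCalculator.RateSource() {
//       public double taxRateFor(String region) { return 0.07; }
//   });
//   // calc.priceWithTax(100.0, "PA") should be 107.0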

Design for security

Security is critical for business-related, Web-based applications. However, because security can be a pain for developers to deal with, it is often added or activated late in the development process. Whether security is provided by an application server such as IBM® WebSphere® Application Server or an external service, you should develop with security enabled. Common concerns such as performance impact and more complex debugging are typically overstated, and if they do affect your application, it's better to know that long before you deploy the application.
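As one small, hedged illustration of developing with security turned on, a servlet can use the standard Servlet API to confirm that the caller is authenticated and in an appropriate role before doing privileged work. The role name here is an assumption; in WebSphere Application Server, the role would be mapped to users or groups at deployment time.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AdminServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Reject callers who are not authenticated in the hypothetical "order-admin" role
        if (req.getUserPrincipal() == null || !req.isUserInRole("order-admin")) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ... privileged administrative work goes here ...
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}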

Build in logging

The logic of each component of your application is freshest when you are actually implementing those components. Open frameworks such as Log4J and the Java™ software development kit (JDK) Trace APIs (see Resources) are easy to use and are easiest to incorporate into your code while you're writing it. Integrating logging and tracing into your application simplifies your debugging and also makes it easier for others to monitor and debug applications after they are deployed.

One downside of logging is that each write to a log requires I/O, which can degrade application performance. Packages such as Log4J (see Resources) provide different logging levels to minimize I/O. You can use API calls such as Log4J's org.apache.log4j.PropertyConfigurator.configureAndWatch() method to monitor an additional property file that you can use to dynamically activate different logging levels or to disable logging in running applications.
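A minimal Log4j 1.x sketch of this approach follows; the property-file path and the 60-second check interval are illustrative assumptions:

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class PaymentService {
    private static final Logger log = Logger.getLogger(PaymentService.class);

    static {
        // Re-read the logging configuration every 60 seconds, so logging levels
        // can be raised or lowered in a running application without a restart.
        PropertyConfigurator.configureAndWatch("/opt/app/conf/log4j.properties", 60000L);
    }

    public void authorize(String orderId) {
        if (log.isDebugEnabled()) {   // the guard avoids string building when DEBUG is off
            log.debug("Authorizing order " + orderId);
        }
        // ... business logic ...
        log.info("Order " + orderId + " authorized");
    }
}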

Eschew obfuscation

Simple, understandable code is easier to debug, improve, and maintain than complex, heavily optimized code. It is rarely necessary nowadays to try to outwit your Java Virtual Machine (JVM) or its compiler. Code that is optimized for specific circumstances or platforms may not work well in other cases and often makes it difficult to extend current functionality.

Analyzing and optimizing n-tier applications

The standard problems in n-tier Web applications are the same as those for any application: long startup times, poor overall performance, poor performance in specific cases, and application failures. Although there is no single cause for any of these problems, the next few sections highlight some common things to look for when searching for ways to improve performance in existing applications.

Performance problems can be related to how an application is implemented, its operating environment, the remote resources that the system uses, or all of these. Heisenberg's uncertainty principle tells us that examining a system changes its behavior, but that's not a good excuse for not using all the tools at hand to address performance issues or to look for ways to improve and optimize performance even if current performance is adequate.

Examine both architecture and code

Many performance problems are related to application architecture rather than specific code problems. Making too many or extraneous calls to external resources can waste time in your application and affect the performance of other applications that use that same resource. Similarly, using standard formats such as XML is a great idea for data interchange but can have a negative impact on performance if your application performs repeated or extraneous transformations.
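For example, if repeated XSL transformations turn out to be the problem, one common remedy, sketched below with hypothetical file and class names, is to compile the stylesheet once into a JAXP Templates object (which is thread-safe and reusable) instead of re-parsing it on every request:

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class InvoiceFormatter {
    // Compiled once; a Templates object is thread-safe and can be shared
    private static final Templates TEMPLATES = compile("/opt/app/xsl/invoice.xsl");

    private static Templates compile(String path) {
        try {
            return TransformerFactory.newInstance().newTemplates(new StreamSource(path));
        } catch (Exception e) {
            throw new IllegalStateException("Cannot compile stylesheet " + path, e);
        }
    }

    public String toHtml(String invoiceXml) throws Exception {
        Transformer t = TEMPLATES.newTransformer();   // cheap compared to re-parsing the XSL
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(invoiceXml)), new StreamResult(out));
        return out.toString();
    }
}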

Check how you are using EJB technology

An Enterprise JavaBean (EJB) is a managed component that runs within a Java 2 Platform, Enterprise Edition (J2EE) container. The container provides automatic support for managing resources, transactions, and concurrency but can also introduce substantial overhead in your applications.

To reduce system startup times and general overhead, use non-persistent session beans rather than entity beans (which automatically maintain state information) whenever possible. In general, you should use stateless session beans rather than stateful session beans wherever possible. (See "Top Java EE Best Practices" in Resources for a great discussion of using different types of beans and much more.)
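A hedged sketch of this preference, written in EJB 3 annotation style with hypothetical names, looks like the following; because the bean holds no conversational state, the container can pool and reuse instances freely:

import javax.ejb.Stateless;

interface CatalogService {
    double lookupPrice(String sku);
}

@Stateless
public class CatalogBean implements CatalogService {
    public double lookupPrice(String sku) {
        // Look the price up in the data tier; no instance fields hold per-client
        // state, so any pooled instance can service any client's request.
        return 9.99;   // placeholder value for the sketch
    }
}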

Identify bottlenecks

If your application performs poorly in general, activate logging to try to identify problem areas through log message timestamps. Logging certainly affects performance, but that impact should be evenly distributed throughout an application that uses logging consistently. You can also use performance-monitoring software (discussed in a subsequent section) to identify problem areas or standard system-monitoring software to identify performance problems that are related to system resource limitations.

Explore alternatives for resource-specific bottlenecks

Database access and responsiveness problems are a common source of problems for J2EE applications. Identify the database driver that you're using, and see if higher-performance alternatives exist.

On the database side, check the queries that your Java Database Connectivity (JDBC) driver is issuing to see whether too many calls are being made or whether the calls themselves are non-optimal. Check whether the database can be tuned to increase the performance of the types of SQL statements being issued. Simple database-side problems such as slow disks or missing indices can be the actual culprit.
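The following sketch shows one common pattern on the application side: obtaining connections from a container-managed, pooled DataSource through JNDI and issuing a parameterized query. The JNDI name, table, and SQL are illustrative assumptions rather than recommendations for any particular driver:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PriceDao {
    public double priceFor(String sku) throws Exception {
        // The application server manages the connection pool behind this DataSource
        DataSource ds = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/CatalogDS");
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT price FROM catalog WHERE sku = ?");   // parameterized query
            ps.setString(1, sku);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getDouble(1) : 0.0;
        } finally {
            con.close();   // returns the connection to the pool rather than closing it
        }
    }
}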

Use performance monitoring tools

Most application servers offer internal performance-management software that you can use to get a detailed picture of an entire application with minimal impact on its performance. For example, WebSphere Application Server provides a Performance and Diagnostic Advisor that runs in the application server's JVM process with limited impact on the entire system. Similarly, the performance advisor in the IBM Tivoli® Performance Viewer runs in the application server's JVM and collects Performance Monitoring Infrastructure (PMI) data that can help identify both non-optimal application server settings and specific application problems.

Use available system monitoring tools

All UNIX systems include tools such as ps and vmstat that report process and virtual memory statistics. Open source tools that identify resource use by process are also available for all UNIX systems. These tools can be useful both on the systems running the application server and on remote resources such as database servers. For example, high virtual memory use can indicate that a system simply requires additional memory to improve performance. Similarly, constantly increasing memory use can indicate memory leaks that should be addressed.

Leveraging standard Java, JVM, and system optimizations

Each new release of a JVM improves performance and reliability. Optimizing Java code is more difficult than optimizing code in traditional languages, because most JVMs can react to dynamically changing conditions and profiling feedback. Still, you can explore a number of standard approaches to J2EE, Java technology, and JVM optimization to improve application performance.

The next few sections highlight common optimizations that can provide low-level improvements in application performance. Taking advantage of standard J2EE, Java coding, and JVM optimizations can provide basic improvements that can ripple upward throughout your application and provide significant performance improvements.

Benchmark object allocation and object reuse

Slow object allocation is a traditional complaint about J2EE applications and Java technology in general. This was certainly true once but is less so in the latest generations of JVMs. Regardless, it's still a good idea to investigate how your JVM performs in specific circumstances.

Allocation overhead depends on available resources and the performance of a specific JVM and J2EE implementation. It's a good idea to write some simple benchmarks to compare the cost of allocating new objects against that of reusing existing objects in your J2EE/JVM environment. This information won't help existing applications unless you update them but can be useful when writing new ones.
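The following rough sketch compares allocating a new object on every iteration with reusing a single instance. Treat its output only as a relative indication in your own JVM and environment; modern JIT compilers can optimize away work in toy loops like these:

public class AllocationBenchmark {
    private static final int ITERATIONS = 1000000;

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            StringBuilder fresh = new StringBuilder(64);   // allocate a new object each time
            fresh.append("request-").append(i);
        }
        long allocate = System.nanoTime() - start;

        StringBuilder reused = new StringBuilder(64);
        start = System.nanoTime();
        for (int i = 0; i < ITERATIONS; i++) {
            reused.setLength(0);                           // reuse the same instance
            reused.append("request-").append(i);
        }
        long reuse = System.nanoTime() - start;

        System.out.println("allocate: " + (allocate / 1000000) + " ms, reuse: "
                + (reuse / 1000000) + " ms");
    }
}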

Free everything you can

Memory leaks cause an application's memory footprint to grow over time and are especially significant in Java applications, where shrinking free heap space forces the JVM to run garbage collection more and more frequently and can eventually exhaust the heap. Use the system- or performance-monitoring tools discussed earlier in this article to monitor your application's memory use, and review your code to ensure that you always release allocated objects when you are done with them. Some memory leaks can be insidious, such as those involving Java Collections classes like Hashtable and Vector, which can retain references to objects after all other references to them have been dropped.
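The following sketch (with hypothetical names) shows the kind of collection-based leak described above: a long-lived Hashtable keeps entries reachable until they are explicitly removed, so the garbage collector can never reclaim them on its own:

import java.util.Hashtable;

public class SessionCache {
    private final Hashtable<String, Object> cache = new Hashtable<String, Object>();

    public void put(String sessionId, Object state) {
        cache.put(sessionId, state);
    }

    public void sessionEnded(String sessionId) {
        // Without this call, the entry (and everything it references) remains
        // reachable, and therefore leaks, for the life of the JVM.
        cache.remove(sessionId);
    }
}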

Minimize explicit garbage collection

Today's JVMs are very smart about when they need to perform garbage collection and what objects they can reclaim. At the same time, garbage collection is a JVM-wide operation that can have a significant performance impact. If you want to do explicit garbage collection to reduce your application footprint, attempt to implement it so that it is done under low-load conditions or at off-peak hours.
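One way to do that, sketched below with an assumed 02:00 to 04:00 off-peak window, is to guard any explicit collection request with a time-of-day check; note that System.gc() is still only a request, which the JVM is free to ignore:

import java.util.Calendar;

public final class OffPeakGc {
    private OffPeakGc() {}

    public static void requestGcIfOffPeak() {
        int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
        if (hour >= 2 && hour < 4) {   // assumed low-load window
            System.gc();               // a hint, not a command; the JVM decides when to collect
        }
    }
}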

Tune system garbage collection

Most modern JVMs, such as IBM Java version 5.0 and later, support multiple garbage collection policies. Check your JVM for the policies that it supports to see whether using a different policy can improve the balance between throughput and the pauses that garbage collection introduces.

Investigate JVM heap tuning

Heap tuning controls the amount of memory that an individual JVM, and therefore the application server instance running in it, can use. Heap tuning involves two parameters: the initial size of the heap and its maximum size (set with the -Xms and -Xmx options on most JVMs). The maximum heap size should always be small enough that the heap remains in physical memory, because Java performance degrades quickly when the heap begins swapping to disk. A good general rule is to set the initial heap size to 25 percent of the maximum heap size.

Maximize system memory

Most UNIX systems support a large amount of memory. Java applications and the JVM itself are memory intensive. Maximizing system memory enables you to increase the size of the Java heap while still guaranteeing that it will remain in physical memory. Maximizing system memory generally improves system performance by minimizing swapping.

Minimize running processes

Minimizing the number of processes running on your critical systems should be a standard part of the deployment process, but it's easy to overlook. Although more common on Linux® systems, many UNIX systems start server processes or offer services that you may not actually use on those systems. Common, easily overlooked examples are services such as FTP and getty processes that are waiting for serial logins that will probably never occur nowadays.

Checking your systems to ensure that they are not running extraneous processes and eliminating any that you do not actually need frees processor and memory resources for your business-critical applications. Minimizing running processes has the additional side effect of increasing security by reducing or eliminating system access points.

Conclusion

Optimization is typically thought of as something that you do after the fact, when applications have been written and are working correctly. That is one approach, but it's typically a poor one. Good design decisions and best practices in application design and implementation provide a sort of preoptimization by eliminating many of the coding issues that might have required optimization.

Donald Knuth once noted that, "Premature optimization is the root of all evil." Although this is often true, performance analysis and investigating opportunities for optimization should be a part of every application's deployment and maintenance process. This article began by discussing design decisions that you can make up front to produce applications that perform well, are easily maintained, are easily tested, and provide hooks to simplify debugging if problems arise. Subsequent sections discussed techniques for identifying and addressing problem areas in existing applications and highlighted standard J2EE, Java coding, and JVM optimizations that can improve the performance of both new and existing applications.

Resources

Learn

Get products and technologies

Discuss
