Planning for optimized performance
Use these best practices to improve the performance of applications on the IBM TRIRIGA Application Platform.
Optimizing performance in your environment
These best practices provide optimal performance in the lab test environment, but your environment might require different settings. Use these settings as a guideline as you monitor and tune your IBM® TRIRIGA® environment.
Because customers who deploy TRIRIGA across large, global enterprises face some of the greatest challenges, this document focuses on improving performance in advanced enterprise configurations.
Virtualized environments
Deployments in virtualized environments such as VMware might not realize the same performance benefits from these best practices. Shared resources in virtualized environments do not provide optimal performance; use dedicated CPU and memory resources instead.
Also, internal testing shows that putting the TRIRIGA database in a virtualized environment such as VMware can have significant performance impacts. These impacts increase rapidly with larger workloads.
For more information, see Database server virtualization in Database server tuning and maintenance.
Support for performance-related problems
Performance-related problems are often the result of environmental factors and are outside the scope of support. Although IBM makes every effort to help clients understand the cause of performance issues, support cannot officially resolve performance-related issues. Understand the contents of this best practices document, and recognize that failure to adopt its recommendations prevents support from investigating further to resolve any performance-related concerns.
You need a good technical understanding of TRIRIGA to understand how TRIRIGA interacts with technology. The Java™ code in TRIRIGA is called the TRIRIGA Application Platform. This code supports the associated technologies, such as databases, application servers, and browsers, that are supported for each version. The platform is then used to develop TRIRIGA applications, including Lease Management, Space Management, Reservations, Operations and Maintenance, and Project Management. These applications are stored as metadata objects that take advantage of the functionality that is built into the development platform. The metadata that defines the applications is stored in the database.
To understand performance tuning with TRIRIGA, you must understand that there are core technology tuning tips that are related to the platform, and there are application-specific tuning tips that are related to the metadata coding that creates the applications. When the network, hardware, and middleware are optimally tuned, the next step is to tune the platform and finally the application. If clients customize workflows or extended formulas, IBM support focuses first on configurations that might change the use of database indexing, SQL queries, formula usage, data flow, or other aspects that are not consistent with best practices and are not tested. If custom configurations are identified as a cause of any issue in product functionality or performance, a developer, business partner, or IBM services engagement might be required to resolve the issue.
The scope of these best practices applies to all versions of IBM TRIRIGA 10 and 11 and all versions of TRIRIGA Application Platform 3 and 4, which are collectively referred to as TRIRIGA in the remainder of these best practices.
Identifying TRIRIGA versions
The platform version is the smaller of the two numbers that represent TRIRIGA: 4.x.x is the platform version and 11.x.x is the application version. IBM uses a standard of Version, Release, Fix Pack (VRF) as its numbering system, so version 4, release 5, fix pack 3 is written as 4.5.3. Application version numbers are seven higher than the corresponding platform version numbers. While many of the recommendations in this document also apply to earlier versions of the TRIRIGA platform and applications, this document is intended specifically for the latest versions.
The platform is always designed to be backward-compatible with applications that were previously developed. For example, an application is developed by using platform 4.5.0. Upgrading the platform to 5.0.0 maintains compatibility with the older application.
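As an illustration only, the VRF numbering scheme and the platform-to-application offset described above can be sketched in a few lines of Python. The helper names and the +7 offset mapping are assumptions drawn from this document's description, not part of any TRIRIGA tooling:

```python
# Hypothetical sketch of the Version, Release, Fix Pack (VRF) scheme.

def parse_vrf(vrf: str) -> tuple[int, int, int]:
    """Split a VRF string such as '4.5.3' into (version, release, fix pack)."""
    version, release, fix_pack = (int(part) for part in vrf.split("."))
    return version, release, fix_pack

def is_platform_version(vrf: str) -> bool:
    """Platform versions carry the smaller leading number (e.g. 3.x.x or 4.x.x)."""
    return parse_vrf(vrf)[0] < 10

def matching_application_version(platform_major: int) -> int:
    """Assumed mapping: application version = platform version + 7 (e.g. 4 -> 11)."""
    return platform_major + 7

print(parse_vrf("4.5.3"))              # (4, 5, 3)
print(is_platform_version("4.5.3"))    # True  (platform)
print(is_platform_version("11.0.0"))   # False (application)
print(matching_application_version(4)) # 11
```

This is only a mnemonic for reading version numbers; always confirm actual platform-to-application pairings against the IBM TRIRIGA Application Platform Compatibility Matrix.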
Product documentation
For more information about TRIRIGA, see the IBM TRIRIGA product documentation and IBM TRIRIGA Application Platform product documentation in IBM Documentation. Helpful information might be found in the latest Release Notes for your installed IBM TRIRIGA product version, as they might contain specific information that overrides topics in this document. In addition to the release notes, thoroughly review the Installation and Implementation Guide. For information about IBM TRIRIGA supported products and platforms, see the IBM TRIRIGA Supported Versions and IBM TRIRIGA Application Platform Compatibility Matrix.
Best practices as a cooperative effort
This information is the result of a cooperative effort between IBM employees, IBM Business Partners, and a core group of enterprise customers of TRIRIGA. This information was developed both in response to customers and in concert with them. The cooperative effort is ongoing. The goal of these best practices is to provide you with information that can help you to improve the experience of system performance for your users.
Quality assurance and testing
Each release of TRIRIGA provides new features for your users. IBM also strives to increase performance, reliability, and speed with each release. One way that IBM enhances performance is to provide tools and features that are designed to improve system performance.
The IBM Quality Assurance (QA) team tests TRIRIGA. Testing is performed during the development cycle and again before the product is released. Testing continues after TRIRIGA is released. Application and system functionality testing is performed, as is system performance testing. The quality assurance team tries to emulate many of the system setups that you might use. However, a laboratory testing environment can never fully anticipate all the scenarios that the product encounters in your environments. The testing environment undergoes constant review to determine how best it can be scaled to meet the demands of large enterprise deployments.
Customer involvement and feedback are vital to improving quality assurance testing. You can provide feedback for these best practices in IBM TRIRIGA Chat.
Factors in system performance
The system performance depends on more than the applications and database. The network architecture affects performance. The application server configuration can impair or improve performance. The way that you deploy TRIRIGA across servers affects the way that the products perform. Many other factors are involved in the user experience of system performance.
This document addresses the following topics:
- System architecture setup
- Application server configuration
- Reporting
- Integration with other systems by using the integration framework
- Network issues
- Bandwidth
- Load balancing
- Database tuning and SQL tuning
- Client workstation configuration
- Miscellaneous performance improvement tips
- Platform tuning
- Application configuration and customization
- Troubleshooting
Performance testing and tuning in implementation project plans
When planning for a TRIRIGA implementation, schedule the following phases:
- Performance testing
- Performance tuning
- Contingency iterations of testing and tuning for issues that are discovered
- Fallback plans to mitigate the risk if performance testing uncovers issues
As a part of project management, certified Project Management Professionals (PMPs) must be fully versed in project risk management. Project managers should develop a series of actions in case identified risks occur. Performance testing and tuning are inherent risks in any implementation, and poor preparation can lead to unrealistic deadlines and even project failure. It is important to add ample time for multiple iterations of performance testing and tuning, and to explain to stakeholders the risks to project deadlines if they do not accept that multiple iterations and extra time might be needed to tune the system and application.
It is also important to understand the difference between a contingency plan and a fallback plan. A contingency plan is executed only when identified, accepted risk events occur and is developed during the plan risk responses process. A fallback plan is developed to deal with risks if the primary contingency plan is not effective for the known identified risks.
For example, suppose that the performance testing phase uncovers an issue with a particular process. The tuning phase investigates the issue by examining the performance logs that are generated by running the process and analyzing them to locate the bottleneck or hot spot. When the analysis is done, the application metadata can be updated, following best practices for application performance. If the application cannot be easily updated, a contingency plan is necessary: a larger rearchitecting of the application to both deliver the business requirements and solve the performance issues. The fallback plan is needed if the rearchitecting fails to improve performance and the initial assumptions about system size are called into question, so that new, faster hardware is required to deliver acceptable performance.
Infrastructure pyramid
Visualize TRIRIGA at the top of a pyramid that consists of network, hardware, and software services. When troubleshooting any performance issue, it is crucial to consider the entirety of the stack as potential variables that must be diagnosed. If the top is tuned without consideration for the reliant levels below it, performance problems might continue. The pyramid comprises the following infrastructure within which TRIRIGA operates:
- TRIRIGA runs within the layer of software services. The entire system must be sufficiently robust and well-tuned.
- Software services such as application servers, report servers, and database servers run on the operating systems.
- Operating systems run on the hardware servers.
- Hardware servers operate within the network.
- Network architecture supports the entire system infrastructure.
Improving system performance requires attention to all elements of the infrastructure. Your databases can be tuned perfectly, but if the network is slow, you might still have system performance issues. Or, you might have a fast local area network, but if client workstations lack memory and processing power, users experience poor performance.
Customers report that there is normally more than one issue to address, and improving performance involves adjusting many factors. Together, these changes can combine to yield significant performance improvement. Making improvements in even one or two areas can boost performance. For example, upgrading user workstations can often greatly improve the application experience, even if other problems exist in the overall deployed environment. As you continue to improve different areas of the environment, system performance improves incrementally.
