dmmckinn
If you are looking to minimize or avoid resource-related issues on your Jazz applications, you may want to check out the Known Resource-intensive Scenarios wiki. The wiki aims to capture user and system scenarios across the ALM portfolio that can potentially drive relatively higher load on a Jazz application. Such scenarios can lead to server debt (such as out-of-memory errors) if they are run during peak times on systems that don't have sufficient spare resources available. [Read more…]
dmmckinn
Complex distributed systems require an automated, layered approach to monitoring. Most organizations monitor key parts of their infrastructure but monitoring typically stops at the application layer because it can be difficult to know what and how to effectively monitor applications without an understanding of the application architecture.
In the CLM…
dmmckinn
If you are looking into server load issues on your Jazz application server, you may want to review the Known Resource-intensive Scenarios article. The article aims to capture user and system scenarios across the ALM portfolio that can potentially drive relatively higher load on a Jazz application. Such scenarios can lead to server debt (such as out-of-memory errors) if they are run during peak times on systems that don't have sufficient spare resources available. These scenarios are qualified or quantified to make them easier to understand. Where possible, best practices are provided that can minimize or avoid the issue altogether.
AcdntlPoet
Performance Troubleshooting - This page is the starting point for information and techniques to help diagnose performance issues when using the IBM Rational Collaborative Lifecycle Management (CLM) family of products (Rational Team Concert, Rational Requirements Composer, and Rational Quality Manager). The situations listed in this deployment wiki article are divided into two groups: situations likely to be encountered by administrators, and situations likely to be encountered by users of the CLM products. If you are new to troubleshooting, see How to start a troubleshooting assessment, and Data Collection Basics. Also, take the time to review Common Jazz hardware configuration and performance impact overview.
AcdntlPoet
Maximo: Capture diagnostic info for performance problems - Capturing WebSphere-level diagnostic information to help diagnose Maximo performance issues, by May On.
doboski
Have you ever had performance issues loading data into your location hierarchies? Or making large changes to hierarchical data? Are you reorganizing your company, adding new departments, or moving or combining others? Is it taking a long time to process these changes?
When an update is made to the hierarchy, the entire tree is rebuilt. If you are making multiple updates, the entire tree is rebuilt after each one. If you have a large tree with many layers or branches, this can be quite time consuming and frustrating while you wait for it to update. Do not fear! There are some things that can be done to make it less time consuming!
One place to look is the Admin Console: go to Cache Manager and look for System Cache Processing Mode. By default this is set to Normal.
Set this to Data Load Mode and then click Change Cache Processing. It is worth noting that while in Data Load Mode the tree is not updated, which makes processing faster because the tree is not rebuilt after every single update. Once the process is done, the tree can be rebuilt once rather than after every update.
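The savings from deferring the rebuild can be illustrated with a toy sketch. This is not TRIRIGA code; the Hierarchy class, its insert and rebuild methods, and the mode flag are hypothetical stand-ins that simply count how many full tree rebuilds each mode triggers.

```python
# Toy model (not the TRIRIGA API): compare rebuild counts in
# Normal mode vs. a deferred "data load" mode.

class Hierarchy:
    def __init__(self):
        self.nodes = []
        self.rebuilds = 0           # counts full tree rebuilds
        self.data_load_mode = False

    def insert(self, node):
        self.nodes.append(node)
        if not self.data_load_mode:
            self.rebuild()          # Normal mode: rebuild after every update

    def rebuild(self):
        self.rebuilds += 1          # stands in for an expensive tree build


def load(hierarchy, items):
    for item in items:
        hierarchy.insert(item)


normal = Hierarchy()
load(normal, range(1000))           # one rebuild per insert

batched = Hierarchy()
batched.data_load_mode = True
load(batched, range(1000))          # no rebuilds during the load
batched.rebuild()                   # a single rebuild at the end

print(normal.rebuilds, batched.rebuilds)  # → 1000 1
```

The point of the sketch: with N updates, Normal mode pays for N full rebuilds while the batched approach pays for one, which is why toggling the mode around a bulk load matters on large trees.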
You don’t necessarily have to go into the Console to set that every time you add something to a hierarchy or make an update. If you have a workflow that processes your hierarchy inserts and/or updates, you can add a custom task to turn on Data Load Mode and another to set it back to Normal after your processing is complete.
To set it to Data Load Mode, in the custom task, you would set the class name to com.
To set it back to normal, in the custom task, you would set the class name to com.
For additional information regarding custom tasks, you can reference the following wiki:
doboski
One very powerful aspect of TRIRIGA is the ability to configure it for your business needs. Of course, the biggest drawback to TRIRIGA is the fact that it is configurable! OK, that’s a bit of an inside joke here on the IBM TRIRIGA Support team. We love that people can make TRIRIGA do what they need it to do, but if they don’t follow best practices, everyone suffers. The end users suffer slowness, the administrators try to figure out what has gone wrong, the IT team struggles with hardware and architecture configuration, and the IBM Support team has to figure out what the implementer did that may be causing the problem, even though it’s probably outside the scope of support. Queries can be one of those things impacting your performance.
So I want to talk specifically about custom queries and what you can do. Depending on how you create them, custom queries can have an impact on the performance of your system. When you create an Association Filter in a custom query, avoid using –Any for the module selection and All for the business object selection. The reason to avoid those particular selections in your Association Filter is that they can cause report performance issues. This is mentioned in the 3.4.2 Reporting User Guide found here: http
You are better off specifying a specific association type in your Association Filter than using –Any.
Fields with special characters in their names, such as < >, &, * or -, can also impact your performance. Field names should not contain these characters, but it can happen. If any of your fields have special characters in their names, your reports will fail to build. So it is a good idea NOT to use special characters when creating field names, and if an existing field name contains one, it is best not to reference that field in a query. Ideally, create field names that do not use special characters at all.
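A simple check like the following can flag problem field names before they reach a report. This is a hypothetical helper, not part of TRIRIGA; the character set it rejects is just the one the post warns about (< > & * -).

```python
import re

# Characters the post warns against in field names: < > & * -
# The '-' is escaped inside the character class to avoid a range.
SPECIAL_CHARS = re.compile(r"[<>&*\-]")

def is_safe_field_name(name):
    """Return True when a field name is free of the problem characters."""
    return SPECIAL_CHARS.search(name) is None

print(is_safe_field_name("triBuildingArea"))  # True
print(is_safe_field_name("cost-center"))      # False: contains '-'
print(is_safe_field_name("size<max>"))        # False: contains '<' and '>'
```

Running a scan like this over your custom field names is a cheap way to find candidates to rename before they cause report build failures.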
It is important to remember that if you have modified your custom queries, you need to clear the Query cache from the TRIRIGA Admin Console for the change to take effect.
You will find this performance tip and many more in section 7.2.2 of the TRIRIGA Performance Best Practices guide at the link below. Support will often refer clients who report poor performance to this guide before engaging in any troubleshooting.
Chris K
Being on the IBM TRIRIGA Support team, I have seen my share of PMRs where the customer is reporting a performance issue. The intent of this blog is to provide you with enough information that you can get ahead of the game when it comes to determining the root cause of your IBM TRIRIGA performance issues.
First and foremost, I need to make it clear that performance can be affected by a wide range of components: network latency, insufficient memory or CPUs on key servers, older Java releases on the application and process servers as well as on client machines, and so on. We will focus only on the application in this blog post. Other components should be addressed by the appropriate IT groups, such as the network, database, and server support teams.
Second, you should review and understand the Best Practices for System Performance guide. The URL below will take you to a wiki page where you can get a copy of the PDF document. Share this guide with your IT teams, as it describes detailed tuning information for those other components. Ultimately, however, your IT teams may determine that their parts of the TRIRIGA infrastructure are optimally tuned. The rest of this blog post deals with actions TRIRIGA administrators can take to identify bottlenecks and begin the process of tuning the application.
URL to Best Practices for System Performance:
Third, as a TRIRIGA administrator, you should review performance on a regular basis. If your installation is used heavily, your performance tuning activities may need to be done on a weekly basis. If your TRIRIGA implementation is relatively small, then you might consider performance tuning activities on a monthly, quarterly, annual, or other basis. As the title of this blog post suggests, you should not simply assume that you need to do performance tuning when you first install the product and never again. As the database grows, performance may degrade. The application takes steps to minimize the degradation through the daily cleanup process, but that is only a small part of the tuning. Just as your car requires periodic maintenance to perform optimally, so too does the TRIRIGA application.
Your best tool for analyzing TRIRIGA performance is the performance log. In a future post, I will provide more detail on how to analyze the performance log. For now, I will simply provide instructions on how to get the data required for further analysis. Below is a step-by-step process for gathering performance-related information.
1) Log in to the Admin Console.
2) Click on Platform Logging in the Managed Objects section on the left side of the screen.
3) On the right side of the screen is a list of categories that have a hierarchical structure. Scroll down to the Performance Timings category.
4) Click on the check box immediately in front of Performance Timings. This will cause all of the subcategories to be checked.
5) Scroll down to the bottom of the page and click on the Apply button.
6) At this point, performance logging is turned on and the performance.log file should appear in the TRIRIGA directory structure in the log sub-directory.
7) Perform activities that users have indicated are poorly performing. This action, along with other actions taking place at the time of the testing, will get logged to the performance.log file.
8) Once you have completed the process of recreating the performance issue, log back into the Admin Console and turn off the performance logging.
9) Perform analysis of the resulting performance.log. (The details of this will be the subject of my next blog post.)
dmmckinn
Have you been looking for CLM-specific performance datasheets and sizing guides? Well, look no further. Below is a list of performance-related articles that our CLM folks have pulled together and published on the Deployment wiki on jazz.net.
The datasheets below are updated when performance testing shows a significant change. The 6.0 datasheets apply to the 6.0.1 and 6.0.2 releases. Note that there were major updates to the performance datasheets between the 5.x and 6.x releases.
A more complete list is available at Performance sizing guides and datasheets.
We sometimes hear in support that TRIRIGA performance is slow, with no other details given. That doesn't help us out a lot. We need to know more: what was going on at the time? Is it impacting the entire system or just one area?
Performance of TRIRIGA is a bit complex, and there is rarely any one thing that can be done to improve it. Performance can be impacted by hardware, network connectivity, software versions, queries, indexes, customizations, configurations, and more. The answer to performance concerns is often solving some combination of these things. But some things can be reviewed to help point you toward where to look and what to do.
Some things, like hardware and network connectivity, are out of our control and need to be reviewed by your own IT department or a business partner, who can perform a health check or performance analysis on your system. We provide a list of minimum hardware requirements for TRIRIGA in our installation guides. We also have a compatibility matrix showing which configurations are supported for your particular version here: http
To help diagnose the problem, TRIRIGA has performance logs that can be enabled, retrieved, and analyzed. There is a wiki page that walks you through enabling the performance log and then analyzing the output, which can be found here:
If you have performance concerns, TRIRIGA administrators should ALWAYS review best practices to ensure they are following recommendations before opening a PMR. The wiki page regarding performance can be found here: http
Once you have the performance log, you can create a pivot table to help identify where something is taking too long.
Generally, if something is taking longer than 10 milliseconds, it is worth a look. For instance, you can look at the query behind a slow report and see if it is optimized correctly. You may also need to look at your database to see if anything needs to be adjusted at the database level.
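The pivot-table idea can be sketched in a few lines: group log entries by category and total their durations to see where the time goes. This is only an illustration; the semicolon-delimited line format below (category;duration-in-ms;description) is a made-up stand-in, since real performance.log layouts vary by TRIRIGA version.

```python
from collections import defaultdict

# Hypothetical sample of performance.log lines:
# category;duration-in-ms;description
sample_log = """\
QUERY;250;Report: Open Work Tasks
QUERY;4;Report: My Projects
WORKFLOW;1200;triCalculate - Lease Cost
WORKFLOW;7;triCreate - Asset
QUERY;310;Report: Open Work Tasks
"""

# Pivot: per-category entry count and total duration.
totals = defaultdict(lambda: {"count": 0, "total_ms": 0})
for line in sample_log.splitlines():
    category, ms, _desc = line.split(";", 2)
    totals[category]["count"] += 1
    totals[category]["total_ms"] += int(ms)

# Print one summary row per category, like a pivot table would show.
for category, t in sorted(totals.items()):
    avg = t["total_ms"] / t["count"]
    print(f"{category}: {t['count']} entries, {t['total_ms']} ms total, {avg:.0f} ms avg")
```

Sorting the summary by total or average time quickly surfaces the categories (a specific report query, a workflow) worth investigating first.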
It should be noted that if you are using multiple servers in your environment, you need to access the console from the server that is having the problem. If you have three UI JVMs and two process servers, where one of the process servers is the workflow agent and you know you are having issues with workflow performance, then access the console from the server running the WF agent.
So using the performance log can help you identify what could be taking so long, and if you open a PMR, that information helps us out as well. It is important to remember that TRIRIGA Support is committed to every client's success; however, performance is not typically covered as part of the support agreement. Our goal is to help point you in the right direction, but since most performance inhibitors are unrelated to TRIRIGA itself, the support team cannot commit to resolving performance-related issues. We may advise you, but the resolutions are often up to you.