With the changes delivered in IBM TRIRIGA Platform 3.5.1, there is now a better way to track changes to objects in TRIRIGA, along with a new naming convention for Object Labels and Revisions.
Here is a great link providing information on this change for the TRIRIGA Platform, along with additional links to related resources:
One very powerful aspect of TRIRIGA is the ability to configure it for your business needs. Of course, one of the biggest drawbacks of TRIRIGA is the fact that it is configurable! OK, that's a bit of an inside joke here on the IBM TRIRIGA Support team. We love that people can make TRIRIGA do what they need it to do, but if they don't follow best practices, everyone suffers. End users suffer slowness, administrators try to figure out what has gone wrong, the IT team struggles with hardware and architecture configuration, and the IBM Support team has to figure out what the implementer did that may be causing the problem, even though it's probably outside the scope of support. Queries can be one of the things impacting your performance.
So I want to talk specifically about custom queries and what you can do. Depending on how you create them, custom queries can have an impact on the performance of your system. When you create an Association Filter in a custom query, you want to avoid using –Any for the module selection and All for the business object selection. The reason to avoid those particular selections is that they can cause report performance issues. This is mentioned in the 3.4.2 Reporting User Guide found here: http
You are better off specifying a specific association type in your Association Filter than using –Any.
Fields with special characters in their names, such as < >, &, * or -, can also impact your performance. While field names should not contain these characters, it can happen. If any of your fields have special characters in their names, your reports will fail to build. So it is a good idea NOT to use special characters when creating field names, and if you already have such fields, it is best not to reference them in a query.
It is important to remember that if you have modified your custom queries, you need to clear the Query cache from the TRIRIGA Admin Console for the change to take effect.
You will find this performance tip and many more in section 7.2.2 of the TRIRIGA Performance Best Practices guide at the link below. Support will often refer clients who report poor performance to this guide before engaging in any troubleshooting.
IBM TRIRIGA Microsoft SQL Server database Best Practice recommendations to be reviewed by an MS SQL Server DBA
Provide suggestions and recommendations for the IBM TRIRIGA MS SQL Server database environment. Also provide monitoring and troubleshooting steps to quickly discover and prevent performance degradation and instability.
This document is in response to performance-related issues occurring in TRIRIGA Application installations. Key areas to address:
MS SQL Server performance problems can be difficult to pinpoint and require inspection of many aspects of the environment. There is no magic button or specific step-by-step procedure to locate every performance issue. What exists is a set of guidelines for diagnosing and troubleshooting common performance problems. The purpose of this document is to provide a general methodology for diagnosing and troubleshooting MS SQL Server database instability and performance problems in common scenarios.
Configuration, tuning, and sizing issues with MS SQL Server may lead to various instability and performance issues from the database end. The instructions listed below are to be reviewed, confirmed, and applied by the customer's DBA whenever necessary.
Typical TRIRIGA customer findings
There are basic best practices that should be implemented on production MS SQL Servers.
SQL Server Best Practices
SQL Service: The account that runs the SQL Server Service should be included in the two Local Security Policy Groups below:
“Lock Pages in Memory” – Prevents SQL Server allocated memory from being swapped to disk.
“Perform volume maintenance tasks” - Enables Instant File Initialization: the system will not zero-initialize storage it allocates. Instead, the storage remains uninitialized until SQL Server writes data to it, preventing the server from zero-filling storage.
1. Click Start.
2. Go to Administrative tools | Local Security Policy.
3. Click Local Policies, then click User Rights Assignment.
4. Scroll down to “Lock pages in memory”, and double-click it.
5. Click Add User or Group, and enter the SQL Server service account name.
6. Scroll down to “Perform volume maintenance tasks”, and double-click it.
7. Click Add User or Group, and enter the SQL Server service account name.
Server Memory: Change “Max Server Memory” to coincide with the chart below. This setting controls how much memory can be used by the SQL Server Buffer Pool. If you don’t set an upper limit for this value, other parts of SQL Server and the operating system can be starved for memory, which can cause instability and performance problems. These settings are for x64 on a dedicated database server running only the DB engine.
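The memory chart itself is not reproduced here. As a hedged illustration only, on a hypothetical dedicated server with 64 GB of RAM, one might leave roughly 8 GB for the operating system:
-- Example values only; derive the actual number from the sizing chart for your server
EXEC dbo.sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC dbo.sp_configure 'max server memory (MB)', 57344;
RECONFIGURE WITH OVERRIDE;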
“Max Degree of Parallelism”: Set to 4. (Currently set at 0, unlimited parallelism.) Takes effect immediately. This will limit parallelism to 4 threads and decrease CXPACKET waits.
EXEC dbo.sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC dbo.sp_configure 'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;
TempDB: Add data files to match the number of cores. Running with a single tempdb data file can create unacceptable latch contention and I/O performance bottlenecks. To mitigate these problems, allocate one tempdb data file per processor core. (This is different from the practice for user-defined database files, where the recommendation is 0.5 to 1 data file per core.) Change the growth rate to a fixed increment. Turn on AutoStats.
--Simple script to generate tempdb file SQL; change the path for your environment. Creates 2GB files with 512MB growth. (Generate results as text, copy, paste, run. Takes effect after SQL restart.)
set nocount on
declare @filename int
select @filename = 1
while @filename <= 32
begin
    select 'ALTER DATABASE [tempdb] ADD FILE ( NAME = N''tempdev' + convert(varchar(10), @filename)
        + ''', FILENAME = N''T:\TempDB\tempdev' + convert(varchar(10), @filename)  -- placeholder path
        + '.ndf'', SIZE = 2048MB, FILEGROWTH = 512MB );'
    select @filename = @filename + 1
end
In Windows 2008, networking should be set to “Maximize settings for networked applications”.
Database Best Practices
Autogrow: Change Autogrow on data and log files; increase the growth amount to 500MB+ per file, depending on rate of growth.
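As a sketch, assuming a logical data file named TRIDATA_Data (check sys.database_files for your actual names):
-- Set a fixed 512MB growth increment; database and file names are placeholders
ALTER DATABASE [TRIDATA] MODIFY FILE ( NAME = N'TRIDATA_Data', FILEGROWTH = 512MB );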
Auto Update Stats: Change to Async. Prevents SQL from waiting on stats to update.
The optimizer will initiate a statistics update operation when about 20% of the rows in an index have changed. Normally, whatever query was being optimized at the time will then have to wait until the statistics are updated before it can finish the optimization phase and begin executing. With the 'async' option, a separate thread will update the stats, and the query that was being optimized will continue and use the older stats when its plan is generated.
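Assuming your TRIRIGA database is named TRIDATA (a placeholder), the asynchronous option can be enabled like this:
-- Enable asynchronous auto-update of statistics (database name is a placeholder)
ALTER DATABASE [TRIDATA] SET AUTO_UPDATE_STATISTICS_ASYNC ON;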
Update Statistics: Consider running on a nightly or weekly basis with fullscan.
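One possible way to script this uses the undocumented but widely available sp_MSforeachtable helper (a supported alternative is a cursor over sys.tables):
-- Update statistics with FULLSCAN on every user table in the current database
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN;';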
IBM TRIRIGA specific recommendations
Increase Tridata data files to 0.5 to 1 data file per core.
All files should have the exact same size and growth settings.
Suggestion to equalize the current server:
Create a new filegroup and add 4-8 files large enough to hold the largest table in Tridata.
Rebuild the table's clustered index on the new filegroup with DROP_EXISTING. This allows SQL to stripe the table data and indexes across the newly created files without having to unload and reload the database. Repeat as needed for similar groupings, perhaps another group for reference and configuration tables, and so on, until the Primary filegroup can be reduced. A sketch of this approach appears below.
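A minimal sketch of the approach; the database, filegroup, file paths, table, and index names below are all placeholders:
-- 1. Create the new filegroup and add files (repeat ADD FILE for 4-8 files of equal size)
ALTER DATABASE [TRIDATA] ADD FILEGROUP [FG_LARGE];
ALTER DATABASE [TRIDATA] ADD FILE
    ( NAME = N'FG_LARGE_1', FILENAME = N'T:\Data\FG_LARGE_1.ndf', SIZE = 10240MB, FILEGROWTH = 512MB )
    TO FILEGROUP [FG_LARGE];
-- 2. Rebuild the clustered index onto the new filegroup so SQL stripes the data across its files
--    (use CREATE UNIQUE CLUSTERED INDEX if the index backs a primary key)
CREATE CLUSTERED INDEX [IX_MyLargeTable] ON dbo.MyLargeTable (MyKeyColumn)
    WITH (DROP_EXISTING = ON) ON [FG_LARGE];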
Snapshot Isolation to reduce blocking:
Consider testing with SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT:
ALTER DATABASE << My Database >> SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE << My Database >> SET READ_COMMITTED_SNAPSHOT ON
-- Then, per session, for example:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT EmployeeID, LastName, FirstName, Title FROM << My Table >>
Keep a close watch on unused indexes and missing indexes.
-- Missing indexes: generates CREATE INDEX suggestions from the missing-index DMVs
SELECT TOP 25
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS index_advantage
    , 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT + ' ( ' + ISNULL(mid.equality_columns, '')
        + CASE WHEN mid.inequality_columns IS NULL THEN ''
               ELSE CASE WHEN mid.equality_columns IS NULL THEN ''
               ELSE ',' END + mid.inequality_columns END + ' ) '
        + CASE WHEN mid.included_columns IS NULL THEN ''
               ELSE 'INCLUDE (' + mid.included_columns + ')' END AS create_index_statement
    , migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_groups mig WITH (nolock)
INNER JOIN sys.dm_db_missing_index_group_stats migs WITH (nolock) ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid WITH (nolock) ON mig.index_handle = mid.index_handle
INNER JOIN sys.objects WITH (nolock) ON mid.OBJECT_ID = sys.objects.OBJECT_ID
WHERE (migs.group_handle IN
    (SELECT TOP (500) group_handle
     FROM sys.dm_db_missing_index_group_stats WITH (nolock)
     ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC))
ORDER BY 2 DESC , 3 DESC
-- Unused indexes: reads versus writes per nonclustered index; a low reads_per_write suggests a drop candidate
SELECT o.name, indexname = i.name, i.index_id
    , reads = s.user_seeks + s.user_scans + s.user_lookups
    , writes = s.user_updates
    , rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
    , CASE WHEN s.user_updates < 1 THEN 100
           ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
      END AS reads_per_write
    , 'DROP INDEX ' + QUOTENAME(i.name)
        + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(o.name) AS 'drop statement'
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o on s.object_id = o.object_id
INNER JOIN sys.schemas c on o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    AND s.database_id = DB_ID()
    AND i.type_desc = 'nonclustered'
    AND i.is_primary_key = 0
    AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads
SQL Server Monitoring
Management Data Warehouse
The management data warehouse should be turned on for monitoring SQL Server. This tool provides baseline information and allows for timeline snapshots of performance data, near real-time performance reporting, and query history.
SQL Black Box Trace
Consider turning on the MS SQL Server black box trace to run at startup. This trace data can be viewed when blocking occurs without having to start a new trace. It uses two rolling 25MB files. The data can be reviewed in Profiler and also sent off to application providers for further analysis.
SELECT * FROM fn_trace_getinfo (2);
SELECT * FROM fn_trace_gettable (
    'C:\Program Files\Microsoft SQL Server\...\blackbox.trc', default);  -- placeholder: complete the path for your instance
Autostart Blackbox with two rolling logs:
CREATE PROCEDURE StartBlackBoxTrace
AS
BEGIN
    DECLARE @TraceId int
    DECLARE @maxfilesize bigint
    SET @maxfilesize = 25
    -- Option 8 = TRACE_PRODUCE_BLACKBOX; @tracefile must be NULL for a blackbox trace
    EXEC sp_trace_create @TraceId OUTPUT,
        @options = 8,
        @tracefile = NULL,
        @maxfilesize = @maxfilesize
    EXEC sp_trace_setstatus @TraceId, 1
END
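To have the procedure run at startup as the section title suggests, it must be created in the master database and marked with the documented sp_procoption call:
-- Mark the proc (created in master) to execute automatically at SQL Server startup;
-- the 'scan for startup procs' server option must also be enabled
EXEC sp_procoption @ProcName = N'StartBlackBoxTrace', @OptionName = 'startup', @OptionValue = 'on';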
The topic of chasing SQL performance issues has been documented by various SQL “gurus”, but all source the same document. It was written for SQL 2005, but nothing about the methodology has changed. This Microsoft document covers the areas of concern, how to determine the cause, and possible resolutions.
Highlights from the documentation:
Discover a CPU bottleneck; investigate further if "runnable_tasks_count" is consistently above zero:
select scheduler_id, current_tasks_count, runnable_tasks_count
from sys.dm_os_schedulers
where scheduler_id < 255
What is running the most and recompiling the most; start at the top and work down:
select top 25 sql_text.text, qs.sql_handle, qs.plan_generation_num, qs.execution_count, sql_text.dbid, sql_text.objectid
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as sql_text
where qs.plan_generation_num > 1
order by plan_generation_num desc
Create and use the sp_block proc:
create proc dbo.sp_block (@spid bigint=NULL)
-- This stored procedure is provided "AS IS" with no warranties, and
-- confers no rights.
-- Use of included script samples are subject to the terms specified at
-- T. Davidson
-- This proc reports blocks
-- 1. optional parameter @spid
as
select t1.resource_type, 'database' = db_name(resource_database_id),
    'blk object' = t1.resource_associated_entity_id,
    t1.request_mode, t1.request_session_id, t2.blocking_session_id
from sys.dm_tran_locks as t1, sys.dm_os_waiting_tasks as t2
where t1.lock_owner_address = t2.resource_address
    and (@spid is null or t1.request_session_id = @spid)
Check the “Analyzing operational index statistics” section in the appendix for a script to evaluate index usage.
Monitor for Excessive compilation and recompilation
SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Recompilations/sec
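These counters live in PerfMon, but as a convenience the raw cumulative values can also be read with T-SQL; note they are running totals, so per-second rates must be derived by sampling twice and taking the difference:
-- Cumulative counter values from the DMV (note the DMV spells it 'SQL Re-Compilations/sec')
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec');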
SQL Server: Buffer Manager object
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number Lazy writes/sec
PhysicalDisk Object: Avg. Disk Queue Length represents the average number of physical read and write requests that were queued on the selected physical disk during the sampling period. If your I/O system is overloaded, more read/write operations will be waiting. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, then you might have an I/O bottleneck.
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. As a guideline:
Less than 10 ms - very good
Between 10 - 20 ms - okay
Between 20 - 50 ms - slow, needs attention
Greater than 50 ms – Serious I/O bottleneck
Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk. Please refer to the guideline in the previous bullet.
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that if this value is greater than 50 percent, it represents an I/O bottleneck.
Avg. Disk Reads/Sec is the rate of read operations on the disk. You need to make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
Avg. Disk Writes/Sec is the rate of write operations on the disk. Make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
Microsoft and 3rd-party supporting information
Troubleshooting Performance Problems in SQL Server - http
SQL 2008 Management Data Warehouse (performance monitoring) - http
SQL Server 2005 Waits and Queues & SQL Server Best Practices Article - http
By using DXL, however, you can do this task quickly. Here is a sample of code to help you perform it:
Have you seen the Rational Team Concert 6.0.2 videos? The following videos provide information about features available in version 6.0.2. They were posted a few months ago, but we thought it would be good to bring them back into the spotlight.
The full playlist which include all of the following videos is available in Coll
For further information about other capabilities available in CLM version 6.0.2, you may want to check out Highlights of 6.0.2.
Maximo JSON API: Getting Started -
And don't miss the other three follow-on videos for more Maximo JSON API lessons and information:
The IBM TRIRIGA CAD Integrator tool is extremely helpful in many client environments. For many clients, the space records are created and managed via the CAD drawing which means that errors during the process could seriously hamper your productivity. Today I will review one particular error and describe how you can use the information in that error to start your own investigation into the issue.
Below is an example of an error that can occur during the Sync Full process in CAD Integrator. Values in the error have been changed to be generic; your error message will contain numbers specific to your database.
2016-05-20 11:04:51,738 ERROR [com
First, read the message so you can get a general idea of the problem. There are two key parts of the message. You can see that the sync process failed. The information immediately after "Sync failed." identifies the path within the code that raised the error. After that path information is the more critical piece needed for further analysis, specifically: "Attach associated object could not find the association to use." From this we can see that the sync failed because one of the objects appears to be missing an expected association to another record.
Next, we are going to get into the details of the error.
The first value, SSSSSSSS, is actually the spec_id for a triSpace record. You can use that value to query the IBS_SPEC table to determine the name of the space. The SQL statement to find this information is:
select * from ibs_spec where spec_id = SSSSSSSS
This will display all of the fields in the IBS_SPEC table. Besides the name of the space, you will also be able to see the location information which will allow you to get to the underlying record in TRIRIGA.
The next value, PPPPPPPP, is the spec_id for the triPeople record. We can reuse the select statement above, simply substituting the PPPPPPPP value for the SSSSSSSS value, as shown below.
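For example:
select * from ibs_spec where spec_id = PPPPPPPP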
This will give you the information you need to look at the associations on the space record as well as the people record. If the Sync Full process does NOT fail on a different drawing, you can use a space record and associated person record from that drawing as a guide to determine the missing association. You should then be able to create the required association and run the Sync Full process again. It is important to note that the Sync Full process is more inclusive than the Sync Area process with regard to the various records associated to the space. If Sync Full fails and Sync Area does not, you will need to review the error in the ci.log file as described in this blog post to determine the records that are causing the error.
IBM TRIRIGA - Secure Sockets Layer (SSL) between the TRIRIGA Application & Database is not supported
We were recently asked for guidance on setting up SSL (Secure Sockets Layer) between the TRIRIGA Application and the TRIRIGA Database. Although this may be technically possible, it is not recommended and is not supported by IBM TRIRIGA Support.
If you need enhanced security for your IBM TRIRIGA solution, please contact IBM TRIRIGA Support for assistance. We will work with you to offer supported solutions that meet your needs.
(CK01) Third party considerations
IBM TRIRIGA support cannot help with 3rd-party installation, configuration, setup, and troubleshooting. Please consult your 3rd-party vendor's support for help.
For other IBM products (IBM WebSphere & IHS - IBM HTTP Server), kindly route your questions & issues via PMR to the specific teams. Make sure to state clearly in the description that you want support for that specific product, and properly classify the PMR.
(CK02) Sizing recommendations
IBM TRIRIGA support cannot help with specific sizing recommendations, since this goes beyond the support scope and requires a great deal of other relevant business information: business needs, business policies, business strategy, business resources, business growth plan, etc.
IBM TRIRIGA has the following Wiki page for some relevant sizing considerations when working with your sizing process:
If customers do need sizing recommendations, IBM TRIRIGA support needs to point them to IBM Services (GBS) to address their request and needs. For these cases, please contact IBM & You (Sales) using the contact information available on this IBM page:
- Directory of worldwide contacts - http
(CK03) Preparing the environment - prerequisites
IBM TRIRIGA has minimum environment requirements that may need to be reviewed in terms of adapting to business needs, growth, and workload. Although a minimum configuration and set of resources are required for running our product, this may not be enough, and a further sizing assessment may need to be engaged - see item CK02 above.
Kindly review the following IBM TRIRIGA documents for the hardware requirements:
- Overview of hardware configuration - http
Kindly review the following IBM TRIRIGA documents for the software requirements, meaning the compatibility requirements needed for the 3.5.1 version:
(CK04) Upgrading the Platform (framework)
If you are upgrading from old Platform versions 2.5.x, 2.6.x, or 2.7.x, please refer to this document:
- Upgrade to TRIRIGA Platform 3.5.x (platform only) if upgrading from TRIRIGA Application Platform version 2.5.x, 2.6.x, or 2.7.x; and TRIRIGA application version 9.x - http
If you are upgrading from more recent versions (> 2.7.x), please refer to this document:
It is important to keep a good backup copy of your database before and after the upgrade process. There will be post-upgrade tasks listed in the Upgrading guide; this includes checking the installation and upgrade logs for errors and relevant warnings.
Note that IBM TRIRIGA Platform upgrades and Fix Pack deployments are cumulative: they comprise all fixes and enhancements from previous releases as of the cut-off date. Check the respective Release Notes and/or the respective IBM TRIRIGA Fix Pack README files for more information:
For downloading the 3.5.1 required software, please review this link:
- IBM TRIRIGA 10.5.1 and IBM TRIRIGA Application Platform 3.5.1 Download Instructions - http
For downloading any available IBM TRIRIGA Fix Pack (FP) for your release, along with its respective README file (that will contain the FP install instructions), please review:
- How can I download a IBM TRIRIGA Platform Fix Pack? - http
About questions on the regular IBM WebSphere product (ND & standalone) versus the IBM WebSphere Liberty profile, see below:
i. Installation instructions for WebSphere Liberty Profile: "Installing TRIRIGA Application Platform on a WebSphere Application Server Liberty Core profile" - http
ii. Questions on how to choose: "Traditional WAS or Liberty: how to choose?" - http
iii. Note that IBM TRIRIGA is fully compatible with WebSphere Liberty; our installer already includes its code, and installing WAS Liberty along with the IBM TRIRIGA product is only a check mark. WAS Liberty brings High Availability (HA) possibilities (see item CK07 below), is easier to implement and maintain, and can support most IBM TRIRIGA running, tuning, and workload scenarios, unless you have a very specific business need in place (a really rare scenario; see item ii above).
Keep the IBM TRIRIGA support team aware of your upgrade and any relevant deployment (regarding the IBM TRIRIGA solution, such as other components and hardware changes). Send us a note (through any of the IBM TRIRIGA support engineers who have previously worked with you) and provide us the following:
Q01) About any recent deployments (software / hardware) related to IBM TRIRIGA solution:
Q01-a) What are the from/to versions for the recent upgrade or deployment you've done?
Q02) About any planned deployments (software / hardware) related to IBM TRIRIGA solution:
Q02-a) What are the from/to versions to be upgraded / deployed / changed?
With the set of information above, we will know where you are, where you have been in terms of previous deployments, and where you intend to be in the future, so that we can prepare ourselves to provide you support (for instance, on-call support during a go-live) when necessary. I'd suggest you include the Q01 and Q02 questions & answers in all PMRs you create from now on. This will speed up our support service cycle and provide you better assistance.
(CK05) Upgrading the IBM TRIRIGA Portfolio Data Manager (Application):
For an IBM TRIRIGA Portfolio Data Manager (Application) upgrade, note this is NOT a cumulative process, meaning you do need to install each one of the available versions from your original one to the target one. More details are available in the following IBM Support technote:
- What are the steps for upgrading IBM TRIRIGA Application (Portfolio) to recent versions? - http
Keep the IBM TRIRIGA support team aware of your upgrade and any relevant deployment (regarding the IBM TRIRIGA solution, such as other components and hardware changes). Send us a note (through any of the IBM TRIRIGA support engineers who have previously worked with you) and provide us the same set of questions & answers addressed above as Q01 and Q02 in item CK04.
(CK06) Tuning your IBM TRIRIGA product - product requirements
It is a requirement that customers tune their IBM TRIRIGA installations following the IBM TRIRIGA Best Practices for System Performance recommendations. Some of these are initial tuning settings and may need to be reviewed and changed due to specific sizing concerns or changes in system workload and/or topology.
There may be other required tuning, recommendations and assessments needed to be addressed for better performance and stability of your system.
All these recommendations and initial tuning information are addressed here:
(CK07) IBM TRIRIGA High Availability considerations
As part of your capacity planning and scalability assessments, you may need to change your topology to accommodate your business needs and required High Availability (HA). See the following IBM TRIRIGA support documents about this subject:
- How can I have multiple IBM TRIRIGA application servers (user sessions) working on a Load Balancing and a High Availability implementation? - http
- How can I have multiple IBM TRIRIGA process servers (agents runs) working on a Load Balancing and a High Availability implementation? - http
You may even want to implement BROS (BIRT Report Only Server), meaning you have all your BIRT reports running on a separate server. See more information here:
- Advanced reporting in IBM TRIRIGA - http
Note that even if you are using the 3.5.1 product with the WebSphere Liberty profile, we do support High Availability for this implementation. See below:
- High Availability with Liberty - http
(CK08) IBM TRIRIGA SSO & Seamless Login information
The IBM TRIRIGA product does support SSO and Seamless Login implementations, but most of the required installation and configuration is actually done at the 3rd-party layer. See more information here:
- What is necessary to implement Single Sign-On (SSO) over IBM TRIRIGA product? - http
- Does IBM TRIRIGA support Seamless Sign On (without challenging the Internet Browser for credential information)? - http
- What are the methods of inserting the user name into an HTTP header supported by IBM TRIRIGA SSO-enabled environments? - http
- What are the available SSO properties for IBM TRIRIGA? - http
(CK09) IBM TRIRIGA TLS & SSL (HTTPS) support
IBM TRIRIGA supports TLS & SSL implementation, as properly documented here:
- Does IBM TRIRIGA support HTTPS, SSL and TLS? - http
But note that this is not IBM TRIRIGA configuration and setup at all. It happens at the Application Server (WebSphere, WebLogic) and Web Server layers, and is not handled or controlled by our code. If you need support for that, contact the vendor support for those products.
Just as a sample of how TLS & SSL could be configured on WebSphere, please review the following (it is just a sample, as is; your environment may have other requirements depending on your versions, topology, and solutions):
- Enable HTTPS in WebSphere for Maximo, SCCD, TSRM, and TRIRIGA - http
IBM TRIRIGA - When viewing associations on associations tab, the lines connecting objects are not solid.
The cause of this is that the rendering of Native SVG graphics is handled on the client side in the client's web browser, and different browsers render the Native Scalable Vector Graphics files in different ways. This can result in the same SVG file looking different depending on the browser chosen.
If you are reviewing TRIRIGA associations and are having trouble viewing them, try a different web browser that renders Native SVG in a way that better meets your requirements. If this is not an option, you can also contact the browser vendor's support for additional assistance.
IBM TRIRIGA - What are the differences between Fixpacks and Limited Availability Fixes and what are best practices for fixes?
Occasionally we have IBM TRIRIGA customers who need clarification on the differences between Fixpacks and Limited Availability fixes. I want to share this information with you and state best practice guidelines here:
GENERAL AVAILABILITY FIXPACK (GA FIX)
GA Fixpacks deliver product defect fixes that have undergone a full development release cycle and the most extensive QA testing of all maintenance releases.
These fixes are delivered for any issue reported either internally or externally regardless of severity. Fixpacks occasionally deliver minor functional enhancements and modifications to add or update supported platforms, browsers, databases, middleware, etc.
Fixpacks are cumulative and each new fixpack contains all fixes from all previous fixpacks/interim fixes for that release.
LIMITED AVAILABILITY FIX (LA FIX)
An LA Fix is an unofficial mechanism to deliver emergency fixes for severe product issues that cannot be delayed until the next regular maintenance delivery. LA Fixes also go by the names “1-off” or “1-off Hotfix” but they all mean a single APAR fix delivered directly to a customer from Support.
Conditions that may warrant an LA Fix
Risks associated with LA Fixes
BEST PRACTICES FOR FIXES
It is perfectly acceptable to take an LA FIX to address an issue when warranted. However, the risk associated with taking an LA FIX should always be weighed against the perceived benefits. If at all possible, it is always best to wait for a fully tested GA Fix. Also, if you do take an LA Fix, it should only remain in place until a GA Fix containing the fix needed is available. At that point, the GA Fix should be applied.
Being on the IBM TRIRIGA Support team, I have seen my share of PMRs where the customer reports a performance issue. The intent of this blog is to provide you with enough information that you can be ahead of the game when it comes to determining the root cause of your IBM TRIRIGA performance issues.
First and foremost, I need to make it clear that performance can be affected by a wide range of components: network latency, insufficient memory or CPUs on key servers, older Java releases used by the application and process servers as well as on client machines, and the list goes on. We will focus only on the application in this blog post. Other components should be addressed by the appropriate IT groups, such as the network, database, and server support teams.
Second, you should review and understand the Best Practices for System Performance. The URL below will take you to a wiki page where you can get a copy of the PDF document. Share this guide with your IT teams, as it describes detailed information regarding the tuning of those other components. Ultimately, however, your IT teams may determine that their parts of the TRIRIGA infrastructure are optimally tuned. The rest of this blog post will deal with actions TRIRIGA administrators can take to identify bottlenecks and begin the process of tuning the application.
URL to Best Practices for System Performance:
Third, as a TRIRIGA Administrator, you should review performance on a regular basis. If your installation is used heavily, your performance tuning activities may need to be done on a weekly basis. If your TRIRIGA implementation is relatively small, then you might consider performance tuning activities on a monthly, quarterly, annual, or some other basis. As the title of this blog post suggests, you should not simply assume that you need to do performance tuning when you first install the product and never do it again. As the database grows, performance may degrade. The application takes steps to minimize the degradation through the daily cleanup process, but that is only a small part of the tuning. Just as your car requires periodic maintenance to perform optimally, so too does the TRIRIGA application.
Your best tool for analyzing TRIRIGA performance is the performance log. In a future post, I will provide more detail on how to analyze the performance log. For now, I will simply provide instructions on how to get the data required for further analysis. Below is a step by step process to follow for gathering performance related information.
1) Login to the Admin Console
2) Click on Platform Logging in the Managed Objects section on the left side of the screen.
3) On the right side of the screen is a list of categories that have a hierarchical structure. Scroll down to the Performance Timings category.
4) Click on the check box immediately in front of Performance Timings. This will cause all of the subcategories to be checked.
5) Scroll down to the bottom of the page and click on the Apply button.
6) At this point, performance logging is turned on and the performance.log file should appear in the TRIRIGA directory structure in the log sub-directory.
7) Perform activities that users have indicated are poorly performing. This action, along with other actions taking place at the time of the testing, will get logged to the performance.log file.
8) Once you have completed the process of recreating the performance issue, log back into the Admin Console and turn off the performance logging.
9) Perform analysis of the resulting performance.log. (The details of this will be the subject of my next blog post.)
Have you ever had an IBM TRIRIGA L2 support person say something like "I cannot recreate the problem in an As Delivered environment."? Have you ever wondered what "As Delivered" really means? If so, this is the blog post for you!
First, some fundamentals. IBM TRIRIGA is actually composed of two components. There is the application version (e.g., 10.3, 10.3.1, 10.4, 10.4.2.1, etc.) and there is the platform version (e.g., 3.3, 3.3.1, 3.4, 3.4.1, etc.). It is important to understand the difference between the two versions. The platform version is the base upon which the application version is built. This is why you could not have a 10.4 application version running on a 3.3 platform. For a 10.4 application, the platform must be at least 3.4. The only exception to this is with regard to the 4th numeric position of the version number. It is possible that there is a fix pack for the application only and no changes were required of the platform to implement that change. Another way to keep the application and platform versions straight in your mind is to remember that you can customize the application, but you cannot customize the platform.
Second, there are several tests the L2 engineer may perform while trying to replicate an issue. The first thing is to test to see if the issue exists in a later application release. Often this means that we are also testing with a later platform release. As mentioned previously, the only exception is if there is a fix pack for the application only, no platform fix was required to implement the application fix. The L2 engineer should also check to see if they can recreate the problem using the same application version as you. This means that we may be testing with a platform release earlier than the one you have in your environment. This is where we start getting into the realm of the As Delivered terminology.
Suppose you have a problem with your application and you know that your base version is 10.3. You may have customized the application, but the basis of that customization was the 10.3 application version. Based on this scenario, the L2 engineer would test with an environment that is on the 10.3 application AND the 3.3 platform. This is what we mean when we say "As Delivered." The installer for the 10.3/3.3 IBM TRIRIGA Application Platform, when installing a new environment, would result in an "As Delivered" 10.3/3.3 environment. In your environment, you may have a 10.3 application but a 3.4 (or later) platform. If we could not replicate the problem in an As Delivered 10.3/3.3 environment, we would then check to see if we can replicate the problem in a 10.4/3.4 As Delivered environment. If we cannot replicate the problem in that environment, that would be a strong indicator that the problem may have been resolved in the 10.4 application code. If we can replicate the problem in that 10.4/3.4 As Delivered environment, then it is possible that the platform code introduced the problem.
Now that you know what we mean by "As Delivered", you might want to consider creating some small environments that represent an As Delivered state based on your application version as well as your platform version. In a future blog, I will talk about the importance of setting up these types of environments.
There is a potential issue with the shutdown process for TRIRIGA. This is specific to versions prior to TRIRIGA 3.5.1 running under Windows with the WebSphere Liberty Profile (WLP). We have found that the SHUTDOWN.BAT file may not end the session. If you encounter this issue, you can resolve it by adding the path as described below.
In TRIRIGA 3.5.1, both RUN.BAT and SHUTDOWN.BAT include a step to change directory to the location of the server.bat file, as below.
(Note: In notepad there are no spaces or carriage returns between the rows.)
@echo offset JAVA
It looks like this in other viewers:
echo JAVA_HOME : %JAVA_HOME%
echo CLASSPATH : %CLASSPATH%
cd /d /hom
server.bat start tririgaServer
cd /d /hom
server.bat stop tririgaServer
However, in previous versions that path is not included:
TRIRIGA 3.4.2 (shutdown.bat)
server.bat stop tririgaServer
TRIRIGA 3.5 (shutdown.bat)
server.bat stop tririgaServer
If you encounter an issue where the shutdown.bat file is not working, you can resolve it by adding the path to the server.bat file, similar to the example below.
cd /d /your_path_to tririga/wlp/bin
server.bat stop tririgaServer
There you are, planning an application and platform upgrade for your IBM TRIRIGA installation, and you think everything is working just fine. You roll out to production and for the first few days afterwards, all is quiet. Suddenly, out of the blue, you notice that the floor plan graphics no longer render. Where the graphics should appear, you do not even see the No Graphic Available message. What you do notice is that "Error 2" is displayed in the lower right-hand corner, and the rest of the space where the graphic would normally appear is blank. This blog post will address one of the potential causes of this problem and also a means of finding the problem before you upgrade your production environment.
In order to determine the root cause, we need to know what errors are referenced by that "Error 2" text in the graphics section. Clicking on the text will open a pop-up box that contains the actual error messages (two of them, hence the "Error 2") related to the issue. That alone will not help the support team diagnose the problem. The errors should indicate an MID number. That MID number should also be found in the server log with a time stamp matching when the graphic should have been rendered. In an issue I was working on for a client, we determined from that information that there was a problem with the graphic layer configuration records. In particular, they were missing their corresponding IBS_SPEC entries. This required removing the records from the database using SQL run against the database back end; the application is unable to delete the records because it relies heavily on the IBS_SPEC table entries to perform the actions necessary to delete them. Once the records were removed and the layer configuration records rebuilt, the graphics once again started to display.
Great, that fixed the immediate problem, but how do you prevent this from happening in the future? When planning your upgrade, you need to ensure that all graphical functionality is tested before making the decision to roll the upgrade out to your production environment. Upgrade plans should always include CAD Integrator testing. In older releases of the IBM CAD Integrator tool and the IBM TRIRIGA Platform, there was a great deal of backward compatibility. As a result, most clients did not include steps to ensure that graphical elements were unaffected and, instead, focused on the business process logic to ensure that the non-graphical data was correct. With the later releases of IBM TRIRIGA (3.3 or later, to be more precise), the CAD Integrator tool became much more tightly coupled with the platform release. Best practice for upgrade planning should now include upgrading the CAD Integrator tool to the release that was created specifically for the platform release to which you are upgrading. As a result of adding the CAD Integrator to your upgrade plans, you should naturally expect to add more time for the validation of all CAD-related graphics that appear in your IBM TRIRIGA instance.
If you found this blog post useful, be sure to 'like' the post. If you have any comments about this blog, please provide that feedback in the comments section.