Inspections 7.6.1 - Reporting Multiple Inspection Results - The new Maximo 7.6.1 release of Inspections introduces the capability to report multiple inspection results at the same time, to better support operator rounds and sequenced inspections. Check it out!
First off, some background. I have been using and troubleshooting Data Integrator related issues for several years. It's a good tool, and quite handy for importing and updating data in the IBM TRIRIGA application. That being said, Support has received quite a bit of traffic with regard to using this tool and so I thought I would provide my take and open a dialogue for commentary.
The most pervasive issue I have seen overall has been problems with the source sheet. Typically, I start my Data Integrator process by building a spreadsheet with the column headers using the TRIRIGA application. Starting from the Data Integrator interface, reached from Home > Tools > Data Integrator, you can use the Create Header action to generate a base sheet. It's as easy as selecting the fields you want to use and exporting a sheet to begin working with. You also have the option to simply open Excel and type in the fields you want to use.
A known issue is encountered when copying and pasting from the application into Excel. In fact, this is one of the key points I want to make. Copying data from the TRIRIGA application or from any other tool, into Excel, will almost invariably introduce formatting into the spreadsheet. This is the most common cause of issues with the upload process. HTML formatting information will cause problems with the upload.
The method I have found that aids in getting around this restriction is to use the Copy/Paste VALUES option when pasting data into the spreadsheet. This removes the formatting tags, and allows for a clean upload. At times, I have experimented with copying and pasting an entire spreadsheet into a new sheet, again using the VALUES option, to clean up an upload sheet prior to saving as a text file. This yields good results and has solved many issues for me.
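If you prefer to script this cleanup rather than paste-as-values by hand, the same idea can be sketched in a few lines of Python. This is an illustrative helper, not part of Data Integrator, and the tag-stripping regex is an assumption about what the stray formatting typically looks like:

```python
import re

def clean_cell(value: str) -> str:
    """Strip HTML-ish tags and surrounding whitespace from a single cell value.

    Hypothetical helper: the exact cleanup needed depends on what the source
    tool embedded, but stray markup is the usual culprit in failed uploads.
    """
    no_tags = re.sub(r"<[^>]+>", "", value)  # drop <b>, <span style=...>, etc.
    return no_tags.strip()

def clean_row(line: str) -> str:
    """Clean every tab-separated cell in one line of an exported text file."""
    return "\t".join(clean_cell(cell) for cell in line.split("\t"))
```

Running each exported line through `clean_row` before saving the text file removes the kind of HTML fragments that typically break the upload.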
Another area where I have encountered issues in the past is when trying to make edits to the text file after exporting it from Excel. I strongly recommend that this NOT be done. If any edits are warranted, please make the changes in the spreadsheet and re-export the text file. In fact, I would recommend deleting the original text file and doing a fresh export each time any edits are made. This eliminates the possibility of bringing in bad data, or merging unexpected edits.
Source data can introduce issues as well. Flat file outputs from third-party software may not contain all of the data needed for either upload or update. Missing column data can cause missing rows or mismatched data post-upload. Also, checking and verifying the input data can be tedious. If there is any doubt about the source data, I would recommend using one of the other integration methods where you can engage workflow to trap erroneous or missing data.
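Spot-checking required columns before an upload does not have to be tedious; a short script can flag empty cells. The sketch below assumes a tab-delimited export, and the column names are hypothetical placeholders for whatever fields your records actually require:

```python
import csv

# Hypothetical required headers; substitute the columns your upload needs.
REQUIRED_COLUMNS = ["triNameTX", "triDescriptionTX"]

def rows_with_missing_data(path, required=REQUIRED_COLUMNS):
    """Return (row_number, column) pairs where a required cell is empty."""
    problems = []
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        for num, row in enumerate(reader, start=2):  # row 1 is the header
            for col in required:
                if not (row.get(col) or "").strip():
                    problems.append((num, col))
    return problems
```

A quick pass like this before exporting the text file catches the missing-column problems described above while they are still cheap to fix.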
Because Data Integrator uses a flat file transfer methodology, there is very little in the way of error trapping involved, and there are no real trigger points for validation workflows to check the data prior to saving the data. If your source data is in doubt in any way, I have to recommend using Business Connect or Data Connect to bring your data in.
I welcome any comments or suggestions for extending this post, and Happy Integrating!
CLM 6.0.4 iFix007 was made available for download on February 6. This iFix contains fixes for 391 defects, broken out by the following products:
For information about downloading this fix (and fixes available for other versions) please go here.
Would you like to be directly involved in efforts to improve our products? Would you like the ability to vote on improvements that you have a vested interest in seeing implemented? Well you can do just that from the IBM RFE Community. As part of the community you can collaborate with IBM development teams and other product users through your ability to search, view, comment on, submit, and track product requests.
Check out some of the hottest RFEs for our IoT products and cast your vote for those requests you would like to see implemented: Int
Watch this video to learn more: IBM
Rational Publishing Engine: Generating documents with JSON data from Atlassian JIRA - In this demo you’ll learn how to construct a URL to obtain JSON data from JIRA. You’ll then add the JSON schema in Rational Publishing Engine Document Studio and generate a document from a template designed to display JIRA issues. This video was recorded using IBM Rational Publishing Engine version 2.1.2.
For more information, see Integrating JIRA with Rational Publishing Engine: http
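For reference, the search URL used in demos like this follows the standard JIRA REST API pattern (`/rest/api/2/search` with a `jql` query parameter). A small helper can assemble it; the host and JQL below are placeholders, not values from the video:

```python
from urllib.parse import urlencode

def jira_search_url(base_url: str, jql: str, max_results: int = 50) -> str:
    """Build a JIRA REST API search URL that returns matching issues as JSON.

    The /rest/api/2/search endpoint and its jql/maxResults parameters are
    standard JIRA REST API; base_url and jql here are example inputs only.
    """
    query = urlencode({"jql": jql, "maxResults": max_results})
    return f"{base_url.rstrip('/')}/rest/api/2/search?{query}"

# Example: issues in a hypothetical DEMO project that are in progress.
url = jira_search_url("https://jira.example.com",
                      'project = DEMO AND status = "In Progress"')
```

The resulting URL is what you would paste into Rational Publishing Engine Document Studio as the JSON data source.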
This is a frequent question from many TRIRIGA administrators. We have had parameters to limit the file extension for a while now, but for file size that was a missing capability, until now...
In 3.5.2 we introduced a new property in trir
This Request for Enhancement, or RFE, was created by one of our customers and, as one of the most-voted requests, was implemented by our development team. As always, we encourage you to open RFEs at will and send the URL to your peers so they can vote for them!
This is the URL for the RFE that granted such functionality: http
I hope you make good use of this feature once you upgrade to 3.5.2 and higher versions of the platform. I am sure your storage specialist will owe you one!
Maximo Migration Manager and Automation Scripts - How to use Migration Manager to migrate Automation Scripts. By Jaegee Ramos
There is a potential issue with the shutdown process for TRIRIGA. This is specific to versions prior to TRIRIGA 3.5.1 running under Windows with Websphere Liberty Profile (WLP). We have found that the SHUTDOWN.BAT file may not end the session. If you encounter this issue you can resolve it by adding the path as described below.
In TRIRIGA 3.5.1 for both the RUN.BAT and SHUTDOWN.BAT we include a step to change directory to the location of the server.bat file as below.
(Note: In notepad there are no spaces or carriage returns between the rows.)
@echo off
set JAVA
It looks like this in other viewers
echo JAVA_HOME : %JAVA_HOME%
echo CLASSPATH : %CLASSPATH%
cd /d /hom
server.bat start tririgaServer
cd /d /hom
server.bat stop tririgaServer
However, in previous versions that path is not included:
TRIRIGA 3.4.2 (shutdown.bat)
server.bat stop tririgaServer
TRIRIGA 3.5 (shutdown.bat)
server.bat stop tririgaServer
If you encounter an issue where the shutdown.bat file is not working you can resolve the issue by adding the path to the server.bat file similar to the example below.
cd /d /your_path_to tririga/wlp/bin
server.bat stop tririgaServer
I am new to the IBM TRIRIGA Support organization, working as a Level 2 Support Engineer. It has been my observation when reviewing PMRs that a lot of time is spent going back and forth between the customer and the support engineer. It seems to me that this is due, in part, to not enough information being provided initially. Many times I have seen in a PMR that the customer is getting some error. Sometimes they just say they are getting an error, or they may report the specific error with no information about how it happened, what version they were using, or what they were doing. Many times there are vague steps, with our client thinking that TRIRIGA engineers should know what they are trying to do.
I wanted to share what happens in support so that clients might understand why information about a problem is so important. When support receives a PMR, we try to reproduce the issue based on the information given to us. If we are not given enough information, we are forced to collect it by making requests that can take days or even weeks to accomplish. Time zones play a role, where each email can take a full day to get to the right people and get a response. In some cases, if we are not provided with enough information, we may fail to replicate the problem. That does not mean it is not an issue; it just means we may have replicated it incorrectly because we are missing information, or there are configurations or customized workflows that we do not have. There could be a difference between the way the customer does something and the way the support engineer does it, because with software, there is often more than one way of doing something. If we need to get additional information, it just takes that much more time.
We recognize that your time is valuable and it can be frustrating going back and forth to get the information necessary to reproduce an issue. What would help us in IBM TRIRIGA Support is if, when entering a PMR, clients provide detailed step-by-step instructions, as if you were asking your non-tech-savvy grandmother to reproduce the issue. It may sound corny, but it really is all in the details. As I mentioned, there could be more than one way to do something, and left to our own devices, we might not do it the same way as you (the customer), so the more details the better.
If you have ever cooked and followed a recipe you are following steps. You might think of that approach for entering your steps to reproduce an issue.
Again, the more detailed you are with your initial entry on the PMR, the less time is spent going back and forth trying to get the steps, and the more time can be spent reproducing and resolving your issue.
Remember, we do have a document we often call a “Must Gather” or “Information To Collect” document for TRIRIGA PMRs. You should always submit this when you open a PMR. You can generally fill it out and save it so you always have it handy to attach to PMRs, just remember to update it when something changes. See it here:
In DOORS, there is no timeout feature. It is also not possible to use the Flexlm timeout (Abo
But it can be implemented in DXL with the "timer" DXL function:
DBE timer( DB dialogBox, int interval, string callBackFunction, string Description)
The tricky part is how to determine when was the last action of the user. For that, we can use the DOORS client logging. If the DOORS client log file has not been updated for 1 hour, we can conclude that the session is inactive and we can disconnect the user's session.
Find below a sample script that implements a timeout after 60 minutes of inactivity:
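For readers who want to prototype the inactivity check outside DXL, the core of the idea (compare the log file's last-modified time against a one-hour limit) can be sketched in Python. The log path and the exact threshold are environment-specific assumptions, not DOORS defaults:

```python
import os
import time

INACTIVITY_LIMIT_SECS = 60 * 60  # one hour, matching the timeout above

def session_inactive(log_path, now=None):
    """Return True when the DOORS client log has not changed for an hour.

    The log file's mtime is used as a proxy for the user's last action,
    which is the same trick the DXL timer callback relies on.
    """
    now = time.time() if now is None else now
    last_modified = os.path.getmtime(log_path)
    return (now - last_modified) >= INACTIVITY_LIMIT_SECS
```

In the DXL version, a check like this would run inside the `timer` callback and disconnect the session when it returns true.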
Creating a View in a Maximo Database - Create and use a view in Maximo by Ed Maxwell
Improve requirements management with IBM Rational DOORS Next Generation - Using office tools for requirements management is like using scissors to cut your lawn. Use the right tool with Rational DOORS Next Generation. Check out the Doors Next Generation free trial today and see how it has requirements management rede
IBM Rational DOORS Next Generation: Terminology and Basic Concepts - This introductory presentation focuses on basic concepts and terminology that one should know when working with IBM Rational DOORS Next Generation.
IBM Rational DOORS Next Generation Tour: Import, Edit, Trace, and Analyze Requirements - In this video, you will learn how to use Rational DOORS Next Generation to import and review requirements, add traceability links between those requirements, analyze the data, and then export it.
Check out the Doors Next Generation free trial today and see how it has requirements management rede
Sometimes a user that supposedly has licenses for a form or portal cannot see it. So how do I determine whether a license is missing, or whether the licenses I have are enough? This question often comes up when users start reporting they cannot access portions of the application and call in complaining about it.
The best way to see it is to log in as an Admin user, for instance "system", and follow these instructions:
In the example below you see that a user accessing via Cloud needs the license "IBM Facilities and Real Estate Management on Cloud Enterprise" to access the Contact Center (form triContactCenter).
If you have questions on specific licenses, contact your sales representative from IBM or a Business Partner. They are able to see what you are entitled to.
Getting started with components in the RM and QM applications - In IBM Rational DOORS Next Generation (RM application) and IBM Rational Quality Manager (QM application), in projects that are enabled for configuration management, you can now create components to manage versions of data at a finer granularity than whole project areas. You might see components referred to as "fine-grained components." Instead of working with all the artifacts from the project in one stream (for example, a physical piece called a Handheld Meter Reader), you can create a component to represent a smaller collection of artifacts (for example, a smaller physical piece such as a Sensor in the handheld meter reader). Components are supported only in projects that have configuration management capabilities enabled.
Note: This feature is a technology preview and is not supported for use in a production environment. You should read and understand the key limitations that are listed in the
For more information on CLM topics, check out the IBM Rational Collaborative Application Lifecycle Management v4 playlist on Youtube below:
Many times, a client may hear a support engineer say that they should upgrade to the latest version. Why do I keep hearing that, especially if upgrading will take time, money, and resources? TRIRIGA, like all other software, evolves. We continuously fix defects and add new functionality. For instance, if you are on version 3.3.2, some of the features you cannot take advantage of include improved logging capabilities, which make it easier for us to help you troubleshoot an issue. Also included are improved security and, more recently, workflow versioning. Since complex software can sometimes have defects, you never know when a defect might impact your business. But why wait for it to impact you? Upgrade so that it won’t happen. New functionality is also added to the software, and you may want to take advantage of it.
Another reason to upgrade is that software uses various technologies, which change faster than New England weather! Technology is constantly changing, and TRIRIGA must keep up in order to keep running. That technology can be in operating system updates, browser versions, application servers, and Java, to name a few. Support may often recommend staying current with product releases, but it is always good to review the release notes for the current release. The release notes for each version show what has been fixed and what functionality has been added. You can find the release notes for TRIRIGA here:
Before upgrading, it is important to understand the structure of TRIRIGA, because there are 2 very different upgrade procedures. The 2 structures are Platform and Application. The TRIRIGA Platform is the Java code that IBM writes and is installed on a server. The TRIRIGA Applications are developed using the TRIRIGA Platform. Applications are not “written” but are developed using the Platform as a development tool. Applications are stored in the database as metadata and are not found in the TRIRIGA directory structure. Both the Platform and the Application have their own version number. How do I tell which number is which? Platform versions are the lower of the 2 numbers associated with a TRIRIGA install, so the Platform version would be something like 3.5.1 or 3.4.2. Application versions are the larger numbers, so the Application version would be something like 10.5.1 or 10.4.2. TRIRIGA 3.5.1/10.5.1 is Platform 3.5.1 and Application 10.5.1. But this can change. If IBM releases a 4.1.0/11.1.0 release, a client can upgrade to Platform 4.1.0 and leave the Application at 10.5.1. But you can never upgrade the Application beyond the platform it was built on. So using the example of 4.1.0/11.1.0, you could not upgrade the Application to 11.1.0 and still be on Platform 3.5.1, because the functionality required to support the new Application exists only in the new Platform.
The TRIRIGA Platform will always be there, as it is required for the database to run. The Platform never deprecates functionality, so if an application is developed on one release of the Platform, it will continue to function in future releases of the Platform. Ever since the 3.2 release, you can easily upgrade the Platform in under 2 hours. You simply stop the servers, install the new platform code, start one server to perform the database updates, stop that server when it is complete, apply the latest fixpack, start one server, and then start the rest. That’s it! The Platform will be upgraded. Security vulnerabilities, new technology, performance enhancements, new properties, and more are ready to use. You should plan to perform a Platform upgrade at least once a year.
Applications can be substantially more complex. If you have never done one, I would strongly recommend that you consider engaging IBM Global Business Services (GBS) or one of the IBM TRIRIGA certified business partners to help you through an application upgrade. Since the applications are actually data in the database, an upgrade involves updating data, which is always a tricky task. On top of that, clients have the ability to configure and modify functionality associated with an application, and a wrong step could overwrite data and damage functionality. To add to it, application upgrades must be done one version at a time. If you are on 10.3.1 and are going to 10.5.1, then you will need to upgrade to 10.3.2, then 10.4.0, 10.4.1, 10.4.2, 10.5.0, and finally 10.5.1. We strongly recommend that you plan for an Application upgrade at least once every two years to minimize the number of versions between platform and application. In addition, audited functionality may require Application upgrades. Customers who use the Lease functionality in TRIRIGA will know that the accounting rules around leases come from FASB. Clients who need to be compliant with FASB rules will need to be on the latest Application release.
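Because application upgrades go one version at a time, the sequence of steps is mechanical. As a sketch (the release list below is drawn only from the versions named in this post, and would need extending as IBM ships new releases), the path can be computed like this:

```python
# Assumed ordered list of application releases, taken from this post's example.
APP_RELEASES = ["10.3.1", "10.3.2", "10.4.0", "10.4.1",
                "10.4.2", "10.5.0", "10.5.1"]

def upgrade_path(current, target, releases=APP_RELEASES):
    """Return the one-version-at-a-time upgrade steps from current to target."""
    start, end = releases.index(current), releases.index(target)
    if start >= end:
        raise ValueError("target must be a later release than current")
    return releases[start + 1 : end + 1]
```

For the example in the text, `upgrade_path("10.3.1", "10.5.1")` yields the six intermediate-and-final steps named above, which is a useful reality check when scoping the project.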
Fixpacks are important too! If a defect is found in a release, we will identify the defect as an APAR and develop a fix that can be applied to the installed software. It is recommended that customers set aside time and resources once a quarter to apply any fixpacks.
When a support engineer recommends that you upgrade to the latest version, the first thing to do is plan! Even though a platform upgrade may not take a lot of time, you should still put in the proper planning. Here are some things to ask yourself when planning your upgrade:
IBM TRIRIGA Microsoft SQL Server database Best Practice recommendations to be reviewed by MS SQL Server DBA
Provide suggestions and recommendations for the IBM TRIRIGA MS SQL Server database environment. Also, provide monitoring and trouble resolution steps to quickly discover and prevent performance degradation and instability.
This document is in response to performance related issues occurring in the installation of the TRIRIGA Application. Key areas to address:
MS SQL Server performance problems can be difficult to pinpoint and require inspection of many aspects of the environment. There is no magic button or specific step-by-step actions to locate every performance issue. What exists are guidelines for diagnosing and troubleshooting common performance problems. The purpose of this document is to provide a general methodology for diagnosing and troubleshooting MS SQL Server database instability and performance problems in common scenarios.
Configuration, tuning, and sizing issues for MS SQL Server may lead to various instability and performance issues on the database end. The instructions listed below are to be reviewed, confirmed, and applied by the customer’s DBA whenever necessary.
Typical TRIRIGA customer findings
There are basic best practices that should be implemented on production MS SQL Servers.
SQL Server Best Practices
SQL Service: The account that runs the SQL Server Service should be included in the two Local Security Policy Groups below:
“Lock Pages in Memory” – Prevents SQL Server allocated memory from being swapped to disk.
“Perform volume maintenance tasks” - Enables Instant File Initialization: the system will not initialize storage it allocates. Instead, the storage remains uninitialized until SQL Server writes data to it, which prevents the server from zero-filling storage.
1. Click Start.
2. Go to Administrative tools | Local Security Policy.
3. Click Local Policies, then click User Rights Assignment.
4. Scroll down to “Lock pages in memory”, and double-click it.
5. Click Add User or Group, and enter the SQL Server service account name.
6. Scroll down to “Perform volume maintenance tasks”, and double-click it.
7. Click Add User or Group, and enter the SQL Server service account name.
Server Memory: Change “Max Server Memory” to coincide with the chart below. This setting controls how much memory can be used by the SQL Server Buffer Pool. If you don’t set an upper limit for this value, other parts of SQL Server and the operating system can be starved for memory, which can cause instability and performance problems. These settings are for x64 on a dedicated database server running only the DB engine.
“Max Degree of Parallelism”: Set to 4. Currently set at 0, unlimited parallelism. Takes effect immediately. This will limit parallelism to 4 threads and decrease CXPACKET waits.
EXEC dbo.sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC dbo.sp_configure 'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;
TempDB: Add data files to match the number of cores. Running with a single tempdb data file can create unacceptable latch contention and I/O performance bottlenecks. To mitigate these problems, allocate one tempdb data file per processor core. (This is different from the practice for user-defined database files, where the recommendation is 0.5 to 1 data file per core.) Change the growth rate to a fixed increment. Turn on AutoStats.
--Simple script to generate Temp file SQL, change for your path. Creates 2GB files with 512MB growth. (Generate results as text, copy, paste, run. Takes effect after SQL restart.)
set nocount on
declare @filename int
select @filename = 1
while @filename <= 32
begin
    select 'ALTER DATABASE [tempdb] ADD FILE ( NAME = N''tempdev' + convert(varchar, @filename)
         + ''', FILENAME = N''C:\your_path\tempdev' + convert(varchar, @filename)
         + '.ndf'', SIZE = 2048MB, FILEGROWTH = 512MB )'
    select @filename = @filename + 1
end
In Windows 2008, networking should be set for “Maximize settings for networked applications”.
Database Best Practices
Autogrow: Change Autogrow on data and log files, increase growth amount to 500MB+ per file depending on rate of growth.
Auto Update Stats: Change to Async. Prevents SQL from waiting on stats to update.
The optimizer will initiate a statistics update operation when about 20% of the rows in an index have changed. Normally, whatever query was being optimized at the time will then have to wait until the statistics are updated before it can finish the optimization phase and begin executing. With the 'async' option, a separate thread will update the stats, and the query that was being optimized will continue and use the older stats when its plan is generated.
Update Statistics: Consider running on a nightly or weekly basis with fullscan.
IBM TRIRIGA specific recommendations
Increase Tridata data files to 0.5 to 1 data file per core.
All files to be exact same size and growth settings.
Suggestion to equalize the current server:
Create a new filegroup and add 4-8 files large enough to hold the largest table in Tridata.
Rebuild the table clustered index on the new filegroup with drop existing. This would allow SQL to stripe the table data and indexes across the newly created files without having to unload and load the database. Repeat as needed for similar groupings. Maybe another group for reference and configuration tables and so on until the Primary filegroup can be reduced.
Snapshot Isolation to reduce blocking:
Consider testing with SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT:
ALTER DATABASE << My Database >> SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE << My Database >> SET READ_COMMITTED_SNAPSHOT ON
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT EmployeeID, LastName, FirstName, Title
Keep a close watch on unused indexes and missing indexes.
SELECT TOP 25
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure
  , 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT
      + ' ( ' + IsNull(mid.equality_columns, '')
      + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ','
             ELSE '' END
      + IsNull(mid.inequality_columns, '') + ' ) '
      + CASE WHEN mid.included_columns IS NOT NULL THEN 'INCLUDE (' + mid.included_columns + ')'
             ELSE '' END AS create_index_statement
FROM sys.dm_db_missing_index_groups mig WITH (nolock)
INNER JOIN sys.dm_db_missing_index_group_stats migs WITH (nolock) ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid WITH (nolock) ON mig.index_handle = mid.index_handle
INNER JOIN sys.objects WITH (nolock) ON mid.OBJECT_ID = sys.objects.OBJECT_ID
WHERE (migs.group_handle IN
    (SELECT TOP (500) group_handle
     FROM sys.dm_db_missing_index_group_stats WITH (nolock)
     ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC))
ORDER BY 2 DESC , 3 DESC
SELECT o.name, indexname=i.name, i.index_id
 , reads=user_seeks + user_scans + user_lookups
 , writes = user_updates
 , rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
 , CASE WHEN s.user_updates < 1 THEN 100
        ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
   END AS reads_per_write
 , 'DROP INDEX ' + QUOTENAME(i.name)
   + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(o.name) AS drop_statement
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o on s.object_id = o.object_id
INNER JOIN sys.schemas c on o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
AND s.database_id = DB_ID()
AND i.type_desc = 'nonclustered'
AND i.is_primary_key = 0
AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads
SQL Server Monitoring
Management Data Warehouse
The Management Data Warehouse should be turned on for monitoring SQL Server. This tool will provide baseline information and allow for timeline snapshots of performance data, with near real-time performance reporting and query history.
SQL Black Box Trace
Consider turning on the MS SQL Server black box trace to run at startup. This trace data can be viewed when blocking occurs without having to start a new trace, using two rolling 25MB files. The data can be reviewed in Profiler and also sent to application providers for further analysis.
SELECT * FROM fn_trace_getinfo (2);
SELECT * FROM fn_trace_gettable (
'C:\Program Files\Microsoft SQL Serv
Autostart Blackbox with two rolling logs:
CREATE PROCEDURE StartBlackBoxTrace
AS
BEGIN
    DECLARE @TraceId int
    DECLARE @maxfilesize bigint
    SET @maxfilesize = 25
    EXEC sp_trace_create @TraceId OUTPUT,
        @options = 8,
        @tracefile = NULL,
        @maxfilesize = @maxfilesize
    EXEC sp_trace_setstatus @TraceId, 1
END
The topic of chasing SQL performance issues has been documented by various SQL “Gurus”, but all source the same document. It was written for SQL 2005, but nothing about the methodology has changed. This Microsoft document covers the areas of concern, how to determine the cause, and possible resolutions.
Highlights from the documentation:
Discover CPU bottlenecks; investigate further if “runnable_tasks_count” is consistently above zero:
select scheduler_id, current_tasks_count, runnable_tasks_count
from sys.dm_os_schedulers
where scheduler_id < 255
What is running the most and recompiling; start at the top and work down:
select top 25
    sql_text.text, sql_handle, plan_generation_num, execution_count, dbid, objectid
from sys.dm_exec_query_stats a
cross apply sys.dm_exec_sql_text(sql_handle) as sql_text
where plan_generation_num > 1
order by plan_generation_num desc
Create and use the sp_block proc:
create proc dbo.sp_block (@spid bigint=NULL)
as
-- This stored procedure is provided "AS IS" with no warranties, and
-- confers no rights.
-- Use of included script samples are subject to the terms specified at
-- T. Davidson
-- This proc reports blocks
-- 1. optional parameter @spid
select
    t1.resource_type,
    'database' = db_name(resource_database_id),
    'blk object' = t1.resource_associated_entity_id,
    t1.request_mode,
    t1.request_session_id,
    t2.blocking_session_id
from
    sys.dm_tran_locks as t1,
    sys.dm_os_waiting_tasks as t2
where
    t1.lock_owner_address = t2.resource_address
    and t1.request_session_id = isnull(@spid, t1.request_session_id)
Check the “Analyzing operational index statistics” section in the appendix for a script to evaluate index usage.
Monitor for Excessive compilation and recompilation
SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Recompilations/sec
SQL Server: Buffer Manager object
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number Lazy writes/sec
PhysicalDisk Object: Avg. Disk Queue Length represents the average number of physical read and write requests that were queued on the selected physical disk during the sampling period. If your I/O system is overloaded, more read/write operations will be waiting. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, then you might have an I/O bottleneck.
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. Use the following guidelines:
Less than 10 ms - very good
Between 10 - 20 ms - okay
Between 20 - 50 ms - slow, needs attention
Greater than 50 ms – Serious I/O bottleneck
Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk. Please refer to the guideline in the previous bullet.
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that if this value is greater than 50 percent, it represents an I/O bottleneck.
Avg. Disk Reads/Sec is the rate of read operations on the disk. You need to make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
Avg. Disk Writes/Sec is the rate of write operations on the disk. Make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
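The latency guideline above is easy to encode; a monitoring script could bucket the Avg. Disk Sec/Read counter (which Perfmon reports in seconds) like this sketch does:

```python
def classify_read_latency(avg_disk_sec_per_read):
    """Map Avg. Disk Sec/Read (seconds) onto the guideline buckets above."""
    ms = avg_disk_sec_per_read * 1000.0  # convert seconds to milliseconds
    if ms < 10:
        return "very good"
    elif ms <= 20:
        return "okay"
    elif ms <= 50:
        return "slow, needs attention"
    return "serious I/O bottleneck"
```

The same thresholds apply to Avg. Disk Sec/Write, so one helper covers both counters when scanning collected Perfmon data.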
Microsoft and 3rd-party supporting information
Troubleshooting Performance Problems in SQL Server - http
SQL 2008 Management Data Warehouse (performance monitoring) - http
SQL Server 2005 Waits and Queues & SQL Server Best Practices Article - http
IBM TRIRIGA - When viewing associations on associations tab, the lines connecting objects are not solid.
The cause of this is that the rendering of Native SVG graphics is handled on the client side in the client's web browser, and different browsers render the Native Scalable Vector Graphics files in different ways. This can result in the same SVG file looking different depending on the browser chosen.
If you are reviewing TRIRIGA associations and you are having trouble viewing them, try a different web browser that renders Native SVG in a way that better meets your requirements. If this is not an option, you can also contact the browser vendor's support for additional assistance.
Have you ever had an IBM TRIRIGA L2 support person say something like "I cannot recreate the problem in an As Delivered environment."? Have you ever wondered what "As Delivered" really means? If so, this is the blog post for you!
First, some fundamentals. IBM TRIRIGA is actually made up of 2 components. There is the application version (e.g. 10.3, 10.3.1, 10.4, 10.4.2.1, etc.) and there is the platform version (e.g. 3.3, 3.3.1, 3.4, 3.4.1, etc.). It is important to understand the difference between the two versions. The platform version is the base upon which the application version is built. This is why you could not have a 10.4 application version running on a 3.3 platform; for a 10.4 application, the platform must be at least 3.4. The only exception to this is with regard to the 4th numeric position of the version number. It is possible that there is a fix pack for the application only, and no changes were required of the platform to implement that change. Another way to keep the application and platform versions straight in your mind is to remember that you can customize the application, but you cannot customize the platform.
Second, there are several tests the L2 engineer may perform while trying to replicate an issue. The first thing is to test to see if the issue exists in a later application release. Often this means that we are also testing with a later platform release. As mentioned previously, the only exception is if there is a fix pack for the application only, no platform fix was required to implement the application fix. The L2 engineer should also check to see if they can recreate the problem using the same application version as you. This means that we may be testing with a platform release earlier than the one you have in your environment. This is where we start getting into the realm of the As Delivered terminology.
Suppose you have a problem with your application and you know that your base version is 10.3. You may have customized the application, but the basis of that customization was the 10.3 application version. In this scenario, the L2 engineer would test with an environment on the 10.3 application AND the 3.3 platform. This is what we mean when we say "As Delivered": running the installer for the 10.3/3.3 IBM TRIRIGA Application Platform to create a new environment results in an "As Delivered" 10.3/3.3 environment. In your environment, you may have a 10.3 application but a 3.4 (or later) platform. If we could not replicate the problem in an As Delivered 10.3/3.3 environment, we would then check whether we can replicate it in a 10.4/3.4 As Delivered environment. If we cannot replicate the problem in that environment, that is a strong indicator that the problem may have been resolved in the 10.4 application code. If we can replicate the problem in that 10.4/3.4 As Delivered environment, then it is possible that the platform code introduced the problem.
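The replication logic above can be summarized as a small decision function. This is a hedged sketch of the reasoning in this post, not an official support procedure; the version pairs and wording are illustrative only.

```python
# A sketch of the "As Delivered" replication reasoning described above.
# Not an official IBM support procedure; the conclusions mirror this post's
# 10.3/3.3 vs 10.4/3.4 example.

def diagnose(repro_base: bool, repro_newer: bool) -> str:
    """Summarize what a pair of As Delivered replication tests suggests.

    repro_base:  problem reproduced in the base As Delivered pair (e.g. 10.3/3.3)
    repro_newer: problem reproduced in the newer As Delivered pair (e.g. 10.4/3.4)
    """
    if repro_base:
        return "Reproducible As Delivered: a defect in the base release itself."
    if repro_newer:
        return ("Only reproducible in the newer pair: the newer platform code "
                "may have introduced the problem.")
    return ("Not reproducible in either pair: the problem may have been "
            "resolved in the newer application code, or may stem from "
            "customizations in your environment.")

print(diagnose(repro_base=False, repro_newer=True))
```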
Now that you know what we mean by "As Delivered", you might want to consider creating some small environments that represent an As Delivered state based on your application version as well as your platform version. In a future blog, I will talk about the importance of setting up these types of environments.
There you are, planning an application and platform upgrade for your IBM TRIRIGA installation, and you think everything is working just fine. You roll out to production, and for the first few days afterwards all is quiet. Suddenly, out of the blue, you notice that the floor plan graphics no longer render. You do not even see the "No Graphic Available" message where the graphic used to appear; instead, "Error 2" is displayed in the lower right-hand corner, and the rest of the space where the graphic would normally appear is blank. This blog post will address one of the potential causes of this problem, as well as a means of finding the problem before you upgrade your production environment.
In order to determine the root cause, we need to know which errors are referenced by that "Error 2" text in the graphics section. Clicking the text opens a pop-up box that contains the actual error messages (two of them, hence the "Error 2") related to the issue. That alone will not help the support team diagnose the problem, but the errors should include an MID number, and that MID number should also appear in the server log with a time stamp corresponding to when the graphic should have been rendered. In an issue I was working for a client, we determined from that information that there was a problem with the graphic layer configuration records; in particular, they were missing their corresponding IBS_SPEC entries. This required removing the records with SQL run directly against the database back end, because the application relies heavily on the IBS_SPEC table entries to perform the actions necessary to delete records and therefore could not delete them itself. Once the records were removed and the layer configuration records rebuilt, the graphics once again started to display.
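Correlating the on-screen MID number with the server log is mostly a matter of scanning for that identifier. The sketch below shows one minimal way to do it; the log line format, the "MID-4711" identifier, and the sample entries are invented for illustration and do not reflect the actual IBM TRIRIGA server.log format.

```python
# A minimal sketch for correlating an on-screen MID number with server log
# entries. The log format and sample lines below are assumptions for
# illustration; real IBM TRIRIGA server.log output may differ.

def find_mid_entries(log_lines, mid):
    """Return (line_number, line) pairs whose text mentions the given MID."""
    return [(n, line) for n, line in enumerate(log_lines, start=1) if mid in line]

# Hypothetical log excerpt (not real TRIRIGA output):
sample_log = [
    "2016-02-10 09:14:01 INFO  Rendering floor plan for record 123456",
    "2016-02-10 09:14:02 ERROR MID-4711 Graphic layer configuration missing IBS_SPEC entry",
    "2016-02-10 09:14:02 ERROR MID-4711 Unable to render graphic section",
]

for line_no, line in find_mid_entries(sample_log, "MID-4711"):
    print(f"line {line_no}: {line}")
```

In practice you would read the real server.log from disk and pair the matching lines' time stamps with the moment the graphic failed to render.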
Great, that fixed the immediate problem, but how do you prevent it from happening in the future? When planning your upgrade, you need to ensure that all graphical functionality is tested before deciding to roll the upgrade out to your production environment. Upgrade plans should always include CAD Integrator testing. In older releases of the IBM CAD Integrator tool and the IBM TRIRIGA Platform, there was a great deal of backward compatibility. As a result, most clients did not include steps to verify that graphical elements were unaffected and instead focused on the business process logic, confirming only that the non-graphical data was correct. With the later releases of IBM TRIRIGA (3.3 or later, to be precise), the CAD Integrator tool became much more tightly coupled with the platform release. Best practice for upgrade planning should now include upgrading the CAD Integrator tool to the release created specifically for the platform release to which you are upgrading. As a result of adding CAD Integrator to your upgrade plans, you should naturally expect to add more time for validating all CAD-related graphics that appear in your IBM TRIRIGA instance.
If you found this blog post useful, be sure to 'like' the post. If you have any comments about this blog, please provide that feedback in the comments section.