ScottRuch
When starting as a new user or working with a new copy of the application, there are some dependencies within the application that need to be satisfied if you want to do anything more than create a user record. In short, a newly created application has a good bit of data in it, but you will need to add more to begin using all aspects of it.
Do you want to add a Location? You'll need a Geography to assign to it.
Do you want to add an Organization? You'll need a Primary Location.
Do you want to add People? You'll want a Primary Location and Organization.
The best place to start is in the Portfolio menu. Geographies, Locations, Organizations, and People form the basic building blocks for adding record data.
1 - Geographies
For Geographies, it's helpful to add at least two complete branches, one for North America and one for Europe. A basic scenario would begin with World Regions for North America and Europe and continue down to the City level for each branch. This allows for flexibility in scenarios involving multiple time zones, moves, and other time- and place-based events.
2 - Locations
For Locations, following the North America/Europe theme, it is sufficient to create two complete Location branches, much like Geography. As above, create complete branches beginning with Property and working all the way down to at least one Space record for each branch. This allows for flexibility in Space area calculations and Moves, and can also be used for Requests.
3 - Organization
There is not typically a need for as complex a structure for Organization. It is certainly possible, and some scenarios will likely require it, but for basic testing purposes a single My Company record and a single External Company record will be sufficient.
4 - People
As with the Organization structure, the user's needs will dictate what should be created. Two users for the My Company record and two users for the External Company record will allow for basic testing. Remember that, unlike the other records mentioned here, People have dependencies on Licenses and Security.
The linked IBM TRIRIGA Quick Start Guide will be of use.
Finally, I hope to open a dialogue on this topic, so if you see this post and have a question, please do not hesitate to ask, and we will provide answers.
Many times, a client may hear a support engineer say that they should upgrade to the latest version. Why do I keep hearing that, especially if upgrading will take time, money, and resources? TRIRIGA, like all other software, evolves. We continuously fix defects and add new functionality. For instance, if you are on version 3.3.2, some of the features you cannot take advantage of include improved logging capabilities, which make it easier for us to help you troubleshoot an issue, improved security, and, more recently, workflow versioning. Complex software can sometimes have defects, and you never know when a defect might impact your business. Why wait for one to impact you? Upgrading means the fixes are already in place before it does. New functionality is also added to the software, and you may want to take advantage of it.
Another reason to upgrade is that software is built on various technologies, which change faster than New England weather! Technology is constantly changing, and TRIRIGA must keep up in order to keep running. That technology can be operating system updates, browser versions, application servers, and Java, to name a few. Support will often recommend staying current with product releases, and it is always good to review the release notes for the current release. The release notes for each version show what has been fixed and what functionality has been added. You can find the release notes for TRIRIGA here:
Before upgrading, it is important to understand the structure of TRIRIGA, because there are two very different upgrade procedures. The two pieces are the Platform and the Applications. The TRIRIGA Platform is the Java code that IBM writes and that is installed on a server. The TRIRIGA Applications are developed using the TRIRIGA Platform; they are not "written" but are built using the Platform as a development tool. Applications are stored in the database as metadata and are not found in the TRIRIGA directory structure. Both the Platform and the Applications have their own version number. How do I tell which number is which? Platform versions are the lower of the two numbers associated with a TRIRIGA install, so the Platform version would be something like 3.5.1 or 3.4.2. Application versions are the larger numbers, something like 10.5.1 or 10.4.2. TRIRIGA 3.5.1/10.5.1 is Platform 3.5.1 and Application 10.5.1. But this can change: if IBM releases a 4.1.0/11.1.0 release, a client can upgrade to Platform 4.1.0 and leave the Application at 10.5.1. However, you can never upgrade the Application beyond the Platform it was built on. Using the example of 4.1.0/11.1.0, you could not upgrade the Application to 11.1.0 and stay on Platform 3.5.1, because the functionality required to support the new Application exists only in the new Platform.
The TRIRIGA Platform will always be there, as it is required for the application to run. The Platform never deprecates functionality, so if an application is developed on one release of the Platform, it will continue to function in future releases of the Platform. Ever since the 3.2 release, you can easily upgrade the Platform in under two hours. You simply stop the servers, install the new platform code, start one server to perform the database updates, and when it is complete, stop that server, apply the latest fixpack, start one server, and then start the rest. That's it! The Platform will be upgraded. Security fixes, new technology, performance enhancements, new properties, and more are ready to use. You should plan to perform a Platform upgrade at least once a year.
Application upgrades can be substantially more complex. If you have never done one, I would strongly recommend that you consider engaging IBM Global Business Services (GBS) or one of the IBM TRIRIGA certified business partners to help you through an application upgrade. Since the applications are actually data in the database, an upgrade involves updating data, which is always a tricky task. On top of that, clients have the ability to configure and modify functionality associated with an application, and a wrong step could overwrite data and damage functionality. To add to it, application upgrades must be done one version at a time. If you are on 10.3.1 and are going to 10.5.1, you will need to upgrade to 10.3.2, then 10.4.0, 10.4.1, 10.4.2, 10.5.0, and finally 10.5.1. We strongly recommend that you plan for an Application upgrade at least once every two years to minimize the number of versions between Platform and Application. In addition, audited functionality may require Application upgrades. Customers who use the Lease functionality in TRIRIGA will know the lease accounting rules issued by FASB. Clients who need to be compliant with the FASB rules will need to be on the latest Application release.
Fixpacks are important too! If a defect is found in a release, we identify it as an APAR and develop a fix that can be applied to the installed software. It is recommended that customers set aside time and resources once a quarter to apply any fixpacks.
When a support engineer recommends upgrading to the latest version, the first thing you should do is plan! Even though a Platform upgrade may not take a lot of time, you should still put in the proper planning. Here are some things to ask yourself when planning your upgrade:
SUPPORT NOTIFICATION for non-browser TRIRIGA clients such as CAD Integrator, BIM, and Microsoft Outlook add-in
JeffLong
IBM TRIRIGA does not support SAML (Security Assertion Markup Language) or credential-less login mechanisms such as SmartCard or CAC (Common Access Card) as a method of authentication for its non-browser clients such as CAD Integrator, BIM, and the Microsoft Outlook add-in.
SSO solutions need to provide a mechanism for basic authentication for non-browser clients, as described in "Requirements for single sign-on requests in the TRIRIGA Application Platform" in the documentation. SAML and SmartCard/CAC do not support basic authentication for non-browser clients.
The best practice when using SAML or SmartCard/CAC is to authenticate directly to TRIRIGA on a separate process server or integration server, as opposed to the SSO-enabled application server. (NOTE: These users will need to know their TRIRIGA user name and password to sign in using this solution.)
An alternative best practice would be to set up a separate non-SAML SSO solution for non-browser client users which can support basic or NTLM authentication. (NOTE: SmartCard/CAC users would need to know their SmartCard/CAC user name and password to sign in using this solution.)
First off, some background. I have been using Data Integrator and troubleshooting related issues for several years. It's a good tool, and quite handy for importing and updating data in the IBM TRIRIGA application. That being said, Support has received quite a bit of traffic with regard to using this tool, so I thought I would provide my take and open a dialogue for commentary.
The most pervasive issue I have seen overall has been problems with the source sheet. Typically, I start my Data Integrator process by building a spreadsheet with the column headers using the TRIRIGA application. Starting from the Data Integrator interface, reached from Home > Tools > Data Integrator, you can use the Create Header action to generate a base sheet. It's as easy as selecting the fields you want to use and exporting a sheet to begin working with. You also have the option to simply open Excel and type in the fields you want to use.
A known issue is encountered when copying and pasting from the application into Excel. In fact, this is one of the key points I want to make. Copying data from the TRIRIGA application, or from any other tool, into Excel will almost invariably introduce formatting into the spreadsheet. This is the most common cause of issues with the upload process; HTML formatting information will cause problems with the upload.
The method I have found that helps get around this restriction is to use Excel's Paste Values option when pasting data into the spreadsheet. This removes the formatting tags and allows for a clean upload. At times, I have copied and pasted an entire spreadsheet into a new sheet, again using the Values option, to clean up an upload sheet prior to saving it as a text file. This yields good results and has solved many issues for me.
Another area where I have encountered issues in the past is when trying to make edits to the text file after exporting it from Excel. I strongly recommend that this NOT be done. If any edits are warranted, please make the changes in the spreadsheet and re-export the text file. In fact, I would recommend deleting the original text file and doing a fresh export each time any edits are made. This eliminates the possibility of bringing in bad data, or merging unexpected edits.
Source data can introduce issues as well. Flat file outputs from third-party software may not contain all of the data needed for either an upload or an update. Missing column data can cause missing rows or mismatched data post-upload. Also, checking and verifying the input data can be tedious. If there is any doubt about the source data, I would recommend using one of the other integration methods where you can engage workflow to trap erroneous or missing data.
Because Data Integrator uses a flat file transfer methodology, there is very little in the way of error trapping involved, and there are no real trigger points for validation workflows to check the data prior to saving the data. If your source data is in doubt in any way, I have to recommend using Business Connect or Data Connect to bring your data in.
I welcome any comments or suggestions for extending this post, and Happy Integrating!
JohnONeill
We often get questions regarding our plans for End of Support (EOS) for TRIRIGA versions. I thought I’d share some general information and a couple of useful links that will help customers and Business Partners with planning their future activities.
We typically announce an EOS date 5 years after the release. Please note that this is not a guarantee, commitment, or promise; it is a guideline that we try to follow. There are sometimes exceptions that require retirement of a version before the originally expected date. This can occur due to technology changes outside of our control, security issues, or other reasons. When this happens, we make the announcement as soon as possible.
Please see the TRIRIGA Supported Versions information at the following link for the latest information.
For general convenience here is the link to the IBM Software Support Lifecycle site. Here you can search for similar information for any IBM product.
There is a potential issue with the shutdown process for TRIRIGA. This is specific to versions prior to TRIRIGA 3.5.1 running under Windows with WebSphere Liberty Profile (WLP). We have found that the SHUTDOWN.BAT file may not end the session. If you encounter this issue, you can resolve it by adding the path as described below.
In TRIRIGA 3.5.1, both RUN.BAT and SHUTDOWN.BAT include a step to change directory to the location of the server.bat file, as shown below.
(Note: In Notepad there are no spaces or carriage returns between the rows, so the file appears as one run-on line beginning with "@echo offset JAVA...". It looks like this in other viewers:)
TRIRIGA 3.5.1 (run.bat)
@echo off
echo JAVA_HOME : %JAVA_HOME%
echo CLASSPATH : %CLASSPATH%
cd /d /your_path_to_tririga/wlp/bin
server.bat start tririgaServer
TRIRIGA 3.5.1 (shutdown.bat)
cd /d /your_path_to_tririga/wlp/bin
server.bat stop tririgaServer
However, in previous versions that path is not included:
TRIRIGA 3.4.2 (shutdown.bat)
server.bat stop tririgaServer
TRIRIGA 3.5 (shutdown.bat)
server.bat stop tririgaServer
If you encounter an issue where the shutdown.bat file is not working, you can resolve it by adding the path to the server.bat file, similar to the example below.
cd /d /your_path_to_tririga/wlp/bin
server.bat stop tririgaServer
Romain_Barth
Is there a way to delete multiple linksets from a link module including all the contained links?
The DOORS UI does not provide that feature, but it is possible using DXL.
Here is a sample that will delete all the linksets and links contained in a specific project:
First, open your link module in Exclusive Edit mode, then run the script above.
JeffLong
With the changes delivered in IBM TRIRIGA Platform 3.5.1, there is now a better way to track changes to objects in TRIRIGA, and a new naming convention has been introduced for Object Labels and Revisions.
Here is a great link providing information on this change to the TRIRIGA Platform, along with additional links to related resources:
doboski
One very powerful aspect of TRIRIGA is the ability to configure it for your business needs. Of course, the biggest drawback to TRIRIGA is the fact that it is configurable! OK, that's a bit of an inside joke here in the IBM TRIRIGA Support team. We love that people can make TRIRIGA do what they need it to do, but if they don't follow best practices, everyone suffers. The end users suffer slowness, the administrators try to figure out what has gone wrong, the IT team struggles with hardware and architecture configuration, and the IBM Support team has to figure out what the implementer did that may be causing the problem, even though it is probably outside the scope of support. Queries are one of those things that can impact your performance.
So I want to specifically talk about custom queries and what you can do. Depending on how you create them, custom queries can have an impact on the performance of your system. When you create an Association Filter in a custom query, you want to avoid using –Any for the module selection and All for the business object selection. The reason to avoid those particular selections for your Association Filter is that they can cause report performance issues. This is mentioned in the 3.4.2 Reporting User Guide found here: http
You are better off specifying a specific association type in your Association Filter than to use –Any.
Fields with special characters in their names, like < >, &, * or -, can also impact your performance. While field names should not have these characters in them, it could happen somehow. If any of your fields have special characters in their names, this is going to cause issues, because the reports that reference them will not be able to be built. So it is a good idea NOT to use any special characters when creating your field names, and if a field name does contain one, it is best not to reference that field in a query.
It is important to remember that if you have modified your custom queries, you need to clear the query cache from the TRIRIGA Admin Console for the change to take effect.
You will find this performance tip and many more in section 7.2.2 of the TRIRIGA Performance Best Practices guide at the link below. Support will often refer clients who report poor performance to this guide before engaging in any troubleshooting.
IBM TRIRIGA Microsoft SQL Server database Best Practice recommendations to be reviewed by MS SQL Server DBA
Provide suggestions and recommendations for the IBM TRIRIGA MS SQL Server database environment, along with monitoring and trouble resolution steps to quickly discover and prevent performance degradation and instability.
This document is in response to performance-related issues occurring in installations of the TRIRIGA application. Key areas to address:
MS SQL Server performance problems can be difficult to pinpoint and require inspection of many aspects of the environment. There is no magic button or specific step-by-step procedure that will locate every performance issue. What exist are guidelines for diagnosing and troubleshooting common performance problems. The purpose of this document is to provide a general methodology for diagnosing and troubleshooting MS SQL Server database instability and performance problems in common scenarios.
Configuration, tuning, and sizing issues in MS SQL Server may lead to various instability and performance issues on the database end. The instructions listed below are to be reviewed, confirmed, and applied by the customer's DBA whenever necessary.
Typical findings at TRIRIGA customer sites
There are basic best practices that should be implemented on production MS SQL Servers.
SQL Server Best Practices
SQL Service: The account that runs the SQL Server Service should be included in the two Local Security Policy Groups below:
“Lock Pages in Memory” – Prevents SQL Server allocated memory from being swapped to disk.
“Perform volume maintenance tasks” - Enables Instant File Initialization: the system will not zero-fill storage when it is allocated; instead, the storage remains uninitialized until SQL Server writes data to it, which avoids the cost of zero-filling large files up front.
1. Click Start.
2. Go to Administrative tools | Local Security Policy.
3. Click Local Policies Click User Rights Assignment.
4. Scroll down to “Lock pages in memory”, and double-click it.
5. Click Add User or Group, and enter the SQL Server service account name.
6. Scroll down to “Perform volume maintenance tasks”, and double-click it.
7. Click Add User or Group, and enter the SQL Server service account name.
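If you want to confirm afterwards that Lock Pages in Memory is actually in effect for the instance, one option (a minimal sketch, assuming a reasonably recent SQL Server release) is to check the locked page allocations reported by the process memory DMV:

-- Returns a value greater than 0 when SQL Server is using locked pages,
-- i.e. the "Lock Pages in Memory" privilege is in effect for the service account.
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;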
Server Memory: Change “Max Server Memory” to coincide with the chart below. This setting controls how much memory can be used by the SQL Server buffer pool. If you don't set an upper limit for this value, other parts of SQL Server and the operating system can be starved for memory, which can cause instability and performance problems. These settings are for x64, on a dedicated database server running only the DB engine.
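As a minimal sketch of how the setting itself is applied (the 12 GB figure below is only an illustrative placeholder, not a recommendation; size it for your server's RAM):

-- Example only: cap the buffer pool at 12 GB (value is in MB); adjust for your hardware.
EXEC dbo.sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC dbo.sp_configure 'max server memory (MB)', 12288;
RECONFIGURE WITH OVERRIDE;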
“Max Degree of Parallelism”: Set to 4. Currently set at 0 (unlimited parallelism). The change takes effect immediately; it will limit parallelism to 4 threads and decrease CXPACKET waits.
EXEC dbo.sp_configure 'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC dbo.sp_configure 'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;
TempDB: Add data files to match the number of cores. Running with a single tempdb data file can create unacceptable latch contention and I/O performance bottlenecks. To mitigate these problems, allocate one tempdb data file per processor core. (This is different from the practice for user-defined database files, where the recommendation is 0.5 to 1 data file per core.) Change the growth rate to a fixed increment. Turn on AutoStats.
--Simple script to generate the tempdb file SQL; change the file path for your environment. Creates 2GB files with 512MB growth. (Generate results as text, copy, paste, and run. Takes effect after a SQL restart.)
set nocount on
declare @filename int
select @filename = 1
while @filename <= 32
begin
    select 'ALTER DATABASE [tempdb] ADD FILE ( NAME = N''tempdev' + convert(varchar(3), @filename)
        + ''', FILENAME = N''D:\SQLData\tempdev' + convert(varchar(3), @filename)
        + '.ndf'', SIZE = 2048MB, FILEGROWTH = 512MB )'
    select @filename = @filename + 1
end
In Windows 2008, networking should be set to “Maximize data throughput for network applications”.
Database Best Practices
Autogrow: Change Autogrow on data and log files, increase growth amount to 500MB+ per file depending on rate of growth.
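A sketch of that change (the database and logical file names below are placeholders; use the names returned by sys.database_files for your own data and log files):

-- Example only: switch the data and log files from percentage growth to a fixed 500MB increment.
ALTER DATABASE [tridata] MODIFY FILE (NAME = N'tridata_data', FILEGROWTH = 500MB);
ALTER DATABASE [tridata] MODIFY FILE (NAME = N'tridata_log', FILEGROWTH = 500MB);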
Auto Update Stats: Change to Async. Prevents SQL from waiting on stats to update.
The optimizer will initiate a statistics update operation when about 20% of the rows in an index have changed. Normally, whatever query was being optimized at the time will then have to wait until the statistics are updated before it can finish the optimization phase and begin executing. With the 'async' option, a separate thread will update the stats, and the query that was being optimized will continue and use the older stats when its plan is generated.
Update Statistics: Consider running on a nightly or weekly basis with fullscan.
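A minimal sketch of both statistics recommendations (the database name is a placeholder, and the UPDATE STATISTICS line would normally be run per table from a scheduled SQL Agent job):

-- Let queries keep using existing statistics while updated statistics are built on a background thread.
ALTER DATABASE [tridata] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
-- Nightly or weekly maintenance: refresh statistics on a heavily used table with a full scan.
UPDATE STATISTICS dbo.IBS_SPEC WITH FULLSCAN;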
IBM TRIRIGA specific recommendations
Increase Tridata data files to 0.5 to 1 data file per core.
All files should have exactly the same size and growth settings.
Suggestion to equalize the current server:
Create a new filegroup and add 4-8 files large enough to hold the largest table in Tridata.
Rebuild the table's clustered index on the new filegroup with DROP_EXISTING. This allows SQL Server to stripe the table data and indexes across the newly created files without having to unload and reload the database. Repeat as needed for similar groupings, perhaps another group for reference and configuration tables, and so on, until the Primary filegroup can be reduced. A sketch of the approach follows.
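A minimal sketch, assuming hypothetical file paths, a hypothetical large table, and a hypothetical clustered index name; substitute the real names from your Tridata database and add as many files as your core count and table size call for:

-- 1. Create a new filegroup and spread equally sized files across it (two shown; use 4-8 in practice).
ALTER DATABASE [tridata] ADD FILEGROUP [TRIDATA_FG1];
ALTER DATABASE [tridata] ADD FILE
    (NAME = N'tridata_fg1_1', FILENAME = N'D:\SQLData\tridata_fg1_1.ndf', SIZE = 4096MB, FILEGROWTH = 500MB),
    (NAME = N'tridata_fg1_2', FILENAME = N'D:\SQLData\tridata_fg1_2.ndf', SIZE = 4096MB, FILEGROWTH = 500MB)
    TO FILEGROUP [TRIDATA_FG1];
-- 2. Rebuild the existing clustered index of a large table onto the new filegroup in one operation.
CREATE CLUSTERED INDEX [IX_MY_LARGE_TABLE_CLUSTERED]
    ON dbo.MY_LARGE_TABLE (SPEC_ID)
    WITH (DROP_EXISTING = ON)
    ON [TRIDATA_FG1];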
Snapshot Isolation to reduce blocking:
Consider testing with snapshot isolation and read committed snapshot:

ALTER DATABASE << My Database >> SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE << My Database >> SET READ_COMMITTED_SNAPSHOT ON;

A session can then opt in to snapshot isolation explicitly:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT EmployeeID, LastName, FirstName, Title FROM << My Table >>
Keep a close watch on unused indexes and missing indexes. The first query below suggests potentially helpful missing indexes; the second lists nonclustered indexes that are rarely read but frequently written and may be candidates for removal.
SELECT sys.objects.name AS table_name
     , migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure
     , migs.user_seeks + migs.user_scans AS total_seeks_and_scans
     , 'CREATE NONCLUSTERED INDEX ix_IndexName ON ' + sys.objects.name COLLATE DATABASE_DEFAULT + ' ( ' + ISNULL(mid.equality_columns, '')
       + CASE WHEN mid.equality_columns IS NULL THEN ''
              ELSE CASE WHEN mid.inequality_columns IS NULL THEN '' ELSE ',' END END
       + ISNULL(mid.inequality_columns, '') + ' ) '
       + CASE WHEN mid.included_columns IS NULL THEN ''
              ELSE 'INCLUDE (' + mid.included_columns + ')' END AS create_index_statement
FROM sys.dm_db_missing_index_groups mig WITH (nolock)
INNER JOIN sys.dm_db_missing_index_group_stats migs WITH (nolock) ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid WITH (nolock) ON mig.index_handle = mid.index_handle
INNER JOIN sys.objects WITH (nolock) ON mid.OBJECT_ID = sys.objects.OBJECT_ID
WHERE (migs.group_handle IN
       (SELECT TOP (500) group_handle
        FROM sys.dm_db_missing_index_group_stats WITH (nolock)
        ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC))
ORDER BY 2 DESC , 3 DESC
SELECT o.name, indexname = i.name, i.index_id
     , reads = s.user_seeks + s.user_scans + s.user_lookups
     , writes = s.user_updates
     , rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
     , CASE WHEN s.user_updates < 1 THEN 100
            ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
       END AS reads_per_write
     , 'DROP INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(o.name) AS drop_statement
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND s.database_id = DB_ID()
  AND i.type_desc = 'nonclustered'
  AND i.is_primary_key = 0
  AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads
SQL Server Monitoring
Management Data Warehouse
The Management Data Warehouse should be turned on for monitoring SQL Server. This tool provides baseline information and allows for timeline snapshots of performance data, near real-time performance reporting, and query history.
SQL Black Box Trace
Consider turning on the MS SQL Server black box trace to run at startup. The trace data can then be viewed when blocking occurs without having to start a new trace; it uses two rolling 25MB files. This data can be reviewed in Profiler and also sent to application providers for further analysis.
SELECT * FROM fn_trace_getinfo (2);
SELECT * FROM fn_trace_gettable (
'C:\Program Files\Microsoft SQL Serv
Autostart Blackbox with two rolling logs:
CREATE PROCEDURE StartBlackBoxTrace
AS
BEGIN
    DECLARE @TraceId int
    DECLARE @maxfilesize bigint
    SET @maxfilesize = 25
    -- Option 8 = TRACE_PRODUCE_BLACKBOX; the trace file name and location are managed by SQL Server.
    EXEC sp_trace_create @traceid = @TraceId OUTPUT,
         @options = 8,
         @tracefile = NULL,
         @maxfilesize = @maxfilesize
    EXEC sp_trace_setstatus @TraceId, 1
END
-- Register the procedure to run automatically at instance startup (it must be created in master):
EXEC sp_procoption @ProcName = 'StartBlackBoxTrace', @OptionName = 'startup', @OptionValue = 'true'
The topic of chasing SQL performance issues has been documented by various SQL “gurus”, but they all source the same document. It was written for SQL Server 2005, but nothing about the methodology has changed. This Microsoft document covers the areas of concern, how to determine the cause, and possible resolutions.
Highlights from the documentation:
Discover a CPU bottleneck; investigate further if “runnable_tasks_count” is consistently above zero:

select scheduler_id, current_tasks_count, runnable_tasks_count
from sys.dm_os_schedulers
where scheduler_id < 255
What is running the most and recompiling; start at the top and work down:

select top 25 sql_text.text, sql_handle, plan_generation_num, execution_count, dbid, objectid
from sys.dm_exec_query_stats a
cross apply sys.dm_exec_sql_text(sql_handle) as sql_text
where plan_generation_num > 1
order by plan_generation_num desc
Create and use the sp_block proc:

create proc dbo.sp_block (@spid bigint=NULL)
as
-- This stored procedure is provided "AS IS" with no warranties, and
-- confers no rights.
-- Use of included script samples are subject to the terms specified at
-- T. Davidson
-- This proc reports blocks
-- 1. optional parameter @spid
select t1.resource_type
     , 'database' = db_name(resource_database_id)
     , 'blk object' = t1.resource_associated_entity_id
     , t1.request_mode, t1.request_session_id, t2.blocking_session_id
from sys.dm_tran_locks as t1, sys.dm_os_waiting_tasks as t2
where t1.lock_owner_address = t2.resource_address
  and t1.request_session_id = isnull(@spid, t1.request_session_id)
Check the “Analyzing operational index statistics” section in the appendix of that document for a script to evaluate index usage.
Monitor for Excessive compilation and recompilation
SQL Server: SQL Statistics: Batch Requests/sec
SQL Server: SQL Statistics: SQL Compilations/sec
SQL Server: SQL Statistics: SQL Recompilations/sec
SQL Server: Buffer Manager object
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number Lazy writes/sec
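These counters come from Perfmon, but the same values can be sampled from inside SQL Server as well; a minimal sketch using sys.dm_os_performance_counters (the per-second counters are cumulative, so take two snapshots a known number of seconds apart and compare the difference):

-- Point-in-time and cumulative values for the SQL Statistics and Buffer Manager counters listed above.
SELECT RTRIM(object_name) AS object_name, RTRIM(counter_name) AS counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%SQL Statistics%'
       AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec'))
   OR (object_name LIKE '%Buffer Manager%'
       AND counter_name IN ('Buffer cache hit ratio', 'Page life expectancy', 'Checkpoint pages/sec', 'Lazy writes/sec'));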
PhysicalDisk Object: Avg. Disk Queue Length represents the average number of physical read and write requests that were queued on the selected physical disk during the sampling period. If your I/O system is overloaded, more read/write operations will be waiting. If your disk queue length frequently exceeds a value of 2 during peak usage of SQL Server, then you might have an I/O bottleneck.
Avg. Disk Sec/Read is the average time, in seconds, of a read of data from the disk. The following guidelines apply:
Less than 10 ms - very good
Between 10 - 20 ms - okay
Between 20 - 50 ms - slow, needs attention
Greater than 50 ms - serious I/O bottleneck
Avg. Disk Sec/Write is the average time, in seconds, of a write of data to the disk. Please refer to the guideline in the previous bullet.
Physical Disk: %Disk Time is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests. A general guideline is that if this value is greater than 50 percent, it represents an I/O bottleneck.
Avg. Disk Reads/Sec is the rate of read operations on the disk. You need to make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
Avg. Disk Writes/Sec is the rate of write operations on the disk. Make sure that this number is less than 85 percent of the disk capacity. The disk access time increases exponentially beyond 85 percent capacity.
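If Perfmon data is not at hand, per-file read and write latency can also be approximated from inside SQL Server; a minimal sketch using the virtual file stats DMV (the figures are averages since the instance last started, not a live sample):

-- Average read and write latency in milliseconds per database file since instance startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;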
Microsoft and 3rd party supporting information
Troubleshooting Performance Problems in SQL Server - http
SQL 2008 Management Data Warehouse (performance monitoring) - http
SQL Server 2005 Waits and Queues & SQL Server Best Practices Article - http
Romain_Barth
By using DXL, however, you can do this task quickly. Here is a sample of code to help you perform it:
dmmckinn
Have you seen the Rational Team Concert 6.0.2 videos? The following videos provide information about features available in version 6.0.2. They were posted a few months ago, but we thought it would be good to bring them back into the spotlight.
The full playlist, which includes all of the following videos, is available in Coll
For further information about other capabilities available in CLM version 6.0.2, you may want to check out Highlights of 6.0.2.
AcdntlPoet
Maximo JSON API: Getting Started -
And don't miss the other three follow-on videos for more Maximo JSON API lessons and information:
Chris K
The IBM TRIRIGA CAD Integrator tool is extremely helpful in many client environments. For many clients, space records are created and managed via the CAD drawing, which means that errors during the process can seriously hamper your productivity. Today I will review one particular error and describe how you can use the information in that error to start your own investigation into the issue.
Below is an example of an error that can occur during the Sync Full process in CAD Integrator. Values in the error have been changed to be generic; your error message will contain numbers specific to your database.
2016-05-20 11:04:51,738 ERROR [com
First, read the message so you can get a general idea of the problem. There are two key parts of the message that tell you what is happening. You can see that the sync process failed. The information immediately after "Sync failed." identifies the path within the code that raised the error. After that path information is a more critical piece needed for further analysis, specifically: "Attach associated object could not find the association to use." From this we can see that the sync failed because one of the objects appears to be missing an expected association to another record.
Next, we are going to get into the details of the error.
The first value, SSSSSSSS, is actually the spec_id for a triSpace record. You can use that value to query the IBS_SPEC table to determine the name of the space. The SQL statement to find this information is:
select * from ibs_spec where spec_id = SSSSSSSS
This will display all of the fields in the IBS_SPEC table. Besides the name of the space, you will also be able to see the location information which will allow you to get to the underlying record in TRIRIGA.
The next value, PPPPPPPP, is the spec_id for the triPeople record. We can then reuse the select statement above by simply using the PPPPPPPP value rather than the SSSSSSSS value.
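To compare the associations on those two records, one option is to query the association table directly; a minimal sketch, assuming the standard IBS_SPEC_ASSIGNMENTS table and substituting the spec_id values from your own error message:

-- Lists every association (and its type) recorded for the space and the person spec_ids.
select spec_id, ass_type, ass_spec_id
from ibs_spec_assignments
where spec_id in (SSSSSSSS, PPPPPPPP)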
This will give you the information you need to look at the associations that are on the space record as well as the people record. If the Sync Full process does NOT fail on a different drawing, you could use a space record and associated person record from that drawing as a guide to determine the missing association. You should then be able to create the required association and run the Sync Full process again. It is important to note that the Sync Full process is more inclusive than the Sync Area process with regards to the various records associated to the space. If Sync Full fails and Sync Area does not, then you will need to review the error in the ci.log file as described in this blog post to determine the records that are causing the error to occur.
IBM TRIRIGA - Secure Sockets Layer (SSL) between the TRIRIGA Application & Database is not supported
JeffLong
We were recently asked for guidance on setting up SSL (Secure Sockets Layer) between the TRIRIGA application and the TRIRIGA database. Although this may be technically possible, setting up SSL between the TRIRIGA application and the TRIRIGA database is not recommended and is not supported by IBM TRIRIGA Support.
If you have a need for enhanced security for your IBM TRIRIGA solution, please contact IBM TRIRIGA Support for assistance. We will work with you to offer supported solutions that meet your needs.