I am new to the IBM TRIRIGA Support organization, working as a Level 2 Support Engineer. When reviewing PMRs, I have observed that a lot of time is spent going back and forth between the customer and the support engineer. This seems to be due, in part, to not enough information being provided initially. Many times I have seen in a PMR that the customer is getting some error. Sometimes they just say they are getting an error, or they report the specific error with no information about how it happened, what version they were using, or what they were doing. Often the steps are vague, with the client assuming that TRIRIGA engineers should know what they are trying to do.
I wanted to share what happens in support so that clients might understand why information about a problem is so important. When support receives a PMR, we try to reproduce the issue based on the information given to us. If we are not given enough information, we are forced to collect it through requests that can take days or even weeks. Time zones play a role: each email can take a full day to reach the right people and get a response. In some cases, if we are not provided with enough information, we may fail to replicate the problem. That does not mean it is not an issue; it just means we may have replicated it incorrectly because we are missing information, or because there are configurations or customized workflows that we do not have. There could also be a difference in how the customer does something versus how the support engineer does it, because with software there can be more than one way of doing something. If we need additional information, it just takes that much more time.
We recognize that your time is valuable, and it can be frustrating going back and forth to get the information needed to reproduce an issue. What would help us in IBM TRIRIGA Support is for clients, when entering a PMR, to provide detailed step-by-step instructions, as if you were asking your non-tech-savvy grandmother to reproduce the issue. It may sound corny, but it really is all in the details. As I mentioned, there can be more than one way to do something, and left to our own devices we might not do it the same way as you (the customer), so the more details the better.
If you have ever cooked from a recipe, you followed a series of steps. Think of that approach when entering your steps to reproduce an issue.
The more detailed you are in your initial entry on the PMR, the less time is spent going back and forth to get the steps, and the more time can be spent reproducing and resolving your issue.
Remember, we have a document we often call a "Must Gather" or "Information To Collect" document for TRIRIGA PMRs. You should always submit it when you open a PMR. You can generally fill it out once and save it so you always have it handy to attach to PMRs; just remember to update it when something changes. See it here:
G.G.Dexter
IBM continually strives to find new and better ways to improve the support experience we provide. With that in mind, we are pleased to announce upcoming improvements to our support model for our On-Premise products. When these offerings are migrated to the new Support Community, the Support Portal or Service Request tool will redirect you to it, where you can log in using your existing IBM ID and password. For your convenience, a "Provide Feedback" link is provided at the top and bottom of the page.
You can find a list of common FAQs for your offerings in the following technotes:
We hope you enjoy the enhancements of this new IBM support experience and welcome your feedback.
doboski
In this day and age, security is a very hot topic, and as soon as one vulnerability is identified and mitigated, another one is found. It is a vicious circle of identifying and addressing that does not seem to let up. Our fixpack release notes list information about vulnerabilities that were mitigated without an APAR; sometimes a vulnerability is addressed as an APAR instead.
The reason I mention security vulnerabilities is that sometimes, when they are resolved, there is a side effect on existing functionality, and it may not always be clear. Fixing these vulnerabilities can effectively "change" functionality.
As an example, in the 3.5.2 release, there is an APAR noting that external URL navigation items will now open in a new window to avoid cross-origin scripting vulnerabilities. Prior to the 3.5.2 release, an external URL in the navigation simply opened in the same window. We have seen some cases where clients wanted the original design, but that is no longer possible, since the change was made to fix a security vulnerability. The current behavior is correct and cannot revert to the old design. So in this case there was an APAR referenced, but in others there may not be. You can look at the 3.5.2 release notes (found here http
As the product develops and security vulnerabilities are found and addressed, it could mean a change in how something works. Reading the release notes can be a source of information but it may not always be clear why something changed. We all know change is hard, especially when we are so used to it working a certain way. I don’t know about you, but if the change was made to address a security vulnerability, I can live with that and accept the change.
GiuCS
Well, you install TRIRIGA 3.4.2 or later and see how cool it is to use the Liberty profile for WebSphere Application Server (WAS). This lets you install both the application server and TRIRIGA in one stroke; the TRIRIGA installer deploys it for you. What an easy deal!
After installing TRIRIGA with the WAS Liberty profile, though, the user who logged in and started the service cannot log out, or the service will stop. Many businesses do not allow an Admin user to be logged in indefinitely.
Now you're thinking you have to install all over again, with a full WAS deployed first. You double-check and see that WAS Liberty cannot run as a service; it's a product limitation. What a drag...
Well, there is one workaround you can try: using the Apache Commons Daemon, you can create a Windows service. The following instructions are one idea for how you can implement it, but it depends on your architecture and installation:
Note: If you run into any issues, check the logs in the LogPath directory location for errors; the entries in the Windows Event Viewer logs are not meaningful.
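As a sketch only (the install path, service name, and log directory below are assumptions for a default install; adapt them to your architecture), the Apache Commons Daemon service runner, prunsrv.exe, can register the Liberty server's start and stop batch files as a Windows service:

```batch
REM Sketch: register a "TRIRIGA" Windows service that starts and stops
REM the WAS Liberty profile via the TRIRIGA batch files.
REM Paths below assume a default install under C:\IBM\Tririga.
prunsrv.exe install TRIRIGA ^
  --DisplayName="IBM TRIRIGA (WAS Liberty)" ^
  --Startup=auto ^
  --StartMode=exe --StartImage=C:\IBM\Tririga\bin\run.bat ^
  --StopMode=exe  --StopImage=C:\IBM\Tririga\bin\shutdown.bat ^
  --LogPath=C:\IBM\Tririga\servicelogs
```

After installing, the service can be started and stopped from the Windows Services console like any other service, and no interactive login is required.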
There is an RFE to implement this and I encourage you to vote for it:
Pavan Hoskeri
In my e
However, I hadn't yet signed up for Bluemix. The sign-up process was very smooth, and it didn't require a credit card to get started with the 30-day trial. If you already have an IBM ID, you can use it to sign up for Bluemix.
Here are the quick steps which I performed for getting the IoTF Boilerplate added to my Bluemix app:
1.> Log in to your Bluemix account and click "Create a space".
Please use a unique name for your space. If you choose a name that already exists, the wizard will warn you.
2.> Once the space is created, scroll to Applications, click "CREATE AN APP", and choose to create a web app.
3.> There is a set of Boilerplates that let you experience the power of Bluemix with minimal additional work on your part.
Please ensure that the Region is selected as US South for the IoTF Boilerplate to be available for selection.
Select the "Internet of Things Foundation Starter" Boilerplate and use the "SDK for Node.js™".
4.> Once the application is created, you will have a Routes URL for it; clicking that URL takes you to the Node-RED for Internet of Things landing page.
Node-RED provides a browser-based editor that makes it easy to wire together flows that can be deployed to the runtime in a single click. The version running here has been customized for the IBM Internet of Things Foundation.
It’s strongly recommended to secure your Node-RED flow editor with a username and password, as otherwise anyone who can guess the URL of this application will be able to launch the flow editor and access your IoT device data
5.> By default, this is the information that you’d see in the Node-RED flow editor:
The flow of events is generally from left to right i.e. you’d have your input nodes on the left and output at the right side of the editor window.
6.> Double click on the "IBM IoT App in" input node. This brings up the Edit node window. Keep all the values at their defaults; the only input you need to provide is the "Device Id".
This Device Id is the value shown in the top right corner of the simulated device. If you've read my earlier blog, the Device Id assigned to the simulated device was "CC:BA:99:12:B7:62", and this is what you'd use as the value in the input node.
7.> Once the device id has been entered, click on the “Deploy” icon on the right corner of the Node-RED Editor. If you’ve entered the correct device id, the deployment should be successful and you’d start seeing the messages in the “debug” output pane on the right of the Node-RED editor window.
8.> Based on the temperature of the simulated device, the debug output prints whether the temperature is within safe limits or critical.
9.> A switch node has been inserted and configured. When a temperature reading arrives from the simulated device, a value less than 40 is routed to output # 1, which has a debug/output node that reports the temperature is within safe limits. A value greater than 40 goes to output # 2, whose debug/output node reports that the temperature is critical.
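The switch node's routing decision can be sketched in Python (an illustrative re-implementation only; the real flow runs inside Node-RED, and the payload shape used here is an assumption about the simulated device's messages):

```python
def route_temperature(reading):
    """Mimic the sample flow's switch node: route a simulated-device
    reading to output 1 (safe) or output 2 (critical)."""
    temp = reading["d"]["temp"]  # assumed payload shape for the simulated device
    if temp < 40:
        return 1, "Temperature (%s) within safe limits" % temp
    return 2, "Temperature (%s) critical!" % temp

# Two hypothetical readings, one safe and one critical:
for payload in ({"d": {"temp": 17}}, {"d": {"temp": 55}}):
    output, message = route_temperature(payload)
    print("output #%d: %s" % (output, message))
```

In the real editor, the two return branches correspond to the two wires leaving the switch node, each ending in a debug node.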
You can play around with the various options that the Node-RED Editor provides for input, output, functions, etc. The standard example sends the output messages to debug output nodes; you can replace them with any of the provided social media nodes. For example, you can send a Tweet to an authorized Twitter account if the temperature goes beyond a certain level, so that corrective action can be taken.
The above demo is just the tip of the iceberg. Given the immense number of features and the flexibility that IBM Bluemix and IBM IoTF provide, I would say it's up to an individual's creativity and skill how best to leverage the power of these platforms for building innovative applications quickly and efficiently!
mccarron
Are you new to the IBM Support Community and managing your support cases opened with IBM? Would you like to have the ability to see case updates in your email from support?
A new feature was recently implemented that will allow you to see your case updates directly in your email. You no longer have to log into the community to see your case updates. To do this go to "My Settings" (located in the upper right hand corner of the community) and under Case Notification Settings select the following:
See Case Email Notifications for further details.
Anisat Simmons
IBM Watson IoT Support Lifecycle Resources
IBM provides advance notification of End of Support (EOS) dates, allowing customers reasonable time to complete software upgrades or to refresh application products. EOS announcements are made in April and September.
Announcement letter dates are U.S. only. Information for other country announcements is available on the IBM Offering Information page. Select the date to view the announcement letter. Note that some product versions may not have online announcement letters.
View all IBM Software EOS announcements for 2016 and 2017.
This section describes some of the standard and enhanced IBM Software Support Lifecycle Policies and common questions. Additional details and answers to commonly asked questions regarding the Support Lifecycle Policy can be found on our Frequently Asked Questions page.
Q: What are the major Support Lifecycle milestones?
A: The major Support Lifecycle milestones are:
Q: How do you determine if your installed software is still supported?
A: Search by product name or keyword using the Supp
Q: What happens when EOS is announced?
A: Often, there is a newer version of the software available for download. In most cases, you’ll have sufficient time to plan for and install the latest version. For more information on the lifecycle stages, including EOS, view this short YouTube video on the IBM
Q: What is the standard version format for IBM Software products?
A: The full product version is expressed by a four-digit code known as the IBM Version, Release, Modification and Fix Level structure, or VRMF. View this Technote for additional information and description of each element. You may also find this Glossary of product support and maintenance terms helpful.
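As a quick illustration, a four-part version string can be split into its VRMF elements. This is a minimal sketch (the helper name and error handling are my own, not an IBM utility):

```python
from collections import namedtuple

# Version, Release, Modification, Fix Level
VRMF = namedtuple("VRMF", "version release modification fix")

def parse_vrmf(s):
    """Split a four-digit IBM version string, e.g. '3.5.2.1', into its
    Version, Release, Modification and Fix Level components."""
    parts = s.split(".")
    if len(parts) != 4:
        raise ValueError("expected four dot-separated elements: %r" % s)
    return VRMF(*(int(p) for p in parts))

print(parse_vrmf("3.5.2.1"))  # VRMF(version=3, release=5, modification=2, fix=1)
```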
Q: How can you connect with Watson IoT on social media?
Q: Where can you find more information on IBM Support policies?
A: You can view and download the IBM
IBM TRIRIGA - What are the differences between Fixpacks and Limited Availability Fixes and what are best practices for fixes?
JeffLong
Occasionally we have IBM TRIRIGA customers who need clarification on the differences between Fixpacks and Limited Availability fixes. I want to share this information with you and state best practice guidelines here:
GENERAL AVAILABILITY FIXPACK (GA FIX)
GA Fixpacks deliver product defect fixes that have undergone a full development release cycle and the most extensive QA testing of all maintenance releases.
These fixes are delivered for any issue reported either internally or externally regardless of severity. Fixpacks occasionally deliver minor functional enhancements and modifications to add or update supported platforms, browsers, databases, middleware, etc.
Fixpacks are cumulative and each new fixpack contains all fixes from all previous fixpacks/interim fixes for that release.
LIMITED AVAILABILITY FIX (LA FIX)
An LA Fix is an unofficial mechanism to deliver emergency fixes for severe product issues that cannot be delayed until the next regular maintenance delivery. LA Fixes also go by the names “1-off” or “1-off Hotfix” but they all mean a single APAR fix delivered directly to a customer from Support.
Conditions that may warrant an LA Fix
Risks associated with LA Fixes
BEST PRACTICES FOR FIXES
It is perfectly acceptable to take an LA FIX to address an issue when warranted. However, the risk associated with taking an LA FIX should always be weighed against the perceived benefits. If at all possible, it is always best to wait for a fully tested GA Fix. Also, if you do take an LA Fix, it should only remain in place until a GA Fix containing the fix needed is available. At that point, the GA Fix should be applied.
JeffLong
We have had a few customers contact us because they could not start or stop Tririga on the IBM WebSphere (WAS) Liberty profile. We found that the way to resolve the issue was to explicitly run the batch file as Administrator on the Windows OS.
If you have a need to restart Tririga on WAS Liberty Profile in Windows you must first find your tririga_root install path. If you used the default for the installer, this should be C:\IBM\Tririga. If you are not sure of the location, you can search for the following files (these should be in the Tririga \bin directory.): "run.bat" and "shutdown.bat"
To stop Tririga, navigate to the trir
Alternatively, on Windows servers, you can open a command prompt and run the command to shut down the Liberty profile: trir
To start Tririga back up, navigate to the trir
Alternatively, on Windows servers, you can open a command prompt and run the command to start the Liberty profile. trir
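Putting the steps above together, the stop/start sequence can be scripted as a batch sketch (paths assume the default install location mentioned earlier; run the prompt as Administrator and substitute your own tririga_root):

```batch
REM Run these from an elevated (Administrator) command prompt.
REM Default install path assumed; adjust to your tririga_root.
cd /d C:\IBM\Tririga\bin

REM Stop TRIRIGA and the Liberty profile:
call shutdown.bat

REM Start it back up:
call run.bat
```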
If you need further assistance please contact IBM Tririga support.
We sometimes hear in support that TRIRIGA performance is slow, with no other details given. That doesn't help us much. We need to know more: what was going on at the time? Is it impacting the entire system or just one area?
TRIRIGA performance is a bit complex, and there is rarely any one thing that can be done to improve it. Performance can be impacted by hardware, network connectivity, software versions, queries, indexes, customizations, configurations, and more, and the answer to performance concerns is often some combination of these things. But some things can be reviewed to help point you toward where to look and what to do.
Some things like hardware and network connectivity are out of our control and need to be reviewed by your own IT department or a business partner to perform a health check or performance analysis on your system. We do provide a list of minimum hardware requirements that TRIRIGA should be running on in our installation guides. We also have a compatibility matrix to show you what configurations are supported for your particular version here: http
To help diagnose the problem, TRIRIGA has performance logs that can be enabled, retrieved and analyzed. There is a wiki page that describes the process of enabling the performance log files as well as analyzing them, which can be found here:
The wiki will walk you through how to enable the performance log and then analyze the output.
In case of performance concerns, TRIRIGA Administrators should ALWAYS review best practices to ensure they are following recommendations before entering a PMR. The wiki regarding Performance can be found here: http
Once you have the performance log, you can create the following pivot table to help identify where something is taking too long.
Generally, if something is taking longer than 10 milliseconds, it is taking too long. For instance, in this example, you can look at the query for the report and see whether it is optimized correctly. You may need to look at your database to see if anything needs to be adjusted at the database level.
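If you export the performance log entries as (category, duration) rows, a small aggregation does the same job as the pivot table described above. This is a sketch under an assumed simplified two-column format, not the exact TRIRIGA log layout:

```python
from collections import defaultdict

THRESHOLD_MS = 10  # per the 10-millisecond rule of thumb above

def summarize(rows):
    """Aggregate durations per category and flag any category whose
    average duration exceeds the threshold."""
    totals = defaultdict(lambda: [0, 0])  # category -> [total_ms, count]
    for category, duration_ms in rows:
        totals[category][0] += duration_ms
        totals[category][1] += 1
    report = {}
    for category, (total, count) in totals.items():
        avg = total / count
        report[category] = (avg, avg > THRESHOLD_MS)
    return report

# Hypothetical sample entries: (operation, duration in milliseconds)
sample = [("QueryReport", 42), ("QueryReport", 38), ("RenderForm", 4)]
for category, (avg, slow) in sorted(summarize(sample).items()):
    print("%-12s avg %.1f ms%s" % (category, avg, "  <-- investigate" if slow else ""))
```

Here the report query averages well over the threshold, so that query would be the first candidate to examine.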
It should be noted that if you are using multiple servers in your environment, you need to access the console from the server that is having the problem. If you have three UI JVMs and two process servers, where one of the process servers runs the workflow agent, and you know you are having issues with workflow performance, then you would access the console from the server running the WF agent.
So using the performance log can help you identify what could be taking so long, and if you enter a PMR, that information helps us out as well. It is important to remember that TRIRIGA Support is committed to every client's success; however, performance is not typically covered as part of the support agreement. Our goal is to help point you in the right direction, but since most performance inhibitors are unrelated to TRIRIGA, the support team cannot commit to resolving performance-related issues. We may advise you, but the resolutions are often up to you.
Sizing recommendations for planning migration of data from Rational DOORS to Rational DOORS Next Generation
paulellis
Are you considering migrating your data from Rational DOORS to Rational DOORS Next Generation (RDNG)?
Are you looking for guidance and best practices before you begin, so that you are successful the first time?
If so, then your starting point is definitely the detailed guidance on developerWorks on how to Migr
New Sizing and Best Practice Guide on Jazz.net's Deployment wiki
Since the developerWorks article was published, we realized that more detailed sizing information was required prior to executing the migration of data packages from DOORS 9 to RDNG. We collated pertinent sizing information from the Rati
There are significant improvements to the import timings with RDNG 6.0.2 release, so please refer to this article if you are evaluating an existing or future migration as this could indeed be an influencing factor.
The document details:
Sizing Guidelines for Rational DOORS Next Generation 6.x
Recommendations on the maximum sizes of your modules, projects and repositories so as to maximize your success when importing your packages and working in the future within RDNG.
The considerations for hardware are simplified from guidance published elsewhere in the Deployment wiki, but here they are explained within the context of how to plan for your new world.
Guidance on how to convert your Rational DOORS modules prior to migration
Invariably there will be modules and projects within DOORS 9 which will not match up to the guidance prescribed for RDNG. Use this section to understand how to easily manipulate your data before migrating.
What if the data to be migrated exceeds the recommendations?
The guidance is clearly aimed for the general use cases and is very much our strongest recommendation.
It is understood that there are very large enterprise requirements management estates out there. It is recommended that you contact IBM if this applies to you.
JeffLong
You may get an error message that states, "Current browser does not support native SVG," even though Internet Explorer 11 64-bit (IE11) supports native SVG as delivered.
One customer tested with Firefox and did not get the message. However, when the customer tested other environments at their location with the IE11 browser, native SVG worked fine.
The conflicting results (IE11 working in other environments, and Firefox working in the environment where IE11 does not) make this tricky to troubleshoot. We will use a test SVG file to test the browser locally, and then test the same file on the web server, to see whether it works from both locations in IE11. If the file tests OK on the local PC but fails on the web server, that indicates a web server configuration issue.
First, create a simple SVG file for testing. Simply create a new text document and copy and paste this into it:
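The original snippet is not reproduced here, but any minimal SVG with a black circle anchored at the origin will behave as described; for example (an assumed stand-in, not necessarily IBM's original test file):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <!-- A circle centered at the origin: only its bottom-right quarter
       falls inside the canvas, so it appears as a partial black circle
       in the top-left corner of the browser window. -->
  <circle cx="0" cy="0" r="100" fill="black" />
</svg>
```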
Once this is copied into the file save the file to a known location on your computer and name it “simple.svg”.
Now, open your browser and find and open the simple.svg file that you created. If you can see part of a black circle in the top-left corner of your browser after opening the file, your browser should work with native SVG. The circle part will look something like the example below:
Testing the web server:
Place the same file that you tested on your PC onto your web server and using the web server url with the file location and name included, see if your browser loads the SVG file and if you see the circle part as you did in the previous test. If this test fails, you most likely have a web server configuration problem. If that is the case, please consult your web server vendor for help with configuring your web server to work with native SVG.
AcdntlPoet
Continuous Engineering for the Electronics Industry: See how IBM continuous engineering solutions can help you tackle the challenges and opportunities of Mobile, Internet of Things and Software, and help you Define better, Design Faster, Develop Smar
Learn more about the IBM Internet of Things Continuous Engineering Solution: Tools, best practices and services to help organizations create the connected products at the heart of the internet of things. The IBM Internet of Things Continuous Engineering Solution is designed to help manufacturers create smart, connected devices for the Internet of Things. This solution helps teams adopt continuous engineering practices to address cost, time and quality challenges in delivering complex, connected products. IBM is now adding new product line engineering (PLE) features to help engineers streamline the design of product lines while reducing data duplication and the chance of design errors.
And don't miss the featured white paper: The
Arun K Sriramaiah
Avoid using the RTC CLI (scm and lscm) to access NFS shares.
There have been issues where scm and lscm, run from the command line against NFS shares, do not work correctly. The details provided here will help you work around the issue.
* This applies when the home directories of all users are mounted on NFS.
* If you see a .nfsXXX file in the jazz-scm folder's daemon directory, the lscm process is likely accessing the NFS share.
NFS file systems can prevent the CLI (scm and lscm) from working. Refer to the article Tip: Using the SCM Command Line with NFS.
Try the lscm commands with the --config option pointing to a directory on a non-NFS file system.
How do you avoid passing the --config option on every command run?
The alternative to providing --config option on the command line is to set the environment variable SCM_
This way the user does not have to specify the --config option on every command.
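For example (the repository URL, user name, and directory below are placeholders, not values from this article), pointing --config at a local disk keeps the CLI's metadata off the NFS share:

```shell
# Use a configuration directory on a local, non-NFS file system:
lscm login -r https://jazz.example.com:9443/ccm -u myuser --config /var/tmp/jazz-scm
lscm list projectareas -r https://jazz.example.com:9443/ccm --config /var/tmp/jazz-scm
```

Setting the environment variable mentioned above achieves the same effect without repeating the option on every invocation.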