Scenario 1: When you query a work item and change its state to Completed from the query results, a message pops up asking you to associate a work item.
Scenario 2: When you change a work item's state to Completed in the Plan editor, it gives an error message asking you to associate a work item:
"Could not save work item.
No Duplicate work item was specified, click here to choose one."
Please find the steps below to enable the Duplicate link association for work items in the RTC Query view and Plan view (inline work item editor and Plan editor):
1. Open your project's configuration
2. Expand Project Configuration > Configuration Data > Work Items > Editor Presentations
3. In the Choose the Editor Presentation to edit drop-down, select your work item's inline editor (this would be com.ibm.team.workitem.web.inline for Defects)
4. Under one of the sections add a new presentation
5. In the Add Presentation dialog select Non-Attribute-based Presentation
6. In the Kind drop down select Links
7. Choose the link types to show, making sure to select Duplicated By
Note: if you add a "Duplicated By" link on the work item you are closing, it will still complain that you do not have the correct relationship for the work item association.
"Duplicated By" is the other end of the link for "Duplicate Of".
Consider work item A and work item B.
Work item B was opened, but it describes the same thing that is covered in work item A.
You want to close work item B because it is a duplicate.
You would need to add a link from B to A: a "Duplicate Of..." link pointing to work item A.
Work item A would then have a "Duplicated By..." link pointing back to work item B.
It's like a Tracks/Contributes or a Parent/Child relationship: two different ends of the same link.
Note: Please associate the work item using the "Duplicate Of" link from the work item you are closing.
Did you see the blog by one of our partners detailing the new features in ClearCase 9.0? The author provides key tips on installing and using the latest version of the tool. Additionally, included in the blog is a video explaining how to integrate ClearCase 9.0 with Visual Studio 2015. Click the link below for more details.
Meet Tanmay Bakshi, a 12-year-old IBM Cloud and Watson advisor! You read that correctly: he's 12. Tanmay has accomplished so much during his short time on the planet, and I imagine he has much more to learn and share. How wonderful is it that he has found his passion so early in life?! People search their whole lives to find that "goodness". When he speaks about coding, IT, Watson, APIs, and technology, his whole body beams with excitement. Take a look at his YouTube channel for more information about him and the topics he covers.
Specifically, check out this video where Tanmay talks about IBM Watson technology and how it's changing the way data is being interpreted and used.
An IT revolution is happening, and our customers are using social channels for their support needs and requirements. A healthy social presence on technical support topics for Cloud products helps our clients be successful and meet their requirements. Social media knowledge sharing is one of the most trending topics right now. Unlike the structure of knowledge management, social media allows our support engineers to use their voice and be creative while producing meaningful content on the recurring topics our customers call support about. The fact that they are subject matter experts on these topics is icing on the cake. This content then starts the conversation, and crowdsourcing occurs as the community shares its own experiences and expertise. This engagement is where the value lies and relationships are formed.
Follow the Cloud portfolio on our various social channels and engage! You'll find the technical support content needed to solve problems.
There are scenarios where you could see a blank page while working with Rational Asset Manager (RAM). One such scenario you might come across is a blank web page being displayed when you try to add a user to the Access Control black/white list in RAM via Administration > Configuration > Add user to Access Control [Black|White] list.
This failure to add a user to the Access Control list can occur if the registered RAM user's information conflicts with the user registry information stored in your directory service, such as LDAP.
Are your users being managed via LDAP?
If yes, it is quite possible that the registered RAM user information conflicts with the LDAP registry information.
You might also see the failure to add the user to the list recorded as an exception in the log file (ramdebug.log).
This has been identified as a known issue and is logged as a defect.
To resolve this:
First resolve any user conflicts.
Then try refreshing the page and attempt to add the user to the desired list.
You can find all the details on configuring users for LDAP integration here.
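If you want to check quickly what your LDAP server actually returns for a problem user, and compare it against the registered RAM user information, a small standalone lookup can help. Below is a minimal sketch using standard Java JNDI; the server URL, bind credentials, base DN, and uid filter are hypothetical placeholders, so substitute your own values:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapUserCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Hypothetical server and bind credentials -- replace with your own.
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            // Search for the same user ID that fails to be added in RAM.
            NamingEnumeration<SearchResult> results =
                    ctx.search("dc=example,dc=com", "(uid=jsmith)", controls);
            while (results.hasMore()) {
                SearchResult result = results.next();
                // Compare these attributes with the user's registered RAM information.
                System.out.println(result.getNameInNamespace());
                System.out.println(result.getAttributes());
            }
        } finally {
            ctx.close();
        }
    }
}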
Data correlation is a buzzword you will hear when handling dynamic data values in performance testing.
So, what do "data correlation" and "data parameterization" actually mean?
There are many explanations on the internet, and different approaches to describing this buzzword. As it relates to performance testing, the task is about making the script handle the dynamic data values required by subsequent requests sent to the server. In a single statement, you can define it as follows:
"A request can include data that was returned in the response to a previous request. Associating data in this manner is called data correlation"
Many people now come up with questions like:
1) How do I identify which values are dynamic in nature?
2) Which request would require this dynamic value prior to hitting the server?
...... and so on.
Remember that identifying these values is not generic; it depends on how the application under test responds and what it expects in order to deliver a successful response. These dynamic values generally reside in the response contents; upon identifying one, you create a "reference" value.
A subsequent request can then accept this reference value. You identify the specific value in that request's data (the substitution) that you want to replace with the reference value, so that it is carried along when the request is sent to the server.
You are probably now wondering how to start this task in RPT.
Start at the substitution site.
Highlight the value, right-click, and select Substitute > Select Data Source.
Make sure to select the Include potential matches option; RPT will find all the places in the responses where that value exists and show them to you.
You can then pick the one you want, and RPT will create a reference for you.
You can edit the properties of the reference if the regular expression is not unique enough for you. RPT is built on regular expressions for harvesting the data. It uses Java regular expression syntax, and there are many web sites that will guide you through that syntax if you need it.
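Since RPT harvests data with Java regular expressions, it can help to see the mechanism in miniature. The following standalone sketch extracts a dynamic value from a response fragment with a capture group; the response text and the sessionId field are made-up examples, and RPT generates and manages its own expressions for you:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HarvestExample {
    public static void main(String[] args) {
        // Made-up response fragment containing a dynamic value.
        String response = "<input type=\"hidden\" name=\"sessionId\" value=\"A1B2C3D4\">";

        // The capture group marks the part to harvest as the reference.
        Pattern pattern = Pattern.compile("name=\"sessionId\" value=\"([^\"]*)\"");
        Matcher matcher = pattern.matcher(response);

        if (matcher.find()) {
            // The harvested "reference" value, ready to be substituted
            // into a subsequent request before it is sent to the server.
            String reference = matcher.group(1);
            System.out.println("Harvested reference: " + reference);
        }
    }
}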
Also, in the latest version of RPT there are data correlation rules that you can apply to a test. So, if your dynamic value has a definite pattern, you can write a rule for that correlation. You can then apply the rule set to each test you record, and the correlations will be made automatically for you.
All set, but how do you validate which values were created as references and which are the substitution entries?
Let's discuss this with an example here.
You can open the correlation rule file that was shared earlier in RPT and walk through the references and substitutions made.
In the screenshot below, a reference was created for a value that is part of a response header field and was named "X-ORACLE-BPMUI-CSRF".
Now, if you expand the "Create Reference" tree structure, you see the substitutions where this reference was used. In the screenshot below, the substitution was also named "X-ORACLE-BPMUI-CSRF".
You can copy the substitution name (X-ORACLE-BPMUI-CSRF) and search within the test script; this takes you to the requests where the substitution was made from the reference that was created.
As you are aware, it's important to keep your products updated to the latest release. In some cases, older versions go out of support, which means that if you run into difficulty within your environment, it's possible our technical support teams will not be able to assist you. In general, we have a wealth of documentation on our products (tech notes, blogs, dW Answers posts); however, nothing can replace the confidence of knowing that you can call support to get assistance at any time.
Periodically, visit the Announcements site below or review the Support Lifecycle page.
You may want to know what Engine Room data means for a load test using Rational Performance Tester (RPT). Here I will explain how to capture the Engine Room data for analysis or for troubleshooting RPT agent-related issues.
RPT Engine Room data
You can monitor the "RPT Engine Room" periodically on the agent locations to make sure that no unusual conditions are occurring in a location.
The Engine Room data is most useful if you have several samples as the workload increases: capture the data at regular intervals, for example at 100, 150, 200, and 250 users during a 500-user load, in HTML format. You can use the RPT agent's Engine Room to try to deduce what is happening in cases where agents get stuck or hang, fail with a "driver failed" error, or when the majordomo service terminates unexpectedly.
1. Collect the Engine Room data as follows:
You can reach the Engine Room from the RPT workbench by accessing the URL http://<agentname>:<port>/, where:
agentname is either the hostname or the IP address of the RPT agent machine where the load generator agent is installed and running, and
port is the port the RPT agent uses to communicate with the RPT workbench.
Your RPT schedule can have one or more agent locations. The first location started on a computer will use port 1903; subsequent locations will use dynamically generated port numbers, which you can find in the rptport.dat file in each location's deployment directory. (The deployment directory can be obtained from the Performance Schedule > Schedule element details in the RPT workbench on the controller machine. In the screenshot below, the deployment root directory for the respective agent location is C:\temp.)
2. How to capture:
Once the schedule starts, bring up a browser on all of the agent systems and point it to http://<agentname>:<port>/ to get information about the current number of users. The page also shows the state of the engine threads; they should be OK, WORKING, or IDLE.
3. Save the data as an HTML file by using the browser's File > Save As option (or script the captures, as shown in the sketch at the end of this section).
4. Interpretation of the collected data:
The data is displayed in four sections:
Engine Counters (various counters such as CPU, JVM heap, I/O, etc.)
Subsystems (need info)
Runner (need info)
Actions (need info)
You can collect information about the current number of users and the state of the engine threads (the state should be OK, WORKING, or IDLE). Deadlocked threads can be the underlying problem when agents stop mid-run. The current state of all active actions is also displayed. For larger runs, you can focus on one of the users and track any unexpected behavior.
Apart from the Engine Room log (the *.html file), you can always zip the deployment_root directory from the agent machine and share it with the IBM Support team for any troubleshooting discussion via a PMR. This provides analysis data for what happened during an agent run.
Now navigate to the agent deployment user directory on the respective agent machine.
Open the location D:\RPT_AGENT
Look for the deployment root directory.
Open the User directory to find the data.
An alphanumeric directory is created for each run, each with a different timestamp. Note the timestamp at the time of execution to locate the exact execution data directory. You will find numerous file types in the execution data.
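Finally, if you would rather script the periodic Engine Room captures from step 2 than save pages by hand, a small sketch along these lines polls the Engine Room URL at a fixed interval and writes each sample to an HTML file. The agent host, sample count, and interval are assumptions for illustration; check rptport.dat for the real port:

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class EngineRoomCapture {
    public static void main(String[] args) throws Exception {
        // Hypothetical agent host and port -- the first location uses 1903;
        // check rptport.dat in the deployment directory for the others.
        String engineRoomUrl = "http://agenthost:1903/";
        int samples = 5;               // how many snapshots to take
        long intervalMillis = 60_000L; // one minute between snapshots

        for (int i = 1; i <= samples; i++) {
            Path out = Paths.get("engine-room-sample-" + i + ".html");
            try (InputStream in = new URL(engineRoomUrl).openStream()) {
                // Save the current Engine Room page, like the browser's Save As.
                Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
            }
            System.out.println("Saved " + out);
            if (i < samples) {
                Thread.sleep(intervalMillis);
            }
        }
    }
}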
How does the RTC Eclipse client calculate the number of loaded items in the sandbox when users load using load rules?
Scenario: The term "loaded" refers to "share roots"; when using a load rule file, the possible share roots are folders and files.
The Package Explorer shows the share roots that are loaded, not the number of components. Normally, folders are loaded and become share roots (meaning anything inside them is automatically tracked, and changes there will appear as unresolved pending local changes). It is also possible to have files individually loaded (in which case they are technically share roots all by themselves); this usually doesn't happen when using the Load Wizard (where users normally load projects or folders), but it can happen when using load rules, since users are free to pick and choose what they want to load.
The itemLoadRule element in a load rule file identifies a single file, folder, or symbolic link that is to be loaded as a share root. The Package Explorer counts this as 1 toward the number of 'things' loaded.
Here is an example: assume that the folder "H1" and the file ".jazzignore" under comp2 are the items considered share roots; they add up to the count of 2 loaded against comp2.
1. "Loaded" refers to anything that is loaded as a share root; "anything" here is a file, folder, or symbolic link.
2. Any file, folder, or symbolic link pointed to by an itemLoadRule will be loaded as a share root.
3. Children of anything pointed to by a parentLoadRule will also be loaded as share roots.
4. The loaded count is aggregated over the hierarchy.
5. To see what is being loaded as a share root, remove the filter on *.resources in the Package Explorer; each of the share roots will be decorated.
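As a concrete illustration of rule 2, a load rule file producing the count of 2 against comp2 in the example above could look roughly like the sketch below. The itemLoadRule element is as described above, but treat the exact schema details as an assumption and compare against a load rule file generated by your own RTC version:

<?xml version="1.0" encoding="UTF-8"?>
<scm:sourceControlLoadRule version="1" xmlns:scm="http://com.ibm.team.scm">
    <!-- Each itemLoadRule loads one file, folder, or symbolic link as a share root. -->
    <itemLoadRule>
        <component name="comp2" />
        <item repositoryPath="/H1" />
    </itemLoadRule>
    <itemLoadRule>
        <component name="comp2" />
        <item repositoryPath="/.jazzignore" />
    </itemLoadRule>
</scm:sourceControlLoadRule>

Two itemLoadRule elements, two share roots: the Package Explorer would report 2 loaded against comp2.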
You're invited to IBM Cloud Technical University, October 25 - 28, 2016!
A registration to one conference provides full access to two conferences: IBM Digital Experience & IBM Commerce Learning Academy 2016 and IBM Cloud Technical University 2016.
For the fifth time, the IBM Digital Experience & IBM Commerce Learning Academy 2016 and the IBM Cloud Technical University 2016 will be held together to reflect the integration of new topics such as Cloud, Rational, and Tivoli, and much more.
ALMtoolbox, our partner, has released a new video this week introducing a performance monitoring tool for ClearCase & ClearQuest.
For the first time, ALMtoolbox now provides a free Community Edition of their performance monitoring & alerting tool for ClearCase & ClearQuest.
You can quickly get email alerts when something goes wrong with any of your servers or clients. The tool monitors both the application and IT infrastructure layers, so you can resolve issues much faster.
Please take a look at this overview of the social channels that now exist within Cloud Support. Included in the group are the (former) IBM Rational channels. This piece was written by the current Social Program Director of Cloud Support, Kim McCall.
You may be interested in finding the RFT version from many different machines -- for example, as part of an upgrade plan in an environment with a large number of users. You can automate this by having your script check Installation Manager's installed.xml file on the RFT workstation.
The installed.xml file will have this string if the workstation has RFT installed:
IBM® Rational® Functional Tester
The next line after the "IBM® Rational® Functional Tester" line will have the RFT version. Here are some sample lines for version 8.6.0.x of RFT:
Version 8.6.7 (8.6.7.RFTO8607-I20160302_1156)
Version 8.6.3 (8.6.3.RFTO8603-I20150303-1742)
Version 8.6.6 (8.6.6.RFTO8606-I20151126-1840)
Depending on how you want to do your check, you can pick a subset of the version string common to all 8.6 versions.
We recommend verifying that installed.xml includes "IBM® Rational® Functional Tester" before doing the actual version check.
Other RFT versions follow naming conventions similar to RFT 8.6. For example:
IBM® Rational® Functional Tester
Version 8.5.1003 (8.5.1003.RFTO8513-I20140506-1125)
IBM® Rational® Functional Tester
Version 8.3.2 (8.3.2.RFTO8302-I20130323-1322)
We recommend that you check the string formatting in your local installed.xml files to verify that your version of Installation Manager writes these strings as you expect.
The location of the installed.xml file depends on your Installation Manager configuration. If you do not know its location, you can determine it from Installation Manager as follows:
Start Installation Manager.
Click Help > About Installation Manager.
Click the Installation Details button.
The value of cic.appDataLocation is the path where installed.xml is located.
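As a starting point for automating the check across machines, here is a minimal Java sketch that scans installed.xml for the RFT marker string and prints the Version line that follows it. The file path is a hypothetical example; use the cic.appDataLocation value you determined above, and note that the match deliberately avoids the registered-trademark characters to sidestep encoding differences:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class RftVersionCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical path -- use the cic.appDataLocation value from your machine.
        List<String> lines = Files.readAllLines(
                Paths.get("C:\\ProgramData\\IBM\\Installation Manager\\installed.xml"),
                StandardCharsets.UTF_8);

        for (int i = 0; i + 1 < lines.size(); i++) {
            // Match on plain substrings rather than the full marker string,
            // to avoid depending on how the (R) characters are encoded.
            if (lines.get(i).contains("Rational")
                    && lines.get(i).contains("Functional Tester")) {
                // The next line should carry the "Version ..." string.
                System.out.println(lines.get(i + 1).trim());
            }
        }
    }
}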
Users may encounter an error when importing test scripts into Rational MobileFirst Platform Studio 8.7 that were recorded in Rational MobileFirst Platform version 8.6. When the user initiates test playback from Rational Test Workbench, it leads to a NullPointerException:
An internal error occurred during: "Launch Test". java.lang.NullPointerException
This is likely caused by the user not following the recommended order when setting up the environment or installing Rational MobileFirst Platform Studio.
Here is how you can overcome the above error by setting up the environment in the following order:
2.Install Rational MobileFirst Platform Studio
3.Install Rational Test Workbench for Worklight
The user should see Rational MobileFirst Platform Studio and Rational Test Workbench for Worklight installed, as shown in the picture.
In today's world, mobile technologies and application developments are growing bigger and faster.
It is always a challenge to develop, test, and release the mobile app in no time without compromising its quality to meet the market demand.
Having said that, mobile automation tools are expected to support cross-platform app testing, which plays a major role in reducing the tester's time during the test cycle.
IBM Rational Test Workbench supports cross-platform mobile automation testing for iOS and Android, meaning you can record the application under test (AUT) on Android and play it back on iOS, and vice versa.
This significantly reduces the effort and time for the tester.
In such a scenario, you might end up facing synchronization and object identification issues.
To overcome or identify the cause, you need to verify the following things:
1. The AUT should be a hybrid application.
2. The AUT should render with the same layout and property values on both platforms (iOS and Android).
If your AUT renders with different layouts or property values on different platforms, it does not qualify for cross-platform app testing. This is the reason cross-platform testing is not possible with native mobile applications.
Assume that your AUT satisfies the above two conditions and you have recorded a test script on an Android device.
After recording, when you initiate playback on an iOS device, it complains about the synchronization policy as shown in the picture below.
In this scenario you have to set the right synchronization policy. For more details on the synchronization policy, click here.
Set the synchronization policy to None and initiate the playback.
Steps to set the synchronization policy:
1. Open the Test
2. Click on Launch Application
3. Set the Synchronization policy to None
This should then allow you to run your test scripts on both iOS and Android.
While executing test scripts in Microsoft Visual Studio 2010 integration mode with IBM Rational Functional Tester (RFT), you might run into an error or an ObjectNotFound exception.
Based on experience, this could be happening for a couple of reasons:
1. The version of the System.Data file in the customization folder is different.
2. A non-admin user does not have enough privileges on the IBMIMShared folder and the RFT installation folder (C:\Program Files (x86)\IBM\SDP on a 64-bit OS or C:\Program Files\IBM\SDP on a 32-bit OS).
In either case, here is a set of instructions that will potentially resolve this problem and help you get beyond the exception.
1. Navigate to the C:\Windows\assembly folder and check the version of the System.Data file.
2. Navigate to the .NET Framework installation directory C:\Windows\Microsoft.NET\Framework, open the corresponding .NET Framework folder, and copy the System.Data.dll file.
3. Place the DLL file into the RFT customization folder at C:\ProgramData\IBM\RFT\customization.
4. Launch RFT with Admin privileges and execute the script.
Like any customer-centric organization, we want the most effective, speedy solution to our customers' technical support problems. With our emphasis on quality of service and quick resolution, it is essential that our engineers obtain the appropriate information while troubleshooting a problem. Problem solving may seem straightforward; however, every problem is unique and requires us to obtain different sets of information.
This video provides a few tips that will help lead to a quick PMR resolution.
Notice that there's an ibm-rita0.log.lck file as well. This means that the IBM RIT Agent service has the file locked, which usually means there's some content that's buffered and not yet written to the log file. To flush the buffer and get the entire contents, stop the IBM RIT Agent service.
Here's the same directory listing after stopping the agent:
Directory of C:\Windows\System32\config\systemprofile
Restarting the stub will reset the ibm-rita0.log file to 0. So let's restart the stub using the default values, send an event to the stub, stop the IBM RIT Agent service, and see what's in the log file.
Sending the request
The stub was hit
The service was stopped.
Here's the ibm-rita0.log file:
As you can see, there's no useful debug information on how the stub processed the request.
Starting the stub with different logging levels
The other two logging levels are Normal and Debug. The level is selected in the Configuration section of the Start Stub dialog, as shown.
Following the same steps as above, here's the content of the ibm-rita0.log file when Normal is chosen as the logging level:
As you can see, we now have a better idea of the stub's processing steps.
Let's run the same steps again, this time using the Debug level for logging.
Note that this is no different from the Normal setting. I'm unsure of the reason and am working with development to determine how the Debug setting should behave. Regardless, both settings let the user see what steps were executed when a stub ran in RTCP.
Nevertheless, this demonstrates the technique of choosing the logging level for stubs and where to find the logs for a stub that's running in RTCP.
For a test to run correctly, a request that is sent to a server might need to use a value that was returned by a previous request. By ensuring that this data is correlated accurately, you can produce better performance tests.
Before you go ahead with the approach of correlating data in RPT, let's understand what data correlation means.
Interactions with an application are typically related to each other. For example, consider the following interactions with a web-based application, in which each request depends on information returned from a previous response:
A payroll clerk types the web address of an application, which sends a login prompt. When the clerk logs in, the web server returns a page indicating that the login succeeded, along with a unique session ID for the web browser that the clerk is using.
The clerk clicks a link on the returned page, which requests that the web server open the page for searching the employee database. The web browser includes the session ID when sending the request. Based on the session ID, the web server knows that the request comes from someone who is already logged on, and so opens the search form for the employee database. The clerk then searches for a specific employee. The web server returns a photograph of that employee and the employee's unique ID.
The clerk clicks a link that requests the web server to return the payroll record for the employee. With this request, the web browser sends two IDs:
The session ID, so that the web server knows that the request comes from someone who is logged on
The employee ID, so that the web server can locate and return the correct information
In this example, request 2 depends on request 1, and request 3 depends on requests 1 and 2.
If you record these interactions in a test, before running the test with multiple users, you would vary the test data. For example, you would replace the user name and password values, the employee name search values, or both, with values that datapools contain. When you run the test, each virtual user returns a different employee payroll record, based on the contents of the datapools.
In a generated test, where data in a request depends on data that is contained in the response to a previous request, the request data is substituted from the response data on which it depends. The term for this internal linking of response and request data is data correlation. When you run a test with multiple users and varied data, data correlation is required to ensure that the test runs correctly.
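To see the same dependency in script form, here is a small Java sketch of the payroll example: the first request logs in and yields a session cookie, and the second request only succeeds because it carries that harvested value. The URLs and parameters are invented for illustration; RPT performs this linking for you in generated tests:

import java.net.HttpURLConnection;
import java.net.URL;

public class CorrelationSketch {
    public static void main(String[] args) throws Exception {
        // Request 1: log in (invented URL). The response carries a session cookie.
        URL loginUrl = new URL("http://payroll.example.com/login?user=clerk&password=secret");
        HttpURLConnection login = (HttpURLConnection) loginUrl.openConnection();
        String sessionCookie = login.getHeaderField("Set-Cookie"); // dynamic value
        login.disconnect();
        if (sessionCookie == null) {
            throw new IllegalStateException("Login response returned no session cookie");
        }

        // Request 2: the search succeeds only because it carries the session ID
        // harvested from request 1 -- this linking is the data correlation.
        URL searchUrl = new URL("http://payroll.example.com/search?employee=jsmith");
        HttpURLConnection search = (HttpURLConnection) searchUrl.openConnection();
        search.setRequestProperty("Cookie", sessionCookie);
        System.out.println("Search response code: " + search.getResponseCode());
        search.disconnect();
    }
}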
The above scenario is just an example explaining the concept of data correlation. So what sort of approach would you take for SAP application testing? Let's look at another example.
Say you have captured a document number from the SAP GUI screen and would like to pass it into the next several transactions, as shown in the screenshots below.
For the above use case, you will have to create a "Get" on that element (the Vendor Number in this case):
1) Locate the element in the screenshot (the Vendor Number).
2) Right-click on it and select Create Element.
NOTE: Use SAP Get as mentioned above. A new element is added, as shown below.
3) Now highlight the required value, right-click, and select Create Reference.
4) You will get a reference that you can use to substitute the required values.
More women are contributing to the field of technology and IBM and Girls Who Code are helping to make that happen. Please continue to expose girls to STEM fields. Below, find information for summer workshops for girls! LA, SF, Austin, and NY get ready! Find the details below and share!
There are several questions that arise when it comes to identifying performance bottlenecks and generating a refined performance report.
When you run any schedule in RPT, it first has a run status of "Ramping" and later goes to the "Running" state. Once the run completes, you get combined reports covering both the ramp-up and running (steady) states.
So, you might ask: is it possible to get separate reports for the ramp-up and steady (running) states when you execute a schedule? Many customers have this question. If not, is there a setting in RPT to produce such separate reports for the ramp-up state and the running state?
In RPT "Change Rate" is synonymous with "Ramp Up" or "Ramp Down". So, if you do not specify a change rate there is no ramp. If can also use the word ramp when describing your entire schedule, using "ramping up"
to refer to multiple successive stages where the user population is increasing or decreasing.
Response times are reported all the time. View response times for stage duration time range, if you do not want to see response times for the other periods of a single stage, which includes ramp up, ramp down,
or settle. The purpose of the change rate for a single stage is to provide support for "ramp up" or "Ramp Down". Change rate in combination with steady state duration together provide the opportunity to get past any caching / calming / indexing periods prior to gathering response times during the stage duration.
So, the next question that generally comes up is:
Does the response time include, or vary with, the user ramp-up duration or with the think times that we specify?
The answer: if you right-click the default report and select "Choose Time Range", you can get report information for any of the user levels. The response times shown for the chosen time range include only the response time for the stage duration, not the time spent ramping or in lag.
The page response time does not include think time.
Response times are reported all the time, including during ramp up.
Response times can be affected by the introduction of additional users during ramp up.
Steady-state lag time was introduced for exactly this reason: to allow participating systems to calm down and reach a steady state after users have been added during ramp up. The time range for a particular stage (population of users) starts after the ramp up and after the steady-state lag.
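To make the timing concrete with hypothetical numbers: in a stage that adds 100 users at a change rate of 2 users per second, with a 2-minute steady-state lag and a 10-minute stage duration, the ramp takes 50 seconds, the systems then get 2 minutes to settle, and only the following 10 minutes contribute to the response times reported for that stage's time range.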