This document describes some basic techniques for finding performance problems in tests or stubs constructed in IBM Rational Integration Tester (RIT), including stubs running in RIT-Agent as part of IBM Rational Test Virtualization Server.
This document will be helpful to users who have previously constructed a Custom Function for RIT and wish to repeat the process without working through all the instructions in the manual again. It may also assist users reviewing the process when troubleshooting.
This article explains how to reduce the number of component flows that Rational Team Concert displays in the Pending Changes window.
By default, all components in a Repository workspace (RWS) use the current flow target. You can use the Repository workspace editor to add flow targets or change their scope so that not all components use them. Refer to Flow targets for further details.
For example, you can configure flow targets so that only some of the components in a Repository workspace (RWS) are displayed, or decide that a specific component must be displayed in the Pending Changes window.
Note: You can use the Flow Only Components Checked Below option in RWS\Stream Flow Target to make the Pending Changes view more manageable. This helps most when a stream or workspace has ten or more components, not all of which are needed in the Pending Changes window.
Open a Repository Workspace (RWS) >> Flow Target >> Click Edit
Select Flow Only Components Checked Below
Click OK and save the Repository Workspace (RWS)
Open the Pending Changes window. Only the selected components appear there.
You can use this approach to avoid displaying unwanted components in the Pending Changes window.
There are various performance testing tools in the market, and it is advisable for a tester to compare them in terms of product functionality.
Under LoadRunner, you can see the following options made available:
Run Vusers as a process
Run Vusers as a thread
So questions arise as to whether RPT provides similar functionality, and how virtual user execution is handled in RPT.
What happens if you specify 100 virtual users in the RPT schedule? Say you have 10 user groups, each with a unique script, and 10 users per user group, for a total of 100.
Are they treated as threads, such that 100 threads run in the application during the load test, or as 100 unique processes?
RPT does not require a thread per test, nor does it require a thread per virtual user. One thread is used while an individual action is being executed.
One example of an action is an HTTP request. One thread is required to open a connection, if necessary, and write the request. Reading the response from the server is non-blocking. When data is available one thread is required to read the data and perform any processing required. Another example of an action is Custom Code. During the execution of the Custom Code exec() method one thread is required.
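To make the Custom Code case concrete, here is a minimal sketch of an RPT Custom Code class (assuming the standard ICustomCode2 interface that RPT custom code uses; the class name and logged message are illustrative). The engine borrows a worker thread only while exec() runs, then returns it to the shared pool:

package customcode;

import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

// Illustrative Custom Code class: a worker thread is occupied only for
// the duration of exec() and is then available to other virtual users.
public class ThreadDemo implements ICustomCode2 {
    public String exec(ITestExecutionServices tes, String[] args) {
        // Log which engine thread is servicing this action.
        tes.getTestLogManager().reportMessage(
                "exec() running on thread: " + Thread.currentThread().getName());
        return null; // nothing passed back to the test
    }
}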
If you want to increase the number of threads, you need to create a location file and run your user group on that location. Open the location file in the Test Navigator and go to the General Properties tab. On this tab, set RPT_VMARGS on the left-hand side and the -D variables on the right-hand side. You will probably want a maximum thread count equal to the total user load, depending on how long you are sleeping and how long your loop takes to execute.
In general, for the HTTP protocol, the RPT engine creates threads as needed, up to a maximum of 500. Additional threads are created when an action (for example, sending a request or reading a response) must execute and no free thread is available.
Unless there is a specific problem, it is highly recommended to let RPT control the creation and deletion of threads.
So, looking at it specifically from the RPT perspective, the agent starts execution with 10 worker threads.
These 10 worker threads handle the actions performed by the virtual users. If the agent detects that 10 worker threads are not sufficient for the run, it spawns more threads at runtime. The threads are shared by the virtual users, so there is no one-to-one mapping between threads and virtual users.
However, if you want to control the threads, certain system properties can influence thread behavior. These apply to the execution engine (that is, the location or agent), so a general property must be created for each location to set them: create a property called RPT_VMARGS and set it accordingly.
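As an illustration of such a location-level setting (the -D property name below is hypothetical and shown only to illustrate the format; consult the RPT documentation for the exact system properties supported by your release):

Property Name:  RPT_VMARGS
Property Value: -DmaxWorkerThreads=500   (hypothetical property name)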
The external help desk is the way Service Desk integrates with Rational Team Concert. When Service Desk users publish an incident to this external help desk, the integration creates a work item in a Rational Team Concert project area.
Create external helpdesk:
Complete the Incident Management with IBM Rational ClearQuest -> Register IBM Rational ClearQuest as External Service Desk topic using the SPRO transaction.
Execute the SPRO topic.
Look at the list of External Service Desks. Your new service desk should be listed.
Find the Service Desk ID column (far right). Look at the value.
Open the Rational Connector and look at the System Guid value in the footer. Make sure it matches the Service Desk ID.
Note: ClearQuest was the first Rational Configuration Management tool supported by the Rational Connector. When support for Rational Team Concert was added, SAP did not update the SPRO documentation. This will be corrected in a future release of Solution Manager.
The logical port specifies the server on which IBM Rational SAP Connector is installed. It is used to transfer data to the external Defect Management Tool. The Web AS HTTP or HTTPS port is used, depending on the information in the URL.
Each logical port is attached to a consumer proxy. These ports are used by Solution Manager and Service Desk when they need to call the Rational Connector.
Steps to set up the logical ports:
Determine the logical port names corresponding to the Consumer proxy:
where:
ExternalTestToolServer is the system running the Rational Connector
ExternalTestToolPort is the HTTP port used by the Rational Connector (not HTTPS); the default is 9080
ConsumerProxyName is the name of the consumer proxy
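As an illustrative shape only (the exact path is given in the integration guide and is not reproduced here), the WSDL URL combines these values along the lines of:

http://ExternalTestToolServer:ExternalTestToolPort/...?wsdl   (illustrative; substitute your server, port, and consumer proxy path)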
Verify the WSDL URL. If the URL is correct, your browser will show an XML document.
If you make a mistake with one of the WSDL URLs, Solution Manager or Service Desk will not be able to access the Rational Connector when it needs to. The failure will not surface until someone pushes a blueprint or tries to publish an incident to the external service desk for RTC.
Create the logical ports.
Verify the logical ports for the Test Data API, Service Desk API, and Adapter API.
If you are looking for help in getting started with the integrations supported in the CLM and SSE solutions, you may want to check this article: Integration: Where to start?
The integration topics in the article (and in related sections) are intended to provide information about integrations supported in the Collaborative Lifecycle Management and Systems and Software Engineering solutions in terms of technology, maturity, and ability to support some high-priority usage scenarios. The focus is initially on the ALM Core solution set of products, ensuring that the integrations are accurately and completely documented and validated for the scenarios identified. The focus will grow over time to include integrations with other IBM tools, and with third-party tools that customers have identified as high priority.
It starts with a description of ALM Core and continues on from there.
Before the Rational Connector can exchange data with Solution Manager and Service Desk, create an initial identity for web services calls. This user is completely internal to the SAP Solution Manager configuration.
Create Technical User: Complete the External Test Management with IBM Rational Quality Manager > Create Technical User for Alias topic using the SPRO transaction.
Assign specific roles:
SAP_SUPPDESK_ADMIN - Role for Service Desk Administration
SAP_SUPPDESK_INTERFACE - Third Party Interface Service Desk
SAP_TMT_INTERFACE - Third Party Interface: External Test Management Tool
SAP_TMT_WSDL_ACCESS - TMT: WSDL Access for user TMTALIAS
Activate Service and Create Alias: Complete the External Test Management with IBM Rational Quality Manager > Activate Service and Create Alias topic using the SPRO transaction.
SAML is the authentication mechanism that is required for the exchange of data between SAP and the Rational Connector. It’s like two people exchanging keys to each other’s houses. Without the address of the other person’s house, the key can’t be used.
Steps to configure the Server Side for SAML authentication:
Log on to the IBM Rational Connector.
Click on Manage SAML Certificates. Update SAML issuer name.
Export the certificate from the Rational Connector. This generates a certificate file named ConnectorSAML.crt.
Import the certificate into SAP Solution Manager.
Be sure to uncheck SAML 2.0 for the provider you just added. It should only support SAML 1.1.
Export the SAML certificate from Solution Manager to generate a Trust certificate.
Example: Trust ManagerSAML.crt
Import the SAP Trust Certificate into the Rational Connector.
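If you want to sanity-check a certificate file before importing it on either side, the standard JDK keytool utility can print its contents (the file name here is the one generated in the export step above):

keytool -printcert -file ConnectorSAML.crt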
Continuing on the topic of performance testing, Vaughn Rokosz takes a look at some of the common reasons performance tests can fail, and suggests ways of tuning your servers to avoid the common issues.
Take a look at his latest article for the details.
Looking for information about how to build performance simulations?
Building a good simulation of a user population requires expertise at many levels, including:
An understanding of what the user population is likely to be doing, and how often.
Experience with the performance simulation tool of choice.
Ability to reverse-engineer the protocols sent to the Jazz server, in order to debug automation scripts.
In the following article, Vaughn Rokosz, a technical lead for the CLM performance team, shares some of his experiences with building performance simulations of the Jazz products. He walks through a simple example that demonstrates how to build a simulation of a user population creating work items in Rational Team Concert. He also shares some of the assets he used to make developing performance simulations simpler, attaching the Rational Performance Tester project that he used when working through the example.
A new article “Rational Team Concert essentials: A developer's perspective: Part 2. Delivering Work Contributions” has been published. It is a continuation of the conversation started in the Part 1 article published earlier this year.
This article series describes many operations a developer is responsible for, from the time they join the project to the time when they are ready to deliver their features and fixes. This series is meant to be a helpful collection of "developer's cheat sheets."
A developer's perspective: Part 1. Joining a new team project. Explore the main concepts behind Rational Team Concert's change management mechanisms: joining a team project, creating a repository workspace, and loading your project components and artifacts. In Part 1 of this series, you will learn, step by step, how to join a team project, create a workspace to contribute to your project, and load existing artifacts from your team's project components.
August 13, 2015, 12:00 PM to 1:00 PM (Eastern Time)
Chat with us live during the webcast! Join the CrowdChat to interact and network with fellow webcast attendees!
In this session we will introduce IBM Static Analyzer (now in beta) and show how it greatly simplifies static analysis (or white box) security scanning. We will discuss and demonstrate how it can easily integrate into the development lifecycle, as well as how it uses advanced analytics to produce targeted/actionable results to enable you to remediate security vulnerabilities.
David Marshak, Senior Product Manager Application Security, IBM Security
David Marshak focuses on the business and product strategy for IBM’s Application Security portfolio, including the AppScan product line, cloud offerings, and strategic partnerships.
Jason Todd, Lead Developer AppScan, IBM Security
Jason Todd is a lead developer for the IBM Security AppScan Source product. He has worked on this product since 2011 and heads up the team that is delivering Static Analyzer in the Cloud. He also continues to provide technical guidance and support for the IBM Security AppScan Source 9.0.2 product.
Kris Duer, Lead Analytics Developer AppScan, IBM Security
Kris Duer is a Java programmer with a specialty in analytics. He has worked in the application security field for the last five years on the static analysis tool IBM Security AppScan Source. His particular specialty deals with applying analytics to refine the security result set down to an actionable list.
*** Dial-in codes will be sent a few minutes before the webcast and posted in the online meeting. Please check your email before 12:00 PM ET (the sender is firstname.lastname@example.org).
By registering for this webcast you are allowing the GRUC to provide your information to IBM and/or webcast sponsors for direct contact regarding IBM products and promotions. You will also receive a complimentary membership to the Global Rational User Community.
If you are looking for a simple tool to perform GET operation(s) on IBM Rational Quality Manager (RQM), then you are looking at the right post.
RQM artifacts are exposed in XML format through RQM's Reportable REST (REpresentational State Transfer) APIs. If you need to perform a quick GET (retrieve the XML) of RQM artifacts without having to worry about the REST syntax and other nitty-gritty details, then the "RQM GET" utility is a good option. This utility is built on top of the RQMUrlUtility and provides an interactive way to download artifact XML using the artifact web IDs.
Here is a sample command-line trace of the RQM GET utility being used to download an artifact from an RQM server.
C:\>java -jar rqmget.jar -user clmadmin -password clmadmin -url https://clm.net:9443/qm
<<<<<----- [ RQM GET Utility ] ---->>>>>
RQMGET: Connecting to RQM server at : https://clm.net:9443
RQMGET: Exploring RQM project's at : https://clm.net:9443/qm/
RQMGET: Found 7 project areas....
ID | Project Area Alias
1 | RQMprojectx1
2 | TestRQM_x2+%28Quality+Management%29
3 | TestQM
4 | TestRQMproject
5 | TestRQM_team1+%28Quality+Management%29
6 | CLMintegrationTest
7 | testCLM+%28Quality+Management%29
Enter index of project you would like to explore:
Enter index of artifact type to explore in "TestRQM_x2+%28Quality+Management%29"
1.Test Plan --> [4 Found]
2.Test Suite --> [3 Found]
3.Test Case --> [31 Found]
4.Test Script --> [27 Found]
5.Test Case Execution Record (TCER) --> [100 Found]
6.Test Suite Execution Record (TSER) --> [4 Found]
7.Test Case Result (TCR) --> [100 Found]
8.Test Suite Result (TSR) --> [3 Found]
9.Test Environment (TE) --> [14 Found]
10.Test Phase --> [0 Found]
11.Keyword --> [2 Found]
To download "Test Script" XML --> Enter 
Enter integet other than 1 to exit
Enter the id(s) of "Test Script" you want to download [eg: 1 4 7]?
RQMGET: Test Script #13 from "TestRQM_x2+%28Quality+Management%29" is at: C:\\Test Script__13.xml
RQMGET: Disconnecting from the IBM Rational Quality Manager server.
You can see the 'Additional delay' under the Client processing delay section for each request, on the Advanced tab.
What happens if you set this delay to zero?
Does the playback show much difference in overall response time?
Generally, the time taken to load a page outside IBM Rational Performance Tester (RPT) cannot be mapped directly to the time taken to load the same page when invoked via the RPT recorder. There are a lot of parameters that RPT, as a load performance testing tool, accounts for (such as traffic and connection information).
For example: a page consists of several connection streams executing page elements in parallel, where a page element is one part of the page, such as an image or a client-side script. Page element response time is the time from the first byte sent to the last byte received. The response time for a page connection stream is the total of all of its page element response times, plus any additional time spent establishing connections. Page response time is the maximum response time of all page connection streams. Processing overhead for data correlation, action scheduling delays under stress, Custom Code processing, and HTTP processing are excluded from page response time.

Each page element also has a delay associated with it. This delay is the time that was observed while recording the page. If one page element has a direct dependence on the response from another page element, RPT honors this dependence. The additional delay may or may not be significant, but RPT includes it by default in an attempt to behave as much like the browser as possible. If it is not significant, you can reduce or eliminate it.
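As a minimal worked example with made-up numbers: suppose a page runs two connection streams in parallel. Stream A spends 100 ms establishing a connection and 300 ms on its page elements, for a stream total of 400 ms; stream B reuses an existing connection and spends 550 ms on its elements. The page response time is the maximum of the two streams, 550 ms, not their sum (950 ms).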
Also, RPT processes these client-side delays in parallel or sequentially, depending on how the application server returned them. If you modify or disable the client processing delay value, it may also disable the immediate transaction in Rational Performance Tester. Practically speaking, when you disable a request, you potentially invalidate some delays, because in theory the disabled request could have been the basis for a delay. Therefore, RPT automatically recalculates the delays on the page when you disable a request. That is, if a later request used the disabled one as a base, the later request's delay is adjusted.
Now, let's come to your question about why there is such a discrepancy between the response times in the browser and in RPT...
Often, inaccurate response times are the result of recording your actions too quickly, so that more than one page ends up combined together. When recording, be mindful and pause between mouse clicks: perform an action and wait 5 seconds before continuing to the next action. Also pay attention to where your mouse is, and make sure there are no "hover GIFs" that you accidentally cause to be sent to the server while recording.
To see whether you have these problems, there are two things you can look at.
1. Open the test and click the name of the test under Test Contents in the tree. Click Select->Request. On the right-hand side you will see a table of all requests in your test. Look at the Delay column for really large delay values. If you have these, they are usually caused by recording too quickly, and they act as an "embedded" think time on the page. The client delay is supposed to simulate how long your client took to process the data from the server before sending the next piece of data; when you record too quickly, these values can be skewed. You can also go to a specific page that is taking too long and look at its specific requests, either with the same Select button or by going to the Advanced tab of each request and looking at its delay.
2. If you find that the delays are your problem, you can split the page where the delays are long, since it was probably supposed to be two pages. This moves the delay into the think time of the page, which is more accurate. You can also go to Window->Preferences->Test->Test Generation->HTTP Test Generation and change the option "Create new page if delay is greater than 5000ms"; make it smaller and see if more appropriate pages are generated. Your other option is to re-record, taking your time between pages and counting to 5 seconds before performing the next action.
Also remember that during recording, RPT captures each connection that was used and will send the requests on the same connection they were originally sent on. If two connections were sending requests at record time, requests will be sent on two connections at execution time. If you see a really long client delay, look at that request in detail to determine whether it was sent by a user gesture or as a result of the primary request.
The idle standby configuration enables recovery from failover, helping ensure minimal impact on business operations during planned or unplanned server outages; an idle standby deployment can also be used for crash recovery.
Important: Jazz Team Server applications allow only a single server to be active against a repository at any one time; therefore, the backup (or idle) server is configured to never run asynchronous (background) tasks. If a switch is made to the backup server, you must plan to bring the primary server back up as quickly as possible.
Index corruption can be caused by network, database, or application server issues, to name a few. Below are the steps to fix an index corruption issue on the active server using the idle standby server.
Suppose index corruption occurred on the active server and you were unable to fix it by running the repotools command there. You can still fix the index issue by running the repotools command on the idle standby server.
Fixing the index issue on the active RTC server using the idle standby server.
Below are the steps to fix the index problem using the idle standby server. The standby server is on a different machine (and points to a different jazz_home than the active one).
1. Suspend the Indices on the Active server
2. Shutdown the WAS on the Active server
3. Startup the IDLE standby server
4. Ensure it is working fine on the standby server
5. Shutdown Standby server
6. Perform the full reindex
7. Startup the IDLE standby server
8. Ensure all diagnostics are fine for both JTS and RTC from the GUI
9. Suspend the Indices on the Standby Server
10. Shutdown Standby Server (WAS)
11. Test: rename the /conf/ccm folder on the active server to /conf/ccmoriginal
12. Test: copy /conf/ccm from the IDLE Standby to the location of the Active Server
13. Test: run the repotools -reindex command .... still the same error received from repotools
14. Delete the /conf/ccm (copied from the idle standby server) on the active server
15. Test: rename /conf/ccmoriginal back to /conf/ccm
16. Startup Active Server (WAS)
17. Check the diagnostics from the GUI again; all fine.
Note: The repotools command can be run on the idle standby machine to fix an index issue when it does not work on the active server machine.
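As a sketch of the reindex step (the installation path is illustrative, and the exact repotools options vary by version and application, so confirm them in the product documentation; use repotools-jts, repotools-ccm, or repotools-qm as appropriate):

cd /opt/IBM/JazzTeamServer/server
./repotools-ccm.sh -reindex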
There are many instances where IBM Rational Performance Tester (RPT) generated reports indicate higher page response times than expected. You tend to compare these metrics against the usual times observed when accessing the application under test outside RPT. It can also be the case that even if you add up all the page elements listed in the report, the total is much less than the overall response time for the transaction.
So let's see how RPT as a load generation tool computes response time metrics.
Generally, the response time is measured from the time the first byte leaves the client machine until the time the last byte is returned.
The page is considered to begin when the first action (typically a request) associated with the page starts execution. If this request needs to make a connection, the page response time includes the connection time. The page ends when the last action (usually a response) of the page completes. RPT used to adjust the page response time to remove the time spent in client delays. In RPT 8.2 this changed, after it was determined that leaving the client delays in the page response time was more representative of real-world response time. Thus, in RPT 8.2 and later, the page response time does include client delays.
However, the Think Time value set in the schedule is not included in the page response times displayed in the Performance report. It should be noted that there are potential "delays" in the script that can increase page response times. Think times are associated with pages and are intended to represent human pauses during recording; these do not impact page response times. For individual page elements (requests), there may also be delays intended to represent client (browser) delays (processing time, for example). These are reflected in page response times.
Questions may arise, such as: how does RPT calculate the page-level response time as compared to the request-level response time for a given page?
If the first request used a new TCP connection, RPT calculates the difference between the timestamp when the connection for the first request in the page was made and the timestamp when the last byte was received for the last request in the page. If an existing connection was reused, the timestamp when the first request was sent is used instead. All these timestamps are available in the test log if full logging is on. If an existing TCP connection is reused, the "Time to Connect" value in the test log will be 0.
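For example, with illustrative timestamps: if the connection for the page's first request is opened at 10:00:00.000 and the last byte of the last response arrives at 10:00:00.750, the page response time is 750 ms, including the connect time. If the connection was reused ("Time to Connect" is 0), timing starts instead at the timestamp when the first request was sent.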
The Rational Team Concert server is expected to have common user base management. For Rational Team Concert to correctly perform process enforcement for Git operations, the identity of the Git user must be known to Rational Team Concert. Hence the need for a common user base across Rational Team Concert and the Git server (Apache HTTP Server), even via different LDAP arrangements.
We can still get the integration done by using a common user name with different user bases across Rational Team Concert and the Git server, via different LDAP arrangements.
Example: Git is configured with Apache DS LDAP, and RTC is configured with a different LDAP registry.
Note: Use a common user ID for both Git and RTC in the different registries (the passwords can differ), and grant all the necessary permissions.
Below are checkpoints for verifying the integrations:
1) Verify the Git login, just to ensure that logging in to Git works
$ git remote show origin
2) Verify the Git repo configuration
Update the respective "repokey" and "repourl" information
3) Verify the Git repo hook configuration
Update the respective "pre-receive" and "post-receive" hook information; a quick check is shown below
Note: The above configuration gives a clear understanding of the RTC-Git integration using a common user name with different user bases across Rational Team Concert and the Git server via different LDAP arrangements.
After upgrading your CLM application, you may come across the below error while trying to access any project area in Rational Quality Manager.
The error occurs when QM is not deployed correctly, which can be the result of an incomplete or improper upgrade process, or of the WebSphere cache not being cleaned before the 5.0.2 war file was deployed.
Checking the logs reveals the below error:
2015-07-14 11:16:53,549 [ WebContainer : 2] WARN ComponentVersionMismatch - CRJAZ1041I The component is installed in the database but is not present in the server: com.ibm.rqm.reporting
2015-07-14 11:16:55,077 [ WebContainer : 2] ERROR ompatibility.internal.JtsConfigurationStateService - CRJAZ2679E The JTS version could not be determined because the JTS rootservices document at "https://<FQDN>/jts/rootservices" could not be fetched or does not have an about services URI.
You may follow the below steps to redeploy QM:
1. Undeploy QM.war
2. Clean the WebSphere Cache by following these steps:
a) Stop the WebSphere service
b) Delete the files from the below locations (typically the WebSphere profile's temp and wstemp directories; verify the exact paths for your environment):
3. Deploy the new QM.war file and restart the WebSphere service
4. Redo the group mapping in WebSphere by following the below steps:
Map security roles to a user or repository group:
a) Go to Applications > Application Types > WebSphere enterprise applications.
b) Click the jts_war application, and open it for editing.
c) In the Detail properties section, click Security role to user/group mapping.
d) Select a specific repository group, such as JazzAdmins or JazzUsers, and click Map groups. These repository groups are associated with every Jazz implementation and must be mapped to a particular group that contains the authorized users. If you are using LDAP, these groups must be set up on the LDAP server prior to completing this mapping. If you are mapping these repository groups to individual users, select the repository group and click Map Users.
e) Enter a search string to return your group names from the LDAP server. Click Search to run the query.
f) From the list of available groups that is returned, select the particular group and move it to the Selected column.
g) Click OK to map the LDAP groups to the Jazz repository groups.
h) Map the appropriate LDAP group for all Jazz repository groups.
In IBM Rational Reporting for Development Intelligence (RRDI) or Rational Insight, custom enumeration attributes and their values are not stored in the same tables, so additional joined queries are needed to access the values in your report.
To create an enumeration report, follow the steps below.
1) Launch Report Studio, select Create new report, select the Blank template from the package, and move to Query Explorer.
2) Create three queries: one each for work items, enumerations, and enumeration literals.
2.a) To create the work item query, use "Request Area -> Request" from the ODS and choose the fields that you want to display. Make sure you add the field "Request ID".
2.b) To create the enumeration query, use "Request Area -> Request Enumeration" from the ODS and choose the fields. Make sure you select the field "External ID".
2.c) To create the enumeration literals query, use "Request Area -> Request String Extension" from the ODS and choose the fields. Make sure you select the field "Request ID".
2.d) To filter against a specific enumeration, add a filter on the Name field in the enumeration literals query, using the value of the enumeration ID.
3) Now, from the Query Toolbox, use the Join option to join the queries created above.
3.a) Create a join query and name it "Workitem_Enumeration literals". Join the work item query (2.a) and the enumeration literals query (2.c).
Click the Join button to link the common key: select "Request ID" from both the work item and enumeration literals queries.
Select the fields from both queries, work item (2.a) and enumeration literals (2.c), and place them in the join output query.
3.b) Create a join query and name it "Enumeration". Join the "Workitem_Enumeration literals" query (3.a) and the enumeration query (2.b).
Click the Join button to link the common key: select "Value" from "Workitem_Enumeration literals" and "External ID" from the enumeration query.
Select the fields from both queries, "Workitem_Enumeration literals" (3.a) and enumeration (2.b), and place them in the join output query.
4) Now insert the columns of query 3.b into the report and run the report.
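To summarize the join chain built above (a plain-text sketch; field names as used in the steps):

Work item (2.a) --[Request ID = Request ID]--> Enumeration literals (2.c)  =>  "Workitem_Enumeration literals" (3.a)
"Workitem_Enumeration literals" (3.a) --[Value = External ID]--> Enumeration (2.b)  =>  final query (3.b)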
To the faithful followers of this blog: we promise to continue to provide you with meaningful content and posts that help you be successful. We want to encourage open collaboration with our followers and appreciate new ideas about topics you want to know more about. Find out a little bit more about us below!
I joined Rational Support back in 2000, hold a BA in Information Systems, and have over 35 years of experience in IT (for those who remember programming on punch cards). I've held various positions as a Technical Support Engineer and a Knowledge Engineer, and most recently as a Social Business Analyst, where I am most excited to be delivering support messages through this blog.
About Naomi:
I joined Rational Support in 2006 as a Software Engineer. I have nearly 15 years of experience in IT and hold a Computer Science degree. In August, I will begin pursuing an MBA. I am currently a Social Business Analyst for a subset of products within IBM.
This article discusses how you can identify the "Actual Start and End Dates" in Rational Quality Manager.
There will be instances where a testing effort is delayed and one needs to capture the actual date it started, as opposed to the planned date on which it was supposed to start.
The planned start and end dates are inherited from the dates in the timeline, as are the actual dates that are shown in brackets next to the iteration name.
Initially, the planned dates and actual dates are identical. To update the actual dates, you will need to use the timeline editor. When you make updates in the timeline editor, the new dates are reflected in the iteration name. This does not change the planned dates. To change the planned dates, you need to make that change in the test schedule and not on the timeline editor.
This will not be labeled with the exact word 'Actual'. However, you may reference this date and compare it with the planned dates.
The following are the steps you need to perform to edit the actual dates:
1. Navigate to 'Test Plan > Test Schedule'
2. Select an iteration and click the Edit icon
3. Edit the dates under Planned Start Date and enter a description as required
Alternatively, you can define custom attributes in the test plan, where you manually enter the Proposed/Actual Start and End dates.
However, with this approach you cannot capture the actual dates for each iteration; it gives you a way to define custom attributes and compare the dates manually.
When defining the custom attributes, you may mark the 'Planned/Proposed Start and End Dates' as 'Required' so that the test plan cannot be saved without specifying the planned dates.
While creating the test plan, this should look like the example below.
You can use the "Browse Test Plans" and compare the 'Actual Start / End Dates' Vs 'Planned / Proposed Start / END Dates".
Rational Performance Tester supports the TN3270 protocol. However, the TN5250 protocol is still undergoing testing.
Recording an application that uses the TN5250 protocol is possible using the socket protocol, as no specific protocol recorder is provided for TN5250. Identifying and debugging issues that arise during a re-run of the recorded test is difficult with the socket protocol. Although using the socket protocol is not advisable, there is currently no other option for using Rational Performance Tester with applications that use the TN5250 protocol.
Correlation of cookies is not mandatory in RPT. During playback, some values recorded during scripting are sent to the server to be processed, but these values are usually dynamic in nature and do not form part of the page content returned by the server. Cookies are usually cached for reuse by the page, but they do not yield the correct response time for the page content; hence the test is sometimes run with the cache disabled. To summarize: no, it is not necessary to correlate cookies in RPT, as they do not form the basic contents of the page.
It is a somewhat bittersweet day for me as I hand off ownership of this blog to a new team of colleagues who will take this space and bring it to even higher levels of value. I'm letting go of this blog as part of my transition into the IBM IoT support organization, where I am working on social business strategy and logistics, as I did for Rational Support all these years.
As you may have already noted from the About section on the right side here, I am leaving this blog in the very capable hands of Denise McKinnon and Naomi Guerrero. I couldn't be more pleased to leave this institution in their hands, much as Kelly Smith did for me a few years back. Rest assured, you will get the same, if not better, information and value from this blog moving forward.
IBM IoT Support is a team of IBMers who are now part of the new IBM Internet of Things organization, supporting the tools that makers like you need to build components and connected devices. IBM IoT Support is focused on helping you, the makers, with your product questions by providing content related to the various products covered by our new division.
Through our focused support of asset management and continuous engineering tools, we are here to provide you with the best support in the industry: to help you be successful with the applications and components, and to ensure your work on connected devices in the Internet of Things brings you the right value.
The products we support here include:
IBM Maximo family
IBM Tririga family
IBM Rational DOORS family
IBM Rational Rhapsody family
IBM Rational Requirements Composer
IBM Rational Engineering Lifecycle Management
There's no change in the way you will obtain support for the products you already own. The only change you'll likely see is the addition of a few new social channels, like our blog, our new Twitter account, and our new YouTube channel, to help get you the right content at the right time. Our technotes can all be found in their same locations per product, and the process for contacting support to open a Problem Management Request (PMR) remains the same as well. We hope you'll follow us in our new spaces!