Capturing and filtering additional logs during script execution in IBM Rational Functional Tester

When using IBM® Rational® Functional Tester with Eclipse-based applications, the failure information provided in Problem view snapshots is quite limited for effective problem determination and defect analysis. This article explains how you can capture the details in the problem view as well as the error log, filter this information, and customize it effectively to improve problem determination and defect identification. The references to workspace logs can be extended to any system under test that generates a log file.

Manish Aneja, Staff Software Engineer, India Software Labs, IBM

Manish Aneja is a staff software engineer. He works as test lead for the IBM Rational Application Developer Portal Tools team in the IBM India Software Labs, in Gurgaon. He has worked on automation for reliability and functional tests. He holds a Masters degree in computer applications.

Awanish Kumar Singh, Associate Software Engineer, IBM

Awanish Kumar Singh is a software engineer at IBM India Software Labs in Delhi. He works in the Rational Application Developer Portal Tooling team and focuses on automation and functional verification testing.

22 June 2010

Also available in Chinese and Russian


Various automation tools enable testers to record a test script, play it back, and reuse the same script at a later time. One basic criterion for judging the quality of automation is the stock of verification points that the script contains to catch predictable as well as unforeseen errors. IBM® Rational® Functional Tester supports multiple types of verification points (for example, Data, Properties, and Image verification points), which are inserted into automated test scripts to cover typical problems that occur in applications. However, in some cases, verification points do not apply, or the captured data is not useful in problem determination. In these cases, instead of adding verification points to the test script, testers must code the validation mechanism. This article shows how to do this.

Eclipse is an integrated development environment (IDE) that is used mainly for development purposes. IBM Rational Functional Tester is built on Eclipse. This article assumes that readers are familiar with the Eclipse environment, with Rational Functional Tester configuration for applications under test, and with recording and playing back test scripts and the content of test scripts, and therefore does not cover these areas in detail. We have used IBM Rational Application Developer (an Eclipse-based application) as the application under test.

The most common use case in IBM Rational Application Developer is to create a new project with a wizard. After you close the wizard, you have a project that is similar to a Rational Functional Tester project and, depending on the type of project you created, contains Java, XML, and some properties files. Most of the other use cases related to a project involve GUI-based operations, such as dragging and dropping a text box onto a .jsp file, which results in changes to the new project.

IBM Rational Application Developer reports any compile time errors or warnings in the Problems view. Any exceptions reported during runtime are stacked in the Error Log view (log file of the workspace). Testers record the test script and insert verification points based on the functionality to test, and they check these two views for any error reported during the test case run.

Because the Problems view reports compile-time errors, it is simple to use a verification point to detect these errors. Rational Functional Tester marks the test case as failed and provides a snapshot of the Problems view showing the error reported.

However, it is not always feasible to use verification points to capture the complete details associated with the application under test, because the exceptions shown in the Error Log view are raised while the task is being performed. Therefore, your operations on the application under test might succeed, yet not be clean, because of the exceptions or warnings in the log file.

In the Problems view, the limitation arises when the errors are so numerous that they cause the view to scroll. Because of this scrolling, you cannot see all of the errors in the failure snapshot that Rational Functional Tester provides.

In the Error Log view, the limitation arises because the detailed stack trace for the exceptions cannot be reported through this snapshot.

Tracing both of these views is required for better problem determination and minimizing the risk of skipping a defect.

This article explains how you can capture the Problems view and the error log of the application under test, filter them, and customize them to determine problems and identify defects. The references to workspace logs in this article can be applied to any application under test that generates a log file.

Preparing Rational Functional Tester to test Eclipse-based applications

To use IBM Rational Functional Tester for Eclipse-based applications, you have to configure the application and enable the environment for testing. To configure an application:

  1. Open Rational Functional Tester.
  2. In the Main menu, click Configure > Configure Application for testing. The Application Configuration Tool window opens (see Figure 1).
  3. Add details about the application that you want to test.
  4. Click Finish to save the changes.
Figure 1. Configure your application dialog
Application Configuration Dialog

To enable the Eclipse environment for testing:

  1. Click Configure > Enable environment for testing. The Enable Environments window opens (see Figure 2).
  2. Select the Eclipse instance, and click Enable. If the Eclipse environment is not listed, click Search.
  3. Click Finish to save the changes.
Figure 2. Enable Environments dialog
Enable Environments Dialog

Recording and playing back test scripts

There are multiple ways to code test scripts, depending on the skill of the tester and the complexity of the application under test. Like other automated testing tools, Rational Functional Tester enables a tester to simply record GUI operations and play them back against an application under test. A tester can also code a test case manually, that is, without using the recording capabilities of the tool, and capture the test objects later by identifying them with the tool.

Capturing and filtering the additional logs

Getting this additional log information is a two-step process: first, capturing the additional informative logs, and second, filtering the relevant information for speedy postexecution analysis.

Place these log-collection methods precisely between the end of the scenario in a test case and the point just before you restore or clean up the test environment for the next test case.
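This placement can be sketched as a test-script skeleton. The method names below mirror the log-collection methods described in this article, but the bodies here are illustrative stubs that only record the call order; in a real script they would contain the Rational Functional Tester calls.

```java
import java.util.ArrayList;
import java.util.List;

// Skeleton showing where the log-collection calls belong in a test script.
// The scenario, log-collection, and cleanup steps are stubs that record
// their names purely to illustrate the required ordering.
public class TestSkeleton {
    List<String> steps = new ArrayList<String>();

    void runScenario()      { steps.add("scenario"); }
    void errorDetails()     { steps.add("capture Problems view"); }
    void createLogs()       { steps.add("capture Error Log"); }
    void createFilterLogs() { steps.add("filter logs"); }
    void cleanupTestData()  { steps.add("cleanup"); }

    public void testMain() {
        runScenario();          // the actual test steps and verification points
        errorDetails();         // then collect the additional logs...
        createLogs();
        createFilterLogs();
        cleanupTestData();      // ...and only then restore the environment
    }
}
```

The ordering matters because the cleanup step deletes the test data (and with it the Problems view entries) that the log-collection methods read.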

Capturing additional logs

Usually multiple test cases are added to a test suite and the complete suite is run. After every test case, the application under test is restored to its initial state, which involves cleaning up the test data created during the test case run.

The errors reported in the Problems view are project-specific compile-time errors. Therefore, after the run completes, the project is deleted, and the information is lost.

In contrast, the information in the log file is retained as the tester's actions continue. The limitation of the log file is that, at the end of the test suite, you cannot determine which exception belongs to which test case.

Therefore, before a test case run, you must clean or delete the log file to ensure that the log entries belong only to the most recently run test case.
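As a minimal sketch of this cleanup step in plain Java, truncating the file is enough. The path is an assumption for illustration; substitute the .metadata/.log location of your own workspace.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class LogCleaner {
    // Truncate the workspace log so that subsequent entries belong
    // only to the next test case run.
    public static void cleanLog(String logPath) throws IOException {
        File log = new File(logPath);
        if (log.exists()) {
            // Opening the file in overwrite mode and closing it
            // immediately empties it without deleting it
            new FileWriter(log, false).close();
        }
    }
}
```

Truncating rather than deleting avoids any question of whether the application recreates the file with the expected permissions.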

After the log file is cleaned or deleted, code the file-handling methods to fetch the details from this log file and place these methods into the test script after the test execution methods or code. To achieve the best results, record and maintain test scripts in an atomic format; that is, one use case per test script.

Capturing information from the Problems view

As mentioned earlier in this article, the verification points provided by Rational Functional Tester can point out the occurrence of compilation errors and can provide a snapshot of the Problems view. However, for better problem determination and analysis, you need to fetch the complete list, which may not be displayed in the snapshot. The following method explains how to fetch this list.

Pass the scenario name as the argument to the method; it becomes the headline for the data captured in the log file. This relates the data in the ErrorDetails file to the scenario, because the entries in the log file are preceded by the scenario name.

Sample code to collect data from the Problems view

The following sample code first opens the Problems view, then checks whether the Problems view contains an entry. If an entry is found, the code checks whether the entry is an error (the view lists warning, as well as error, messages). If the entry is an error, the code creates the file ErrorDetails.text in C:\TestResults.

Listing 1: Sample code to collect data from the Problems view
public void errorDetails(String headline_ErrorFile) {
    // You can pass your test case headline to the errorDetails method as an argument
    String path_ProblemView = "Window->Show View->Problems";
    // Path, or steps, to open the Problems view to capture data
    String path_ErrorDetails = "C:\\TestResults\\ErrorDetails.text";
    // The path where the additional log file will be created
    AppObject.ObjectMapper p1 = new AppObject.ObjectMapper();
    ITestDataTreeNodes tn = p1.getProblems_View().getTreeHierarchy().getTreeNodes();
    int rootcount = tn.getRootNodeCount();
    if (rootcount != 0) {
        String heading = tn.getRootNodes()[0].getNode().toString();
        if (heading.contains("Errors")) {
            int entries = tn.getRootNodes()[0].getChildCount();
            // Expand the Errors node so that every child entry is readable
            p1.getProblems_View().click(atPath(heading + "->Location(PLUS_MINUS)"));
            StringBuffer str = new StringBuffer(headline_ErrorFile + "\r\n");
            for (int errorCount = 1; errorCount <= entries; errorCount++) {
                str.append(" " + errorCount + "-"
                    + tn.getRootNodes()[0].getChildren()[errorCount - 1].getNode().toString()
                    + "\r\n");
            }
            try {
                // Append the collected entries to the ErrorDetails file
                BufferedWriter out = new BufferedWriter(
                    new FileWriter(path_ErrorDetails, true));
                out.write(str.toString());
                out.close();
            } catch (IOException e) {
                System.out.println("Please check the file " + path_ErrorDetails);
            }
        }
    }
}

Capturing information from the Error Log view (log file)

In addition to the Problems view, where compile-time errors and warnings are reported, runtime exceptions are reported in the log file of the Eclipse workspace. Therefore, you must check this file after each scenario is executed during the test run. Checking the file after each scenario ensures that you can determine which exception belongs to which scenario.

The following sample code first checks whether an entry was made to the log file after the test case completes. If there is no entry in the log, the code skips the rest of the method, because the test case execution is clean and there is no information to capture. If the log file contains any entries, the code creates a text file based on the scenario name, scans the log file, and writes the entries to that text file.

Listing 2: Sample code to collect data from the log file
public String workspace = "C:\\Sampleworkspace\\.metadata\\.log";
// Path of the workspace log to read the captured data from; it can also be
// passed as an argument to the method.

public void createLogs(String LogPath) {
    /* LogPath stores the location of the new file that holds the information,
       for example: "C:\\TestResults\\" + "Test Scenario" + ".txt"; */
    AppObject.ObjectMapper p1 = new AppObject.ObjectMapper();
    p1.getMenu().click(atPath("Window->Show View->Other..."));
    p1.getShowViewTree().click(atPath("General->Error Log"));
    int entries = p1.getErrorLog_View().getTreeHierarchy().getTreeNodes().getRootNodeCount();
    if (entries != 0) {
        try {
            Scanner scanner = new Scanner(new File(workspace));
            StringBuffer str = new StringBuffer("Complete Stack traces of All components");
            while (scanner.hasNext()) {
                str.append("\r\n" + scanner.nextLine());
            }
            scanner.close();
            try {
                // Write the collected log entries to the scenario-specific file
                BufferedWriter out = new BufferedWriter(new FileWriter(LogPath, true));
                out.write(str.toString());
                out.close();
            } catch (IOException e) {
                System.out.println("Please check the file " + LogPath);
            }
        } catch (FileNotFoundException e) {
            System.out.println("Workspace log file not found: " + workspace);
        }
    }
}

Filtering the captured information (logs)

The previous examples capture the additional logs but do not ascertain their relevance. During problem determination and problem analysis, however, teams are primarily interested in the errors or exceptions that are related to their component. This section discusses how to filter the captured logs. To filter logs, you pass a string based on the information you want to be filtered, such as your component or plug-in name.
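The filtering idea can be tried outside Rational Functional Tester with plain Java. The sketch below assumes Eclipse-style log entries, in which a stack trace begins at a line starting with !STACK and ends at a blank line; the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class StackFilter {
    // Return only those stack-trace blocks that mention the given component.
    // A block starts at a line beginning with "!STACK" and ends at a blank
    // line, matching the Eclipse workspace .log format.
    public static List<String> filterStacks(List<String> logLines, String filter) {
        List<String> matches = new ArrayList<String>();
        StringBuilder current = null;
        for (String line : logLines) {
            if (line.startsWith("!STACK")) {
                current = new StringBuilder();   // a new block begins
            }
            if (current != null) {
                current.append(line).append("\n");
                if (line.isEmpty()) {
                    // The block is complete; keep it only if it matches
                    if (current.toString().contains(filter)) {
                        matches.add(current.toString());
                    }
                    current = null;
                }
            }
        }
        // A block that runs to end-of-file without a blank line is also checked
        if (current != null && current.toString().contains(filter)) {
            matches.add(current.toString());
        }
        return matches;
    }
}
```

Passing your plug-in name, such as com.ibm.myplugin, as the filter string keeps only the stack traces your team needs to analyze.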

You pass the location of the filtered log file as a parameter, which can use the scenario name (for example, C:\TestResults\Test Scenario_Filter.txt), along with a string that is used to filter the log files. LogPath is the same path used in the previous method, so it must be set to the captured file that is to be filtered.

Listing 3: Sample code to filter/sort the desired logs
public void createFilterLogs(String Filter, String LogPath, String FilterLogPath) {
    try {
        Scanner scanner = new Scanner(new File(LogPath));
        boolean stackFound = false;
        StringBuffer str = null;
        while (scanner.hasNext()) {
            String lineData = scanner.nextLine();
            if (lineData.startsWith("!STACK")) {
                // A stack trace starts here; begin collecting it
                stackFound = true;
                str = new StringBuffer("Stack Trace of our Component \r\n");
            }
            if (stackFound) {
                str.append("\r\n" + lineData);
                if (lineData.equals("")) {
                    // A blank line ends the stack trace; keep it only if it
                    // matches the filter string
                    stackFound = false;
                    if (str.toString().contains(Filter)) {
                        try {
                            BufferedWriter out = new BufferedWriter(
                                new FileWriter(FilterLogPath, true));
                            out.write(str.toString());
                            out.close();
                        } catch (IOException e) {
                            System.out.println("Please check the file " + FilterLogPath);
                        }
                    }
                }
            }
        }
        scanner.close();
    } catch (FileNotFoundException e) {
        System.out.println("Captured log file not found: " + LogPath);
    }
}

Contents of the ErrorDetails file

Using these methods, you receive customized and filtered logs, as shown in Figure 3. The ErrorDetails file contains the compilation errors that occurred in the workspace for all the tests performed, including the scrolled entries that are lost in the Rational Functional Tester snapshot.

Figure 3. List of log files generated
List of logs generated

Figure 3 lists numerous files, such as IBMbasic_WP61.text, IBMbasic_WP70 stub.text, IBMfaces_WP61.text, and JSRbasic_WP70 stub.text. However, only two files have the word _Filter appended to their names. This means that only these two scenarios have exceptions that match the Filter string provided in the code and are of interest for problem determination.
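To examine the filtered files first during analysis, a short listing sketch like the following can pick them out of the results directory; the class name and directory layout are assumptions matching Figure 3.

```java
import java.io.File;
import java.io.FilenameFilter;

public class FilteredLogFinder {
    // List the result files whose names carry the _Filter marker, so they
    // can be examined before the unfiltered logs.
    public static String[] findFilteredLogs(String resultsDir) {
        File dir = new File(resultsDir);
        String[] names = dir.list(new FilenameFilter() {
            public boolean accept(File d, String name) {
                return name.contains("_Filter");
            }
        });
        // dir.list returns null if the directory does not exist
        return names == null ? new String[0] : names;
    }
}
```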

Figure 4 shows the contents of a typical ErrorDetails.text file. From this file, you can determine the compilation errors that are related to a particular test case. In Figure 4, you will see "Error while creating jsrbasicPortlet Targeted on WebSphere portal v6.1." This is the headline passed to the errorDetails() method as a string; it can be customized or concatenated from multiple parameters used in a particular test case. The numbered entries that follow it, up to the next headline, are the problems listed in the Problems view while that test case was executed.

Figure 4. Contents of the ErrorDetails file
Snapshot of ErrorDetails log


The advantages of this approach are:

  • The logs provide detailed information about exceptions and compilation errors to uncover defects and identify enhancements.
  • The logs are available as soon as one test scenario is complete, so you need not wait for all scenarios to finish before starting log analysis. Therefore, problem determination is much faster.
  • Exceptions matching the filter string are copied to files whose names carry the _Filter suffix. This enables you to identify and examine the relevant logs first and scan the others later.
  • The logs can be saved for future reference, to check for existing defects in a particular build used for a previous test run.


