Understanding the automation test framework of Lotus C API toolkit


This article shows readers how to develop an automation test framework for their own products and explains how such a framework improves testing efficiency, especially for other API testing teams.

Lotus C API toolkit

The Lotus C API toolkit is a set of subroutines and data structures that allows you to write programs that access IBM Lotus Domino® databases. It is the software development kit (SDK) for Lotus Domino administrators and programmers who want to access Lotus Domino databases programmatically. To develop Lotus Domino-based applications with the Lotus C API toolkit, you should be familiar with Lotus Domino topics such as fields, forms, views, categories, and access control lists. You also need to know the C programming language.

To learn more about the Lotus C API toolkit and its programming, refer to Lotus C API Toolkit for Lotus Notes and Domino documentation and the developerWorks® Lotus article, "API programming for Lotus Notes/Domino."
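
To give a feel for the programming model before diving into the toolkit structure, here is a minimal sketch of a Lotus C API program. It initializes the Notes runtime, opens a database, prints its title, and cleans up. The database path names.nsf is only an example, and the exact set of header files can vary by toolkit release.

	/* Minimal sketch of a Lotus C API program. The database path
	   "names.nsf" is an example, not part of the toolkit samples. */
	#include <stdio.h>
	#include "global.h"
	#include "nsfdb.h"

	int main(int argc, char *argv[])
	{
	    STATUS   error;
	    DBHANDLE hDb;
	    char     info[NSF_INFO_SIZE]  = "";
	    char     title[NSF_INFO_SIZE] = "";

	    /* Initialize the Lotus Notes runtime before any other API call. */
	    error = NotesInitExtended(argc, argv);
	    if (error != NOERROR)
	        return 1;

	    error = NSFDbOpen("names.nsf", &hDb);
	    if (error == NOERROR)
	    {
	        /* Read the database information buffer and pull out the title. */
	        error = NSFDbInfoGet(hDb, info);
	        if (error == NOERROR)
	        {
	            NSFDbInfoParse(info, INFOPARSE_TITLE, title, NSF_INFO_SIZE - 1);
	            printf("Database title: %s\n", title);
	        }
	        NSFDbClose(hDb);
	    }

	    /* Always terminate the runtime before exiting. */
	    NotesTerm();
	    return (error == NOERROR) ? 0 : 1;
	}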

Structure of the Lotus C API toolkit

You can download the Lotus C API toolkit here.

After you download and extract the package, you see the file structure of Lotus C API toolkit shown in figure 1.

Figure 1. Lotus C API toolkit structure

The file structure includes the following:

  • notesapi folder. This folder is the root of the Lotus C API toolkit structure. It contains subfolders and readme files. Readme.pc, readme.unx, and readme.i5OS under this folder are the readme files for the Microsoft® Windows®, UNIX®, and System i® operating systems, respectively.
  • cmp folder. This folder contains the cmp files that show the standard compile/link flags for each operating system. To develop applications based on the Lotus C API toolkit, review these cmp files first to set your own compile/link flags.
  • doc folder. This folder contains Lotus C API toolkit documents including reference and user guide documentation.
  • include folder. This folder contains all the Lotus C API toolkit header files.
  • lib folder. This folder contains the Lotus C API toolkit libraries for each operating system.
  • notesdata folder. This folder contains all databases for the Lotus C API toolkit samples.
  • sample folder. This folder contains sample programs.

Lotus C API toolkit test wrksuite

The Lotus C API toolkit test wrksuite is the suite of automation test cases for testing Lotus C APIs. Users can add new test cases to the suite to cover newly exposed features, and they can remove outdated cases. As of Lotus Domino 8.0.1, 742 test cases have been developed, and the number grows with each new release of Lotus Notes® and Domino.

The Lotus C API toolkit test wrksuite structure is shown in figure 2.

Figure 2. Lotus C API toolkit test wrksuite structure

The wrk folder is the root of the wrksuite. It contains three main subfolders: the central folder, the common folder, and the notes folder.

  • central folder. This folder contains the automation testing script tapi.ksh, a Korn shell (ksh) script, along with other required files. tapi.ksh is the core of the automation test framework; it controls building and running all test cases and logging their results. It is discussed in detail in the following section.
  • common folder. This folder holds common files for all wrksuite test cases.
  • notes folder. This folder contains all automation test cases for the Lotus C API toolkit. Users can add new cases in this folder.

Automation test framework

The Lotus C API toolkit contains more than 700 test cases, and the number increases with each new release of Lotus Notes and Domino. Because running these cases one by one manually is impractical, the toolkit team designed and developed an automation test framework to improve efficiency. The framework contains all the automation test cases, and testers can add new cases to it in a simple way.

The automation test framework is shown in figure 3. The central control is a Korn shell script named tapi.ksh. All a tester needs to do is start tapi.ksh manually and wait for the test result. tapi.ksh executes the following steps to implement automation testing:

  1. Search for all test cases under the test directory and create the test case list file tapi.lst. The test directory can be the notes directory (to test all cases) or one of its subdirectories (to test a subset of cases). Listing 1 shows the part of tapi.ksh that searches for test cases and creates the list file.
Listing 1. tapi.ksh, partial code for searching test cases and creating the list file
	if [ "$Included" = '' ] ; then
		$ECHO "Searching for tests."
		find . -type d -print | grep $grepsw '[/\\][Uu][Ii]' > $tt
	else
		$ECHO "Running tests from $Included"
		cp $Included $tt
	fi
	sort $tt > $caselist
	if [ "$Exclude" != '' ] ; then
		$ECHO "Tests in $Exclude will not be run"
		sort $Exclude | comm -23 $caselist - > $tt
		mv $tt $caselist
	fi
		rm -f $tt
  2. For each test case listed in the list file, find the make file (located in the same directory as the source file) for the specified testing platform and build the case.
  3. Run each test case.
  4. Log the test results. Two main log files are generated under the test directory. One is suite.log, which logs the running status of all cases. The other is failed.log, which logs all the failed cases. Listing 2 shows the part of tapi.ksh that builds each case, runs it, and logs its status in the log files.
Listing 2. tapi.ksh, partial code for building cases, running cases, and logging their status
while [ $(cat $caselist | wc -l) -gt 0 ]
do
	# Take the next case from the list.
	i=$(head -n 1 $caselist)
	case_path="$suite_path/$i"
	cd $case_path
	case_log="$case_path/$c_log"
	if [ -f $MAKEFILE ] ; then
		$ECHO "$i...\c" | tee -a $log
		total=`expr $total + 1`
		if [ "$TARGET" = '' ] ; then
			# Build the case with its make file.
			$Make -f $MAKEFILE > $case_log 2>&1
			status=$?
			if [ $status -ne 0 ] ; then
				$ECHO "...BUILD failed" | tee -a $log
				$ECHO "$i......BUILD failed" >> $f_log
				if [ -f $k_log ] ; then
					pr -t -o 5 $k_log | tee -a $log | tee -a $f_log
				fi
				buildfailures=`expr $buildfailures + 1`
			else
				$ECHO "...BUILD ok\c" | tee -a $log
				rm -f $case_log
				# Run the case.
				$Make -f $MAKEFILE TEST > $case_log 2>&1
				status=$?
				if [ $status -ne 0 ] ; then
					$ECHO "...TEST failed" | tee -a $log
					$ECHO "$i......BUILD ok...TEST failed" >> $f_log
					if [ -f $k_log ] ; then
						pr -t -o 5 $k_log | tee -a $log | tee -a $f_log
					fi
					testfailures=`expr $testfailures + 1`
				else
					$ECHO "...TEST passed" | tee -a $log
					$Make -f $MAKEFILE CLEAN > $case_log 2>&1
					status=$?
					if [ $status -ne 0 ] ; then
						$ECHO "...CLEAN failed" | tee -a $log
					else
						rm -f $case_log
					fi
					if [ -f $k_log ] ; then
						$ECHO "$i......BUILD ok...TEST passed" >> $f_log
						pr -t -o 5 $k_log | tee -a $log | tee -a $f_log
					fi
				fi
			fi
		else
			# An explicit make target was requested; run it directly.
			$Make -f $MAKEFILE $TARGET > $case_log 2>&1
			status=$?
			if [ $status -ne 0 ] ; then
				$ECHO "...$TARGET failed" | tee -a $log
			else
				$ECHO "...$TARGET ok" | tee -a $log
				rm -f $case_log
			fi
		fi
	fi
	# Remove the completed case from the list.
	tail -n $(($(cat $caselist | wc -l) - 1)) $caselist > tapi.tmp
	mv tapi.tmp $caselist
done

To start automation testing under the proposed framework, testers only need to change to the test directory and execute the tapi.ksh script. All the test cases are then built and run automatically, and the test results are written to the log files. Testers can check the results in the log files and decide how to proceed.

Figure 3. Automation test framework

Test program structure

To implement automation testing, all test cases follow the common program structure described here; a minimal C sketch of this structure appears after the list. NOTE: All the steps below must be followed in sequence.

  1. Initialize the Lotus Notes runtime. Initialize the Lotus Notes runtime before you call any other Lotus C API. The NotesInit and NotesInitExtended APIs initialize the Lotus Notes runtime system for all environments, so Lotus C API applications should call one of them as their first step, before testing any other functions.
  2. Create and prepare the test data. Before testing the desired APIs, first create and prepare test data if needed. For example, if a Lotus C API creates a field in a form, you should first create a Lotus Notes document and open it before testing this API.
  3. Execute testing APIs. Call the desired Lotus C APIs for testing. APIs can be executed many times for testing different combinations of arguments.
  4. Verify the test result. To implement automation testing, verify the APIs' execution results programmatically instead of manually. There are two ways to verify an API execution result: check the result against the expected value, or call the in-pair API (for example, verify a set operation with the corresponding get operation).
  5. Recover the test data. For test cases that create data, applications should remove that data after the test is complete so that it does not affect any following tests. This removal process is also a key step in automation testing.
  6. Log the test result. Log the test status in the failed.log and suite.log files, and log the output information in the output.log file.
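
A minimal C sketch of a test case following these six steps appears below. The database testdata.nsf and the choice of NSFItemSetText/NSFItemGetText as the API under test are assumptions for illustration; real wrksuite cases write their status to the suite.log, failed.log, and output.log files as described above.

	/* Sketch of a wrksuite-style test case following the six steps.
	   The database name and the tested APIs are illustrative only. */
	#include <stdio.h>
	#include <string.h>
	#include "global.h"
	#include "nsfdb.h"
	#include "nsfnote.h"

	int main(int argc, char *argv[])
	{
	    STATUS     error;
	    DBHANDLE   hDb;
	    NOTEHANDLE hNote;
	    char       text[64] = "";
	    int        failed = 1;

	    /* Step 1: initialize the Lotus Notes runtime. */
	    if (NotesInitExtended(argc, argv) != NOERROR)
	        return 1;

	    /* Step 2: create and prepare the test data (a new document). */
	    error = NSFDbOpen("testdata.nsf", &hDb);
	    if (error == NOERROR)
	    {
	        error = NSFNoteCreate(hDb, &hNote);
	        if (error == NOERROR)
	        {
	            /* Step 3: execute the API under test. */
	            error = NSFItemSetText(hNote, "Subject", "hello", MAXWORD);

	            /* Step 4: verify with the in-pair "get" API. */
	            if (error == NOERROR)
	            {
	                NSFItemGetText(hNote, "Subject", text, sizeof(text) - 1);
	                failed = (strcmp(text, "hello") != 0);
	            }

	            /* Step 5: recover the test data. The note was never saved,
	               so closing it discards the test document. */
	            NSFNoteClose(hNote);
	        }
	        NSFDbClose(hDb);
	    }

	    /* Step 6: log the test result (real cases write to the log files). */
	    printf(failed ? "TEST failed\n" : "TEST passed\n");

	    NotesTerm();
	    return failed;
	}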

Applying the framework to NotesSQL automation testing

As mentioned before, the Lotus C API automation test framework has been applied to other Lotus toolkit products: the Lotus C++ API toolkit and NotesSQL. In this section, we discuss NotesSQL automation testing to show how the Lotus C API toolkit automation test framework is applied to other API testing products.

NotesSQL is the Open Database Connectivity (ODBC) driver for Lotus Notes and Domino databases, and it implements the ODBC 2.0 (and later) specification APIs. It allows you to query data from Notes and Domino databases (that is, NSF files). Testing NotesSQL programmatically therefore means testing the ODBC 2.0 and later APIs implemented in the NotesSQL driver.
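
For illustration, the following minimal C sketch shows the kind of ODBC 2.0 call sequence that a NotesSQL test case exercises on Windows. The data source name NotesSQL and the query are assumptions for this example, not taken from the actual test suite.

	/* Sketch of an ODBC 2.0 call sequence against the NotesSQL driver.
	   The DSN and the SQL statement are illustrative only. */
	#include <windows.h>
	#include <stdio.h>
	#include <sql.h>
	#include <sqlext.h>

	int main(void)
	{
	    HENV   henv  = SQL_NULL_HENV;
	    HDBC   hdbc  = SQL_NULL_HDBC;
	    HSTMT  hstmt = SQL_NULL_HSTMT;
	    char   subject[256];
	    SDWORD len;

	    /* Allocate environment and connection handles (ODBC 2.0 style). */
	    SQLAllocEnv(&henv);
	    SQLAllocConnect(henv, &hdbc);

	    /* Connect through the NotesSQL data source. */
	    if (SQLConnect(hdbc, (UCHAR *)"NotesSQL", SQL_NTS,
	                   NULL, 0, NULL, 0) == SQL_SUCCESS)
	    {
	        SQLAllocStmt(hdbc, &hstmt);

	        /* The driver exposes Notes forms and views as tables. */
	        if (SQLExecDirect(hstmt,
	                (UCHAR *)"SELECT Subject FROM Main_Topic",
	                SQL_NTS) == SQL_SUCCESS)
	        {
	            while (SQLFetch(hstmt) == SQL_SUCCESS)
	            {
	                SQLGetData(hstmt, 1, SQL_C_CHAR,
	                           subject, sizeof(subject), &len);
	                printf("%s\n", subject);
	            }
	        }
	        SQLFreeStmt(hstmt, SQL_DROP);
	        SQLDisconnect(hdbc);
	    }

	    SQLFreeConnect(hdbc);
	    SQLFreeEnv(henv);
	    return 0;
	}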

NotesSQL automation testing is controlled by a Korn shell script, nsqln.ksh (similar to the Lotus C API automation test script, tapi.ksh). NotesSQL developers created 646 automation test cases, and nsqln.ksh builds and runs all of them automatically. With the automation testing script, all testers need to do is start nsqln.ksh in the Korn shell environment and wait for the NotesSQL test result. After nsqln.ksh finishes, the suite.log and problem.log files (the latter similar to the failed.log file of the Lotus C API suite) are created, and testers can see the running status of the test cases in these log files.

Compared to the Lotus C API automation test framework, there are two main differences in NotesSQL:

  • Build and run separation. The NotesSQL automation test script separates the build and run processes and controls them through arguments. To execute NotesSQL automation testing, users first build all cases with the nsqln.ksh build command and then run them with nsqln.ksh test. The build and run processes generate separate log files.
  • Running status. There are two running statuses in Lotus C API toolkit testing: pass and fail. NotesSQL adds two more: same as gold file and reconciliation error. In the Lotus C API automation test framework, you validate an API execution result by comparing the expected value with the value returned by the tested API. In NotesSQL testing, though, an SQL statement can return a lot of data, and validating the result by comparing the extracted data with the expected data inside the program would make the program complicated. Therefore, NotesSQL takes a different approach. A file.gld file holds the expected data in advance, and the NotesSQL test case writes the returned data to a data.log file. In its final step, the test case compares the file.gld and data.log files. If all the data matches, it returns the status "same as gold file." Otherwise, it returns a reconciliation error and logs the difference in the diff.log file. A sketch of this comparison appears after this list.
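
To make the reconciliation step concrete, the following minimal C sketch compares file.gld and data.log line by line, reporting "same as gold file" on a match and a reconciliation error otherwise. It is an assumed illustration; the exact comparison and diff.log output that NotesSQL produces may differ.

	/* Sketch of gold-file reconciliation: compare expected data in
	   file.gld with returned data in data.log, line by line. */
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
	    FILE *gold = fopen("file.gld", "r");
	    FILE *data = fopen("data.log", "r");
	    char  g[1024], d[1024];
	    int   mismatch = 0;

	    if (gold == NULL || data == NULL)
	        return 2;   /* a missing input file counts as a failure */

	    for (;;)
	    {
	        char *pg = fgets(g, sizeof(g), gold);
	        char *pd = fgets(d, sizeof(d), data);
	        if (pg == NULL && pd == NULL)
	            break;                    /* both exhausted: files match */
	        if (pg == NULL || pd == NULL || strcmp(g, d) != 0)
	        {
	            mismatch = 1;             /* length or content differs */
	            break;
	        }
	    }
	    fclose(gold);
	    fclose(data);

	    printf(mismatch ? "reconciliation error\n" : "same as gold file\n");
	    return mismatch;
	}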

Log file analysis

After a wrksuite run finishes, log files record the running status of the test cases. There are two log files: suite.log and failed.log.

The suite.log file logs the status of all test cases. It contains three parts: the log header, the log body, and the statistics.

See tables 1 and 2.

Table 1. The suite.log file details

  • Header. The header part of suite.log logs information such as the tapi version, the test case root folder, and the log file paths. Sample content:

	tapi Version 3.3 starting Thu Feb 28 10:34:40 EST 2008
	Suite root is C:/cqe/wrk/notes
	This screen output is captured in C:/cqe/wrk/notes/suite.log
	Failures are also logged to C:/cqe/wrk/notes/failed.log
	Individual case logfile is {case-dir}/case.log
	Running nmake -f mswin32.mak -e excl
	--- Skipping UI tests.

  • Body. The body part of suite.log logs each test case's running status. There are three status designations for each case: {BUILD ok...TEST passed}, {BUILD ok...TEST failed}, and {BUILD failed}. Sample content:

	./formula/compend......BUILD ok...TEST passed
	./formula/compile......BUILD ok...TEST failed

  • Statistics. The statistics part logs the overall result; users can get the test case statistics here. Sample content:

	Done Thu Feb 28 10:58:12 EST 2008
	0 builds failed out of 490.
	6 tests failed out of 490.
	1 known failures out of a total of 6 failed tests.
	5 total unknown failures.
Table 2. The failed.log file details

  • Header. The header part of failed.log logs information such as the tapi version and the test execution time. Sample content:

	tapi Version 3.3 starting Thu Feb 28 10:34:40 EST 2008

  • Body. The body part of failed.log logs each failed case's running status, along with any passed cases that produce information output. Testers can get a failure overview here and then investigate the failed cases. Sample content:

	./aaafirst/showinfo......BUILD ok...TEST passed
	--------------------------------------------------------
	Compiled with Notes C API Release = Notes/Domino 8.0.1
	Client test platform = NT 3.x
	Client notes build | date = Build V801_01062008|January 06, 2008
	Current test user name = CN=Jack Tester/O=toolkit
	API test server = LGCToolkit/toolkit
	--------------------------------------------------------
	./adminp/reqmhier......BUILD ok...TEST failed
	see also: SPR#YQR06W6A5F
	can not rename user when i call ADMINReqMoveUserInHier
	./formula/compile......BUILD ok...TEST failed

Plug-in testing

Plug-in testing in the proposed automation test framework means that testers can add new components (test cases) to the framework in a plug-in fashion. New features (new subroutines and data structures) of the Lotus C API toolkit are issued with each new version of Lotus Notes and Domino, and all APIs in the toolkit are fully tested before delivery.

Figure 4 shows plug-in testing in the automation test framework. In Lotus Notes and Domino 7.0.2, a series of Multipurpose Internet Mail Extensions (MIME) APIs was exposed in the Lotus C API toolkit. We designed the MIME component, created its subcomponents (mimedir, cd2mime, and so on), and plugged it into the test framework for automation testing. This shows that the automation test framework provides an efficient way to test new features.

Figure 4. Plug-in testing

Conclusion

The automation test framework of the Lotus C API toolkit is an efficient approach to API automation testing. This article introduced the design of the automation test framework and its significance, helping readers learn how to develop an automation API test tool for their own products.

