
An introduction to runtime analysis with Rational PurifyPlus

Goran Begic, Senior IT Specialist, IBM, Software Group
Goran Begic joined Rational Software in the Netherlands in 1999. Since then, he has worked in technical support, field enablement, product management, and sales for the IBM Rational PurifyPlus family of developer testing tools. He also has expertise in implementing agile development practices. In 1996, he earned a bachelor of science in electrical engineering from the University of Zagreb.

Summary:  This article discusses runtime analysis in the context of other Rational best practices and outlines its enormous benefits to software developers, testers, and managers.

Date:  19 Nov 2003
Level:  Intermediate


Editor's note: Some of the features described in this article are no longer supported. An updated version of this article was recently published in The Rational Edge.

There are some people you can never understand, even if you try your best. One moment they're happy and friendly but the next they're moody and angry. And the harder you work to figure out what's going on, the worse it gets.

Sometimes understanding a software application under development can be just as frustrating. One moment it produces seemingly positive results, and the next moment it crashes. But understanding software behavior is not as hard as penetrating the psyche of those moody people. A practice called "runtime analysis" can help.

Runtime analysis is not a new term; it has been in use for years. However, the term and the software development activities behind it have not been clearly defined or explained. This article will attempt to do so by placing runtime analysis in the context of other Rational best practices and outlining its enormous benefits to software developers, testers, and managers.

What Is Runtime Analysis?

Let's begin with a simple definition: Runtime analysis is a practice aimed at understanding software component behavior by using data collected during the execution of the component.

The term itself points to the main elements of this practice:

Runtime. The analysis does not include static ways of analyzing the source code of developed software and relationships between the software's building blocks. Rather, it provides valuable information about how the developed component -- or the whole application -- behaves when it runs, either in the test environment or in the final deployment environment.

Analysis. The activity is designed to provide explanations for various exposed or potential misbehaviors. Users play the most important role, because they will combine their logic, intelligence, and knowledge about software development with available data in order to provide answers to questions about what prevents the application from functioning correctly, or running in a timely fashion, or simply failing in some atypical situations.

Runtime analysis provides understanding of the following aspects of application execution:

  • Execution paths
  • Code coverage
  • Runtime tracing
  • Memory utilization
  • Memory errors and memory leaks in native applications
  • Memory leaks in .NET managed code and Java applications
  • Execution performance
  • Performance bottlenecks
  • Threading problems

Runtime Analysis: An Extension of Debugging

Debugging is a well-known activity, practiced by all software developers on a regular basis. Often we assume that an application will work as long as the source code is written with the correct syntax and the component compiles and links without any errors or warnings. That assumption is wrong!

Even if the compiler didn't report any errors, the application may not be ready to ship to the customer. Typically, when writing code, we test the basic functionality of a component first and make sure that all requirements are satisfied. Later on in the development cycle, QA (Quality Assurance) teams usually test the software. Such QA tests often focus on the functionality of main use-case scenarios. So, if the functionality of all major use-case scenarios is confirmed, then is the application ready to be shipped to the customer? The answer is still no. The application might still crash on some machines, in some combination of scenarios, or in some untested scenarios -- and its performance might not be satisfactory. The task of all development roles should be to minimize the probability of shipping faulty code to the customer, and the best way to do this is to perform tests as early in the development cycle as possible.

It is helpful to think of runtime analysis as an extension of standard debugging tools and methods that can help teams uncover peculiar -- and sometimes very difficult to resolve -- problems.


Data Collection During Development Testing

Data that allows runtime analysis of every detail of application execution is collected while the application is being tested. This testing can be developer testing: The person who implements features runs basic tests of the developed component's functionality -- by running either unit tests or component tests. Software runtime analysis data can also be collected during QA testing of the application. This type of data collection is often referred to as white-box testing, because the goal is to collect information about the (visible) application internals, whereas functional testing without any insight into the application internals is called black-box testing. Testers who perform black-box testing may not be interested in runtime analysis logs and reports, but testers can collect white-box testing data during black-box testing and use white-box information for describing and reporting either functionality or performance problems to developers.

You can increase the quality of the runtime analysis data you collect during testing by using testing automation tools, such as Rational® Robot, which can record testing scenarios and play them back over and over again. You can also analyze the runtime analysis data you collect on the same use cases for different iterations of the developed software components. This will give you a better understanding of not only one iteration of the developed software, but also of the impact of newly introduced changes on product quality. If software quality drops between two consecutive iterations of the component, runtime analysis data makes it very easy to find the responsible feature or code change.


Runtime Analysis in the Software Development Lifecycle

One can argue about the best way to develop software, but I think we can all agree that a methodical approach is more likely to deliver high-quality results than an ad hoc approach without planning or role assignments. And whether you design first and then implement features, write tests before working on the code, or even skip process steps and just start with code, the final result -- the developed component or application -- has to be tested for functionality. It should also be associated with requirements to ensure that the final product matches users' needs. And sooner rather than later, it will require debugging. If you have ever developed software, you know that it can easily run off course. To deliver reliable software, you need to understand exactly how the application executes. This understanding should encompass not only the application's logic, but also its performance and memory considerations.

Requirements First

Requirements often focus on functionality. But, as many who use Rational® RequisitePro® -- Rational's automated requirements management tool -- have discovered, it's also important to establish requirements that ensure application quality, both internally and from your customers' perspective. For example, two such requirements might be stated as follows:

  • The server component should use the same amount of memory before and after each client session.

And

  • The memory used by the application before and after use-case scenario #13 should be the same.

Both of these requirements sound logical, don't they? But I have seen applications in use for years that don't meet either one. This might be due to a memory leak, which can seriously damage the final product's quality and the vendor's reputation. In some extreme cases (e.g., a memory leak in the server side component), it can even cause the application -- or the whole system -- to crash. Fortunately, you can use runtime analysis to detect memory leaks during development, so that you can meet these requirements and deliver a high-quality product.

Here is another example of a vital quality requirement, this time for a Web application:

  • The response time of the component for use-case scenario #91 must be less than or equal to five seconds.

Again, runtime analysis can help ensure that you meet this requirement. (And if you do not, the user may well start looking for the information on some other Web site.) And finally, here is an example of a vital requirement for testing:

  • The software is not ready for release unless the QA team has tested at least 60 percent of the available source code base.

Again, this requirement may sound trivial, but think about it. When you released products in the past, did you really know how much code in your application was tested and how much was left for users to "test" in their daily work? With runtime analysis, you can ensure that the source code is thoroughly tested.

Software Modeling

Personally, I like to dive into coding as soon as possible and work on a small team rather than in a large development group. However, I find that at some point in my own software development process, I have to start documenting the most important scenarios and interfaces in my application as well as dependencies in legacy code that I'm reusing, and so forth.

To do this, I could use a piece of paper, or maybe a shiny new tablet PC, and start drawing class and sequence diagrams with my own symbols. But if it is a serious development project, I can get help from an automated modeling tool such as Rational Rose®, Rational Rose® RealTime, or Rational® XDE™. Using the Unified Modeling Language (UML), I can create models that are easily understandable not just to me, but also to my colleagues and managers and the final users. I can use these models to define roles in my development team and document the application that will meet the requirements given to my group. I can also update my models from the code on a need-to-have basis.

So where does software modeling meet runtime analysis? Theoretically, I could imagine several intersections, but let's stay with one common problem that runtime analysis solves.

Creating sequence diagrams for existing code can be tiresome. To do it effectively, you need to understand exactly how your application should execute -- and this is where runtime analysis can really help. Rational® PurifyPlus for Linux and the Rational® Test RealTime family of products provide unique capabilities for creating UML sequence diagrams "on the fly," using runtime analysis information collected during testing (see Figure 1).

Figure 1: UML Sequence Diagram Created by Rational PurifyPlus for Linux

The benefits of this particular feature -- runtime tracing -- are obvious during debugging: It lets you visualize objects, method calls, and raised exceptions as you do development testing in a debugger.

Writing Code: Implementation and Debugging

At some point in the development lifecycle, the planned features need to be implemented in code; this code, in turn, needs to be compiled and linked into either a component that will be exercised through unit testing, or a debug version of the standalone application. Start with basic functionality that works, and then add features incrementally in order to avoid functionality flaws later in the process. The same applies to performance and memory utilization problems: the earlier you detect performance and memory bottlenecks, the easier it will be to fix them and to deliver software of higher quality. If you write tests even before you implement the code, you can define verification points not just for the component's functionality, but also for its performance and memory usage. This is where runtime analysis comes into play: since you have to test the functionality anyway, runtime analysis will provide you with exact information about the root cause of a problem, based on the information collected during functionality testing.

All major programming languages offer features for collecting additional information about the application's execution. Programming language features such as assert and trace, and keywords for exception handling, can help inform you about what has happened during application execution. The system APIs for timing can also help you gain information about performance, but as a data collection vehicle they quickly become ineffective, and they influence the performance of the tested application too much to be reliable.

In the code sample below, for example, assert will confirm or refute the assumption that you've made in the code about the result of a certain operation.

FILE* p = fopen("WordDoc.doc", "r");
assert( p );

If opening this file fails for some reason, the value returned by fopen() will be NULL. If you don't assert this return value, the application could continue executing, and you might never find out whether the file-opening operation actually succeeded.

Note that in this example, assert is only checking your assumption about the correct functioning of a certain method. In more complicated situations you can easily either lose sight of a certain scenario or make the wrong assumption, which would result in a runtime error.

The advantage of this debugging method is obvious, but there are some disadvantages as well:

  • You can't assert every single condition in your code. That would make the code extremely slow and difficult to maintain.
  • Numerous other errors can occur without any visible effects on the code execution; even if you combine all of the language capabilities we've mentioned, it may not be enough.
  • Sometimes the root cause of the error occurs much earlier in the application execution, and it is difficult to trace back to the problem.

As we mentioned earlier, it is also possible to measure application performance from within the code. Here is an example:

time = System.currentTimeMillis();
DoSomething();
time = System.currentTimeMillis() - time;
System.out.println("Measured time is " + time);

However, this profiling method doesn't allow you to profile the whole application and still discover details about each method, not to mention specific lines of code. The collected time is also not reliable because it may be influenced by other processes running on the same machine -- user interaction and so forth. For more detailed, reliable performance profiling, you need a specialized performance profiling tool like Rational® Quantify® or Rational PurifyPlus.

Another basic debugging tool is a debugger, such as the Visual Studio debugger or GNU gdb. A debugger allows you to stop the execution of an application at virtually any line of code: it replaces the machine instruction on the line where you've set the breakpoint with a special instruction that "freezes" execution of the application in the processor and allows you to examine the contents of objects, variables, function stacks, and registers at that point in the execution. However, the debugger will not tell you whether you have a memory or performance problem; it will only assist you in finding one if you have a hunch that it exists. A specialized runtime analysis tool such as Rational® Purify® or Rational PurifyPlus, on the other hand, will record every memory error -- with all the details -- as the error happens. It will put a breakpoint at the exact place where a memory violation happens, or it will allow you to examine the application internals after the run via the recorded runtime analysis data. Runtime analysis removes the guesswork from debugging!

Advanced Debugging with Runtime Analysis

The major goals of debugging are to find the root cause of defects and understand application behavior.

Runtime analysis provides additional capabilities that supplement traditional debugging:

  • Visualization of application execution.
  • Measurement of vital runtime parameters, including memory usage, performance, and code coverage.
  • Error detection in user code.
  • Documentation of runtime behavior.

We will examine these capabilities below.

Visualization of Application Execution

To understand this capability, we'll look at five examples.

Visualization Example 1: Runtime Tracing

First, let's see how a runtime analysis tool that does runtime tracing, such as Rational PurifyPlus for Linux, visually represents important runtime elements of the tested application. As Figure 2 shows, this capability means users can step through the code and see the interactions between objects at the same time.

Figure 2: Runtime Tracing with Rational PurifyPlus for Linux

Visualization Example 2: Code Coverage

Runtime analysis with a tool such as Rational® PureCoverage® (included in Rational PurifyPlus) provides various views of code coverage information, one of them being Annotated Source. This particular view shows the source file of the examined application; the color of each line indicates the line's status after the executed test case: hit, missed, dead, or partially hit.

As Figure 3 shows, the user can see code coverage and the execution path for this test case.

Figure 3: Rational PurifyPlus Display of Annotated Source for the C#.NET application in Visual Studio.NET

The code fragment in Figure 3 shows the exact path the application took when executing the switch statement on line 111. This particular line is marked as partially hit because line 122 hasn't been executed.

Visualization Example 3: Threads

A runtime analysis tool such as Rational Quantify (included in Rational PurifyPlus) provides thread visualization, which can assist in detecting multithreading problems by marking the state of each of the threads while debugging. As Figure 4 shows, this allows you to examine the status of threads visually, while debugging.

Figure 4: Rational Quantify Thread Analysis View in Visual Studio 6

Visualization Example 4: Call Graph

Runtime analysis tools can also detect and display performance bottlenecks. The big advantage of this approach, compared to traditional methods, is that you can get an excellent overview of the execution path as well as precise information about the number of calls to the methods involved in the scenario. As Figures 5A and 5B show, the Call Graph in Rational Quantify highlights the chain of calls in the most time-consuming execution path -- that is, the performance hotspot. The thickness of the line connecting two methods is proportional to the ratio between the time (or memory, if you are using Purify) spent in that chain of calls and in the rest of the application.

Figure 5A: Rational Quantify Call Graph of a Mixed VB.NET and C#.NET Application in Visual Studio.NET

Figure 5B: Rational Quantify Call Graph of a C/C++ Application on Solaris

Visualization Example 5: Memory Usage

The first step in handling memory leaks is to detect them. One very intuitive way to do this is to visualize overall memory usage and take snapshots of memory in the program under test (PUT). This lets you see potential memory leaks in the running application. (This feature is available in Rational Purify for Java and .NET managed applications.) For example, if snapshots of memory usage for the component running on the server show that overall memory usage increases after each client session, then it is very likely that this component leaks memory (see Figure 6).

Figure 6: Overview of Thread Status and Memory Usage in Rational Purify for Windows

Measurement of Vital Runtime Parameters

Visual error detection is just the first stage of runtime analysis. We also need to understand exactly what happens during the run. For that purpose, runtime analysis should be based on exact measurements of parameters vital for the application's execution:

  • Runtime performance
  • Memory usage
  • Code coverage

Again, we will look at examples to understand this runtime analysis capability.

Measurement Example 1: Function List View

Function List View is a typical runtime analysis view that can be generated with a specialized runtime analysis tool such as Rational Quantify (see Figure 7). It presents all important methods and/or objects of an application in tables that can be sorted by any of the measured parameters; this allows developers analyzing code to find which methods used the most available memory at a given point in time, as well as the slowest functions, the age of objects, and so forth.

This view provides exact information about the number of calls to methods, time spent in methods only, time spent and memory accumulated in selected methods and all their descendants, and so on.

Figure 7: Rational Quantify Function List View for a Visual C++ Application

Measurement Example 2: Function Detail View

A runtime analysis tool such as Rational Quantify can also extend the information in Measurement Example 1 to include information about the distribution of measured data between calling methods and descendants. This is shown in the Function Detail View (Figure 8). This view highlights callers and descendants that contribute to a performance or memory hotspot -- information that can help detect the exact cause of a performance or memory bottleneck.

Figure 8: Rational Quantify Function Detail View for a Visual C#.NET Application in Visual Studio.NET (with Rational XDE)

Measurement Example 3: Method Coverage Module View

As we explained earlier, in some cases -- and especially when assessing the value of available testing methods -- it is useful to measure the percentage of code covered while testing, or simply to mark all the methods that haven't been tested after a series of tests. You can do this with a tool such as Rational® PureCoverage®, which yields precise information about untested and dead code vs. tested code (Figure 9).

Figure 9: Rational PureCoverage Display of Code Coverage on the Method Level for a Mixed C#.NET and VB.NET Application in Visual Studio.NET (with Rational XDE)

Runtime Memory Corruption Detection in User Code

This is the crowning glory of runtime analysis for native C/C++ applications. Runtime analysis not only helps to detect problems by displaying performance, memory, thread, and code coverage data in different views; it can also pinpoint the exact location in the user code where the error is generated and/or caused. Runtime memory corruption detection is essential to ensure proper functioning and high quality of native C and C++ applications on all platforms. The Rational tools for runtime memory error detection are Rational Purify and Rational PurifyPlus. Again, let's look at some examples.

Error Detection Example 1: Rational Purify Memory Error and Memory Leak Reports

Rational Purify can pinpoint the exact line of code where a developer has created a memory error. It doesn't even need source files to provide this information; Rational Purify detects errors in memory and uses debug information to trace these errors back to the responsible lines of code (see Figure 10).

Figure 10: Rational Purify Memory Error and Memory Leak Report for a Visual C++ Application

In this particular example, the developer forgot to take the termination string into consideration when building an array variable. This error was causing the release build of the application to crash, whereas the debug build worked fine. This example is just one of the many ways in which runtime analysis significantly reduces debugging time for C/C++ development.

Error Detection Example 2: Quantify Annotated Source

Rational Quantify has a unique capability to measure the distribution of time recorded for each user method per line of code. Quantify Annotated Source displays the time measured for each line of code, along with the time spent inside functions called on that line. This information can help you narrow a performance bottleneck down to an individual line of code (Figure 11).

Figure 11: Rational Quantify Annotated Source for a Mixed Visual Basic 6 and Visual C++ Application in Visual Studio 6

Error Detection Example 3: Purify Object and Reference Graph

In Java and .NET managed code, it is not possible to make runtime memory errors such as out-of-bounds reads and writes or reads and writes of freed memory, because the automatic memory management in the runtime subsystem prevents developers from directly accessing allocated memory. However, this automatic memory management doesn't prevent programmers from forgetting references to allocated objects. As long as a reference to such a dynamically allocated object exists somewhere in the code, the object stays in memory and is not cleaned up by the automatic memory management (the garbage collector). The net effect of such errors is the same as that of C/C++ leaks: the memory becomes unavailable to this and all other processes running on the host operating system. By doing runtime analysis with Rational Purify, however, you can pinpoint the exact line of code where the reference to the object in question was created (Figure 12).

Figure 12: Rational Purify Object and Reference Graph for a Java Application

Documentation of Runtime Behavior

Yet another way to leverage runtime analysis is by documenting the application's runtime behavior for future use. This helps you assess the overall quality of the project and measure the influence of newly introduced features and code changes on overall application performance, reliability, and test harness completeness. This advanced way of practicing runtime analysis involves collecting runtime data for each iteration of the component or application under development and analyzing the data at different stages in the project lifecycle. This information can help in determining overall project quality as well as the effect of new feature additions and bug fixes on overall quality.

When you use runtime analysis data together with source control tools such as Rational® ClearCase®, you can easily detect which changes to the source code database are responsible for a faulty build and/or failure of automated tests; you will know which portions of the source code base were changed between the successful set of tests and the set of tests that failed. Not only that: You can identify the owner of those code changes and the exact times and dates they were introduced.

Advanced runtime analysis tools such as Rational PurifyPlus provide features to analyze multiple test runs by, for example, allowing the user to merge code coverage data from various tests or test harnesses, or to create separate data sets for comparisons of consecutive iterations of test measurements, as shown in Figure 13.

Figure 13: Rational Quantify Compare Runs Report

In Figure 13, Rational Quantify compares two data sets and highlights chains of calls where performance has improved (green line) and chains of calls where performance has dropped (red line). The calculated data is available in both the Call Graph view and in the more detailed Function List view.

Even if you are not in a position to create an automated test environment, you can still automate data analysis by taking advantage of runtime analysis data saved as ASCII files. Figure 14 shows an example of a performance profile imported into Microsoft Excel.

Figure 14: Rational Quantify Performance Report Imported into Excel

You can easily automate data analysis in Excel by creating simple Visual Basic applications, or with any of the popular scripting languages: Perl, WSH, JavaScript, and so on. PurifyPlus for UNIX comes with a set of scripts that can help you manage and analyze data collected from various tests.

Runtime Analysis: The Emphasis Is on Quality

Runtime analysis expands standard software development activities along one key dimension: concern for quality. It paves the way for achieving higher software quality through better understanding of the internal workings of an application under development. Remember: Source code that compiles is not proof of quality; detailed, reliable, and precise runtime performance, memory utilization, and thread and code coverage analysis data are the only way to determine that an application is free of serious errors and will perform efficiently.


Editor's Note: This article originally appeared in The Rational Edge.

