AOP@Work: Unit test your aspects

Eight new patterns for verifying crosscutting behavior

AOP makes it easier than it's ever been to write tests specific to your application's crosscutting concerns. Find out why and how to do it, as Nicholas Lesiecki introduces you to the benefits of testing aspect-oriented code and presents a catalog of patterns for testing crosscutting behavior in AspectJ.

Nicholas Lesiecki (ndlesiecki@apache.org), Software engineer/Programming instructor, Google

Nicholas Lesiecki is a recognized expert on AOP in the Java language. In addition to coauthoring Mastering AspectJ (Wiley, 2003), Nick is a member of AspectMentor, a consortium of experts in aspect-oriented software development. He has spoken about applying AspectJ to testing, design patterns, and real-world business problems in such venues as SD West, OOPSLA, AOSD, and the No Fluff Just Stuff symposium series. He currently serves Google as a Software Engineer and Programming Instructor.



01 November 2005

The widespread adoption of programmer testing over the past five years has been driven by the demonstrable productivity and quality of the resulting code. Prior to the advent of aspect-oriented programming (AOP), however, it was difficult to write certain kinds of tests for crosscutting behavior such as security, transaction management, or persistence. Why? Because this behavior was not well modularized. It's difficult to write a unit test if there's no unit to test. With the popularization of AOP, it has become both possible and desirable to write tests that check crosscutting concerns independent of their realization in a target system.

In this article, I introduce a catalog of techniques for testing crosscutting behavior implemented with aspects. I focus on unit tests for aspects, but I also present other patterns that can help you to build confidence in your aspect-oriented applications. As you'll quickly discover, testing aspects involves many of the same skills and concepts as testing objects, with many of the same practical and design benefits.

I've written this article based on my experiences developing in AspectJ. Many of the concepts should be portable to other AOP implementations, but some are language specific. See Download to download the source code for the article; see Resources to download AspectJ and the AJDT, which you will need to follow the examples.

About this series

The AOP@Work series is intended for developers who have some background in aspect-oriented programming and want to expand or deepen what they know. As with most developerWorks articles, the series is highly practical: you can expect to come away from every article with new knowledge that you can put immediately to use.

Each of the authors contributing to the series has been selected for his leadership or expertise in aspect-oriented programming. Many of the authors are contributors to the projects or tools covered in the series. Each article is subjected to a peer review to ensure the fairness and accuracy of the views expressed.

Please contact the authors individually with comments or questions about their articles. To comment on the series as a whole, you may contact series lead Nicholas Lesiecki. See Resources for more background on AOP.

Unit testing aspected code

A good automated test suite for an application should look like the diagram in Figure 1: Isolated tests for individual classes form a broad base that gives lots of test coverage and rapid failure isolation. On top of those sit integration and end-to-end system tests, which verify that the units work in concert. Together, these layers (if they're well constructed and frequently run) can boost your confidence in the behavior of an application.

The unit tests at the base of the pyramid are important for several reasons. First, they help you to stimulate corner cases that may be difficult or tedious to reproduce in an integration test. Second, because they involve less code, they often run faster (and thus you're likely to run them more frequently). Third, they help you think through the interface and requirements of each unit. Good unit tests encourage loose coupling between units, a prerequisite for getting each unit running in a test harness.

Figure 1. Layered tests
Illustration of a layered unit test

But what about crosscutting behavior? Imagine a customer requirement: "Check the caller's security credentials before executing any operation on the ATM class." Certainly you could (and should) write an integration test for that requirement. However, non-aspect-oriented development environments make it difficult to write a unit test or otherwise isolate the behavior of "checking security before an operation." This is because the behavior diffuses into the target system and is difficult both for humans to pin down and for tools to analyze. If you develop with aspects, however, you could represent such behavior as advice, applied to any operation that matches a certain pointcut. Now the behavior has first-class representation as a unit, and you can test it in isolation or visualize it using your IDE.


Where aspected code fails

Before diving into a catalog of techniques for unit testing aspects, I should briefly discuss types of failure. Crosscutting behavior breaks down into two major components: what the behavior does (I'll call this the crosscutting functionality) and where the behavior applies (I'll call this the crosscutting specification). To return to the ATM example, the crosscutting functionality checks the caller's security credentials. The crosscutting specification applies that check at every public method on the ATM class.

For real confidence in your implementation, you need to check both the functionality and the specification (or, loosely speaking, the advice and the pointcut). As I proceed with the examples, I'll highlight whether a given test pattern verifies the crosscutting functionality, the specification, or both.

Note that I will focus on testing pointcuts, advice, and the code that supports them. Intertype declarations (and other aspect features) are certainly testable. Some of the techniques I present in this article could be applied to them with minor changes. They also have their own family of techniques, many of which are straightforward. In the interest of saving space, however, I decided not to cover them explicitly in this article.


A catalog of test patterns

I've structured this article as a catalog of patterns for testing aspect-oriented code. For each pattern, I describe which failure types it applies to, summarize the pattern, provide an example, and discuss the benefits and drawbacks of the pattern. The catalog is divided into four sections:

  • Testing integrated units: This section presents a pattern for testing a piece of an integrated system (in other words, testing both your aspects and non-aspect classes together). This technique is the only way to gain confidence in crosscutting behavior if you don't use aspects and remains a critical tool when you do use them.
  • Using visual tools: The two patterns described here leverage AspectJ's IDE support for Eclipse, also known as AJDT. Using visual tools to inspect your application's crosscutting structure is not a testing technique, strictly speaking. However, it will help you to understand and gain confidence in your application's crosscutting concerns.
  • Using delegation: This section demonstrates two patterns that help you tease apart the two failure types previously mentioned. By factoring some logic out of your advice and into a helper class (or method), you can write tests that check your application's crosscutting behavior independent of its crosscutting specification.
  • Using mock targets: This final section includes three patterns introducing "mock targets," classes that mimic real advice targets and allow you to test both join point matching and advice behavior without integrating your aspect into a real target.

The Highlighter aspect

To demonstrate the patterns in the catalog, I use an aspect that implements search-term highlighting (that is, highlighting a user's query terms in the search results). I implemented an aspect very similar to the one I present here at a previous job. Our system had to highlight terms on the results summary page, the detail page, and a number of other places in the application. The fact that it affected so many places made the behavior an ideal candidate for an aspect. The one I present in this article only crosscuts one class, but the principles are the same. Listing 1 contains one implementation of the Highlighter aspect:

Listing 1. Highlighter defines highlighting behavior
public aspect Highlighter {

  /* ITDs to manage highlighted words */
  private Collection<String> Highlightable.highlightedWords;

  public Collection<String> Highlightable.getHighlightedWords() {
    return highlightedWords;
  }

  public void Highlightable.setHighlightedWords(
      Collection<String> highlightedWords) {
    this.highlightedWords = highlightedWords;
  }

  public pointcut highlightedTextProperties() :
    (
      execution(public String getProduct())
    || execution(public String getTitle())
    || execution(public String getSummary())
    );

  String around(Highlightable highlightable) :
    highlightedTextProperties() && this(highlightable)
  {
    String highlighted = proceed(highlightable);
    for (String word : highlightable.getHighlightedWords()) {
      Pattern pattern = patternForWord(word);
      Matcher matcher = pattern.matcher(highlighted);
      highlighted =
        matcher.replaceAll("<span class=\"bold\">$0</span>");
    }
    return highlighted;
  }

  private Pattern patternForWord(String word) {
    return Pattern.compile("\\b\\Q" + word + "\\E\\b",
        Pattern.CASE_INSENSITIVE);
  }
}

The Highlighter aspect captures the return value of a join point and replaces it with a highlighted version of the same. It chooses which words to highlight based on a collection of highlighted words stored in an intertype field on the Highlightable interface. You can apply the Highlightable interface to any class that needs to participate in the highlighting behavior, either in the class declaration or with a declare parents statement.
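As a sketch of the declare parents approach, assume the SearchResult class used in the tests below is an ordinary bean with no knowledge of highlighting; a small companion aspect (the name HighlightableMarker is illustrative, not from the article's download) could opt it in:

```aspectj
// Illustrative sketch: SearchResult remains oblivious to highlighting
public class SearchResult {
  private String title;
  private String summary;
  public String getTitle() { return title; }
  public void setTitle(String title) { this.title = title; }
  public String getSummary() { return summary; }
  public void setSummary(String summary) { this.summary = summary; }
}

// The marker aspect applies the interface; the ITD field and
// accessors declared in Highlighter then become available on SearchResult
public aspect HighlightableMarker {
  declare parents : SearchResult implements Highlightable;
}
```

With the marker in place, test code can call result.setHighlightedWords(...) on a SearchResult even though the class source never mentions that method.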

I chose a very simple pointcut for the initial version of the example. Later in the article, I rewrite the pointcut as I demonstrate some of the testing patterns.


I. Testing integrated units

Addresses: Crosscutting functionality and specification

Summary: As I explained in the introduction, aspects submit easily to integration tests. This pattern is very simple: write a test against your system as you would if the behavior were not implemented with aspects. In other words, put objects together, set up state, call methods, and verify the results. The key is to write a test that will fail if the aspect misbehaves or does not apply to the join points you intend it to. If you want the aspect to affect many join points, pick a few representative examples.

Example: An integration test for the Highlighter

In Listing 2, the thing to note is that this test operates just like a test for an application without aspects would. It puts objects together, sets up state, calls methods, and verifies the results.

Listing 2. An integration test for the Highlighter
public class HighlightSearchResultsIntegrationTest extends TestCase {
  Collection<String> words;

  private SearchResult result;

  public void setUp() throws Exception {
    super.setUp();
    words = new ArrayList<String>();
    words.add("big");
    words.add("grrr");

    result = new SearchResult();
    result.setTitle("I am a big bear!");
    result.setSummary("grrr growl!");
    result.setHighlightedWords(words);
  }

  public void testHighlighting() {
    String expected = "I am a <span class=\"bold\">big</span> bear!";
    assertEquals(expected, result.getTitle());
    expected = "<span class=\"bold\">grrr</span> growl!";
    assertEquals(expected, result.getSummary());
  }
}

Benefits and drawbacks

Integration tests have similar costs and benefits whether or not you are using AOP. In either case, the key benefit is that you are verifying the high-level intent of your code (in other words, that the title and summary are highlighted appropriately). This helps when you perform major refactoring. It also drives out bugs that only show up when components interact.

Relying only on integration tests does lead to a number of problems, however. If the HighlightSearchResultsIntegrationTest failed, it could be because the aspect failed to run at all, because the advice logic had a bug, or because of the other involved classes (like the SearchResult). In fact, I encountered this exact situation while developing the code for the integration test example. I spent 20 minutes trying to understand why my aspect wasn't running, only to discover that I had an obscure problem with my regular expression!

Integration tests also require more complicated set up and assertions. This makes them harder to write than tests that isolate a single aspect. It's also hard to use integration tests to stimulate all of the edge cases that your code should handle properly.

Behavior that crosscuts a number of classes poses a particular problem for integration tests. Let's say that you wanted consistent exception handling for all of the classes in your application. You wouldn't want to test every class for this new behavior. Rather, you would want to select a representative example. But if you picked a specific domain class (say the Customer class) and tested the error handling aspect against it, you would risk muddying the intent of your test. Would the test verify Customer's behavior, or the application's error handling?


II. Using visual tools

One of the hard things about testing a widespread crosscutting concern is that it can advise so many join points. Executing and checking all the matches can be a real pain. (And testing for the reverse -- the accidental inclusion of an unintended join point -- is even harder.) Accordingly, the next two patterns show the benefits of supplementing normal tests with manual inspection of the crosscutting views available in tools such as AJDT. (The combination of AspectJ and AJDT provides the most visualization support as of this writing; however, other combinations, such as JBoss AOP and the JBoss IDE, provide good visualization tools as well.)

Pattern 1. Inspect crosscutting visually

Addresses: Crosscutting specification

Summary: Use the AJDT's cross-references view as you develop your aspect to see which join points it is likely to advise. Verify manually that the list is complete and does not include join points that should be omitted.

Example: Identifying an unwanted match

Let's say you want to highlight the title, product, and summary of your search results. Rather than enumerating each method as I did in Listing 1, you write what you hope will be a more robust pointcut. (For more on the art of the robust pointcut, see Adrian Colyer's blog entry in Resources.) The following pointcut seems to capture the intent of the original:

public pointcut highlightedTextProperties() :
( 
   execution(public String get*())
   && ! execution(public * Highlightable.*(..))
);

When you inspect the pointcut using the AJDT's cross-references view, however, you see what's shown in Figure 2:

Figure 2. Four advised join points in the AJDT cross-references view
Four advised joinpoints shown in the AJDT cross-references view

Notice that there is an extra match: SearchResult.getWebsite(). You know that the Website is not supposed to be highlighted, so you rewrite the pointcut to exclude that unintended match.

Benefits and drawbacks

Using AJDT's cross-references view to inspect crosscutting specifications has three major advantages. First, the cross-references view gives you instant feedback as you develop your aspects. Second, it lets you easily detect consequences that would be difficult to test for. (To write a test that verified that getWebsite() was not highlighted, you would need to either guess that getWebsite() was a likely source of error or check every String getter on SearchResult. The more unlikely the error, the harder it is to test against it preemptively.) Third, the automatically generated view can verify positive cases that would be tedious to verify in code. For example, if the search highlighter were to affect 20 join points by design, inspecting the cross-references view would be easier than writing a test for each join point.

The main drawback of using views for verification is that inspection cannot be automated. It requires programmer discipline. A hurried programmer could inspect Figure 2 and not catch the bug. (The next pattern presents a partial solution to this problem.) Another problem is that the crosscutting views only show matches based on static join point shadows. In other words, if you have a pointcut that relies on runtime checks, such as cflow() or if(), the cross-references view cannot say for sure that the join point will match at run time, only that it is likely to.

Pattern 2. Inspect changes with crosscutting comparison tools

Addresses: Crosscutting specification

Summary: Use the crosscutting comparison feature of AJDT to save a crosscutting map of your project before a refactoring or another code change. Save another map after you complete the change. (You could also save a map nightly to compare against.) Compare the maps in the crosscutting comparison tool to detect any unwanted changes to the join points affected by your aspects. Note that as of this writing, only AJDT provides a crosscutting comparison tool.

Example: Rewriting a pointcut

Let's say to correct the problem shown in the previous example, you've decided to change the pointcut to use Java 5 annotations, as shown here:

public pointcut highlightedTextProperties() :
       execution(@Highlighted public String Highlightable+.*());

You then add the annotation to the source at appropriate places, for example:

  @Highlighted
  public String getTitle() {
    return title;
  }

Your next step is to compare the snapshot of the project taken before the change with the one after the change and get the result shown in Figure 3. As you can see, the refactoring removed the advice match on getWebsite(), but also the match on getSummary(). (It looks as if you failed to add an annotation.)

Figure 3. Results of a change shown in the crosscutting comparison tool
Results of a change shown in the AJDT's crosscutting comparison tool

Benefits and drawbacks

This technique is really a refinement of the previous technique. By only showing the changes, the crosscutting comparison tool can help prevent information blindness. Also, whereas the cross-references view requires that you select advice or a class that you wish to analyze, the crosscutting comparison tool lets you inspect changes from your entire project.

On the downside, the crosscutting comparison view can degrade if an aspect affects many join points. Consider an aspect that logs all public methods. Such an aspect would add dozens of new changes to the crosscutting view after even a day's worth of development, making it difficult to see other, more important changes. In an ideal world, the crosscutting comparison tool would be highly configurable, issuing warnings for changes to certain aspects and ignoring changes related to other aspects.


III. Using delegation

Aspects can and often do implement their crosscutting behavior using ordinary objects. You can leverage this separation of concerns to test their behavior separately from the crosscutting specification. The next two patterns illustrate how to employ delegation and mock objects to check both aspects of your aspect (pun intended).

Pattern 1. Test delegated advice logic

Addresses: Crosscutting functionality

Summary: If you have not already done so, delegate some or all of your advice logic to another class that you can test directly. (You can also delegate the behavior to a public method on the aspect if you choose.)

Example: Move the highlighting logic to another class

To better test the highlighting logic in isolation, you move it into a dedicated utility class:

  private HighlightUtil highlightUtil = new CssHighlightUtil();

  public void setHighlightUtil(HighlightUtil highlightUtil) {
    this.highlightUtil = highlightUtil;
  }

  String around(Highlightable highlightable) :
      highlightedTextProperties() && this(highlightable)
  {
    String result = proceed(highlightable);
    return highlightUtil.highlight(result,
        highlightable.getHighlightedWords());
  }

By extracting the highlighting logic, you can write unit tests for it by calling methods on the HighlightUtil class.
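As a sketch of such a unit test, the following plain-Java example pairs an assumed CssHighlightUtil implementation (mirroring the regex logic from Listing 1; the article does not show its actual source) with direct checks on edge cases:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Assumed implementation: mirrors the regex logic from Listing 1
class CssHighlightUtil {
  public String highlight(String text, Collection<String> words) {
    String highlighted = text;
    for (String word : words) {
      Pattern pattern = Pattern.compile("\\b\\Q" + word + "\\E\\b",
          Pattern.CASE_INSENSITIVE);
      Matcher matcher = pattern.matcher(highlighted);
      highlighted = matcher.replaceAll("<span class=\"bold\">$0</span>");
    }
    return highlighted;
  }
}

public class CssHighlightUtilTest {
  public static void main(String[] args) {
    CssHighlightUtil util = new CssHighlightUtil();

    // The happy path from the integration test, now aspect-free
    check(util.highlight("I am a big bear!",
            Collections.singleton("big")),
        "I am a <span class=\"bold\">big</span> bear!");

    // Edge cases are easy to stimulate without weaving anything
    check(util.highlight("bigger", Collections.singleton("big")),
        "bigger"); // \b keeps partial words unhighlighted
    check(util.highlight("no terms",
            Collections.<String>emptyList()),
        "no terms");
    check(util.highlight("Big big", Arrays.asList("big")),
        "<span class=\"bold\">Big</span> "
            + "<span class=\"bold\">big</span>"); // case-insensitive

    System.out.println("all checks passed");
  }

  private static void check(String actual, String expected) {
    if (!actual.equals(expected)) {
      throw new AssertionError(expected + " != " + actual);
    }
  }
}
```

If such a check fails, you know the bug is in the highlighting logic itself, not in the pointcut or the weaving.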

Benefits and drawbacks

This technique makes it easier to stimulate edge cases in your domain logic. It also helps to isolate bugs; if the test for the helper fails, you know that it, not the aspect, is to blame. Finally, delegating logic often leads to a cleaner separation of concerns. In the example, by extracting it to another class, text highlighting becomes an operation that other parts of the system can use independently of this aspect. In turn, the aspect gains the flexibility to use alternate highlighting strategies (CSS highlighting for HTML, all-caps highlighting for plain text).

On the negative side, this technique doesn't work when the logic is difficult to extract. For example, it may be best to leave simple logic inlined. Also, some aspects store state, either locally or in ITDs on the classes they advise. State storage often forms a significant part of the logic of the aspect, and it can't always be moved cleanly into a helper.

Pattern 2. Use mock objects to record advice triggering

Addresses: Crosscutting specification and functionality

Summary: This technique naturally complements the previous one. If you have extracted advice behavior to another class, you can substitute a mock object for your helper object and verify that the advice triggers at the right join points. You can also verify that the advice passes the correct context to the helper, either directly from the advice parameters or from previously stored state.

Note: If you need an introduction to mock objects, see Resources.

Example: Using a mock HighlightUtil to test the Highlighting aspect

You've already seen how the aspect delegates to another class to handle the actual text highlighting. This paves the way for injecting a different implementation of the highlighter into the aspect during the test. The code in Listing 3 does this by leveraging the JMock library. (See Resources.)

Listing 3. Using JMock to test calls from the aspect
public class DelegatedHighlightingUnitTest extends MockObjectTestCase {

  Collection<String> words;
  private HighlightUtil original;
  private SearchResult result;
  private Mock mockUtil;

  public void setUp() throws Exception {
    super.setUp();

    setUpMockHighlightUtil();

    words = Collections.singleton("big");

    result = new SearchResult();
    result.setTitle("I am a big bear!");
    result.setHighlightedWords(words);
  }

  private void setUpMockHighlightUtil() {
    original = HighlightResults.aspectOf().getHighlightUtil();
    mockUtil = mock(HighlightUtil.class);
    HighlightResults.aspectOf().setHighlightUtil((HighlightUtil)mockUtil.proxy());
  }


  public void testHighlightUtilAppliedToTitleOfSearchResult() {
    mockUtil.expects(once())
      .method("highlight")
      .with(eq("I am a big bear!"), eq(words));
    result.getTitle();
  }
}

The setUp() method instantiates the mock object and injects it into the aspect. The test method tells the mock to expect a call to a method with the name "highlight" taking two arguments: the return value from getTitle() and the words list stored on the SearchResult. Once the expectation is set, the test calls the getTitle() method, which should trigger the aspect and result in the expected call to the mock. If the mock does not receive the call, it will fail the test automatically during tear down.

Note that the setUp() method stores a reference to the original HighlightUtil. That's because the aspect, like most aspects, is a singleton. Because of this, it's important to undo the effects of the mock injection during tear down; otherwise, the mock could persist in the aspect and affect other tests. The correct tear down for this aspect is shown here:

  @Override
  protected void tearDown() throws Exception {
    try {
      HighlightResults.aspectOf().setHighlightUtil(original);
    } finally {
      super.tearDown();
    }
  }

Benefits and drawbacks

This pattern complements the previous one, except that it tests the crosscutting specification and context-handling of the aspect rather than the crosscutting behavior. Because you are not burdened by checking for indirect side effects in the outcome of the aspect, you can more easily stimulate corner cases in the join-point matching and context-passing behavior.

It's important to note that the benefits and drawbacks of delegating logic and then testing using mocks are similar whether you're applying the technique to objects or aspects. In both cases, you separate concerns and then validate each concern in a more isolated way.

There's one problem unique to aspects when it comes to injecting mocks. If you use singleton aspects (the default), any change you make to an aspect's fields, such as replacing one with a mock, must be undone at the end of the test. (Otherwise, the mock will hang around and may affect the rest of the system.) This tear-down logic is a pain to implement and remember. Writing a test-cleanup aspect to automatically reset aspects like the one in the example after each test is conceptually simple, but the details are beyond the scope of this article.


IV. Using mock targets

In this final section, I introduce a term I invented to describe a type of test helper that is useful in writing aspect tests: mock targets. In the pre-aspect world, a mock object denoted a class (handwritten or dynamically generated) that imitated a collaborator for some class you were attempting to test. Similarly, a mock target is a class that imitates a legitimate advice target for some aspect you are attempting to test.

To create a mock target, write a class that has structure or behavior similar to what you would like to advise in production. For example, if you are interested in the highlighting of text returned by a getter, you could write a mock target like this one:

//an inner class of the enclosing test case
public class HighlightMockTarget implements Highlightable {
  public String getSomeString() {
      return "I am a big bear!";
  }
}

Then, you would write your test case to verify that the aspect correctly interacts with the target, as shown in Listing 4:

Listing 4. Interacting with a mock target to test advice
public void setUp() throws Exception {
  super.setUp();

  setUpMockHighlightUtil();

  words = Collections.singleton("big");
  mockTarget = new HighlightMockTarget();
  mockTarget.setHighlightedWords(words);

}

//mock setup/tearDown omitted

public void testHighlighting() {
  mockUtil.expects(once())
    .method("highlight")
    .with(eq("I am a big bear!"), eq(words))
    .will(returnValue("highlighted text"));
  String shouldBeHighlighted = mockTarget.getSomeString();
  assertEquals("highlighted text", shouldBeHighlighted);
}

Note that in this example, I combine mock targets with mock objects (as described in Section III, Pattern 2). Mock targets underpin the next three techniques.

Pattern 1. Test advice by extending an abstract aspect and providing a pointcut

Addresses: Crosscutting functionality

Summary: Prework: If necessary, rewrite your aspect to split it into an abstract aspect and a concrete aspect which extends it and concretizes one or more pointcuts.

Once you have an abstract aspect, create a mock target inside your test class. Create a test aspect that extends your abstract aspect. Have the test aspect supply a pointcut that targets your mock target explicitly. The test verifies that the advice in the aspect succeeds by either looking for a known side-effect of the advice or by using a mock object.

Example: Extending AbstractHighlighter

Assume that you've already written the test code from the previous section. To make the test pass, you would have to split the Highlighter aspect into an abstract aspect and a subaspect, as shown here:

public abstract aspect AbstractHighlighter {
  public abstract pointcut highlightedTextProperties();

  //... aspect continues
}

public aspect HighlightResults extends AbstractHighlighter {
  public pointcut highlightedTextProperties() :
  (
  //...define pointcut as before
  );
}

Next, you would extend the AbstractHighlighter aspect again with an aspect just for your test case. Here I show it as a static inner aspect of the test case:

private static aspect HighlightsTestClass extends AbstractHighlighter {
   public pointcut highlightedTextProperties() :
     execution(public String HighlightMockTarget.*(..));
}

This aspect concretizes the highlightedTextProperties pointcut by selecting all method executions on the mock target.

Benefits and drawbacks

Clearly, the test exercises an artificial situation. You're testing a fake aspect against a fake object. However, this simply means that you are not testing the real pointcut. You can still verify the advice and ITD code specified by the abstract aspect. In the example, the test verifies that the advice correctly marshals data from ITDs as well as the return value of the original join point, passes it to a utility class, and returns the new result. That's a non-trivial amount of behavior. Using a mock target also makes the test clearer because test readers will not have to reason about the behavior of a real target as well as the behavior of the aspect. This sort of test is particularly useful if you're writing unit tests for an aspect library, because there will be no real targets until the aspect is woven into a separate application.

If you split your aspect to take advantage of this pattern, you may also be making it more extensible. If new parts of the system need to participate in the highlighting behavior, for instance, they can simply extend the now-abstract aspect and define a pointcut that covers the new situation. In effect, the abstract aspect is decoupled from the system it advises.
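For instance, a hypothetical subaspect could bring a new ReportLine class (an assumed name, not from the article) into the highlighting behavior without touching HighlightResults:

```aspectj
// Hypothetical extension; ReportLine is an assumed class name
public aspect HighlightReportLines extends AbstractHighlighter {
  // opt the new class into the ITD state from the abstract aspect
  declare parents : ReportLine implements Highlightable;

  public pointcut highlightedTextProperties() :
    execution(public String ReportLine+.get*());
}
```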

Pattern 2. Test pointcut matching with mock targets

Addresses: Crosscutting specification and functionality

Summary: This technique relates closely to the previous one. Instead of extending an abstract aspect, this time you write your mock target so that it matches a pointcut on the aspect to be tested. You can test that the pointcut is correct by checking whether the aspect advises the mock target. If the pointcut you wish to test is overly specific, you may need to rewrite it so that the mock target can more easily "subscribe" to the advice.

Example: Testing pointcuts based on a marker interface

Instead of making the highlighting aspect abstract, you could rewrite your pointcut so that it matches method executions on the Highlightable interface:

public pointcut highlightedTextProperties() :
  execution(public String Highlightable+.get*());

This broad pointcut matches any String getter on a Highlightable. Because the pointcut does not enumerate specific classes, it already matches the getSomeString() method on the mock target. The rest of the test stays the same.

Variation: Using an annotation

You could also write your pointcut to match partially based on Java 5.0 metadata. For example, the following revised pointcut matches method executions that are decorated with the @Highlighted annotation:

public pointcut highlightedTextProperties() :
  execution(@Highlighted public String Highlightable+.*());

// You can apply the annotation in the source or via the declare-annotation form
declare @method : public String SearchResult+.getTitle(..) : @Highlighted;
declare @method : public String SearchResult+.getProduct(..) : @Highlighted;

You can make the mock target match your new pointcut by adding the annotation to its getSomeString() method:

  @Highlighted
  public String getSomeString() {
    return "I am a big bear!";
  }
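For annotation-based matching to work, the @Highlighted annotation must survive into the class file so that the weaver (and any runtime test) can see it. The article does not show the annotation's definition, so the retention and target policies below are my assumptions; RUNTIME retention is the conservative choice because it also lets a test verify the annotation reflectively:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Minimal sketch of the @Highlighted annotation assumed by the pointcut.
// RUNTIME retention keeps it visible both to the weaver and to reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Highlighted {}

class AnnotatedMockTarget {
    @Highlighted
    public String getSomeString() {
        return "I am a big bear!";
    }
}

public class AnnotationDemo {
    public static void main(String[] args) throws Exception {
        // A reflective check confirms the annotation is present at run time,
        // which is the condition the execution(@Highlighted ...) pointcut
        // selects on.
        boolean present = AnnotatedMockTarget.class
                .getMethod("getSomeString")
                .isAnnotationPresent(Highlighted.class);
        System.out.println(present);  // prints "true"
    }
}
```

A quick reflective check like this can itself become a sanity test: if someone later drops the retention policy to SOURCE, the test fails before any confusing "why isn't my advice firing?" debugging begins.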

Benefits and drawbacks

This technique also clearly separates the testing of aspect behavior from the behavior of the target application, allowing the tests to be more self-contained. If your pointcuts were not already written to accommodate your mock targets, you could end up with a more decoupled aspect by rewriting them. By making your aspect general enough to affect a mock target inside a test class, you ensure that it's also easy for a real class to participate in the aspect's behavior.

Pattern 3. Verifying more complex pointcuts (a special case)

Addresses: Crosscutting specification and functionality

Summary: The previous mock target was simple, but you can also write mock targets to simulate complex join points (such as cflow()) or sequences of join points that you wish to affect.

Example: Simulating cflow

Let's say you wanted to turn off highlighting for downloaded reports. You could add a highlightExceptions pointcut to exclude any getters called by the ReportGenerator, as shown here:

public pointcut highlightedTextProperties() :
  execution(public String Highlightable+.get*())
  && !highlightExceptions();
  
public pointcut highlightExceptions() : 
  cflow(execution(* ReportGenerator+.*(..)));

Then you could write a mock ReportGenerator that called the HighlightMockTarget to test that no highlighting had occurred:

private class MockGenerator implements ReportGenerator {
  public void write(OutputStream stream) throws IOException {
    mockTarget.getSomeString();
  }
}

public void testNoHighlight() throws Exception {
  mockUtil.expects(never()).method("highlight");
  MockGenerator accessor = new MockGenerator();
  accessor.write(null);
}

This is a simple example, but you can imagine creating similar mock targets for more complex matching situations (for example, somePointcut() && !cflowbelow(somePointcut())). Visualization tools do not give good information about matching for pointcuts that use runtime checks such as cflow(), so checking such pointcuts with a few representative mock targets is worthwhile.
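The jMock-style expects(never()) call in the test above can also be expressed with a hand-rolled recording stub if you prefer to avoid a mocking library. The class and field names here are hypothetical (they are not from the article's sample code); the idea is simply to count invocations and assert that the counter does not advance along the excluded control flow:

```java
// Hypothetical hand-rolled stand-in for the jMock "never" expectation.
// RecordingHighlighter counts calls so a test can assert "never called"
// without a mocking library.
class RecordingHighlighter {
    int highlightCalls = 0;

    public String highlight(String text) {
        highlightCalls++;
        return "<b>" + text + "</b>";
    }
}

public class NeverCalledDemo {
    public static void main(String[] args) {
        RecordingHighlighter util = new RecordingHighlighter();

        // Positive control: on a normal (non-report) getter path, the advice
        // would call highlight() once.
        util.highlight("bear");
        if (util.highlightCalls != 1) {
            throw new AssertionError("expected exactly one highlight call");
        }

        // Report-generation path: with the highlightExceptions() pointcut in
        // place, the advice never fires inside the report cflow, so the
        // counter must not advance.
        int before = util.highlightCalls;
        // ... exercise mockTarget.getSomeString() inside the report cflow ...
        if (util.highlightCalls != before) {
            throw new AssertionError("highlighting occurred during report generation");
        }
        System.out.println("highlight calls: " + util.highlightCalls);  // prints "highlight calls: 1"
    }
}
```

The positive control matters: a "never called" assertion passes trivially if the wiring is broken, so pairing it with a path where the advice must fire guards against a vacuously green test.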


In conclusion

When I see untested code, I get the jibblies. Code without a good test suite is typically buggy, hard to change with confidence, and poorly factored. If you implement your crosscutting behavior with aspects, however, you gain new ways to test (and understand) your application's crosscutting concerns.

Testing aspects is a lot like testing objects. In both cases, you need to break the behavior into components that you can test independently. A key concept to grasp is that crosscutting concerns divide into two different areas. First, there is the crosscutting specification, where you should ask yourself what parts of the program the concern affects. Second, there is the functionality, where you should ask what happens at those points. If you are only using objects, these two areas intertwine as your concern tangles itself throughout your application. However, with aspects, you can target one or both of these areas in isolation.

Writing aspects to be testable yields design benefits parallel to those achieved by factoring object-oriented code for testability. For instance, if I move my advice body into an independently testable class, I can analyze the behavior without necessarily needing to understand the way it crosscuts the application. If I modify my pointcuts to make them more accessible to mock targets, I also make them more accessible to non-test parts of the system. In both cases, I increase the flexibility and pluggability of the system as a whole.

A while ago, I heard a rumor circulating that aspect-oriented programs couldn't be tested. Although that rumor has mostly died out, I still think of it as a challenge. I hope that this article has demonstrated that not only can you test aspects, but that when it comes to testing crosscutting, you're a lot better off if you've used aspects in the first place.

Acknowledgments

This article owes much to Ron Bodkin, Wes Isberg, Gregor Kiczales, and Patrick Chanezon who reviewed earlier drafts and provided helpful insights and corrections.


Download

Description: Article source (Eclipse 3.1/AJDT 1.3 project)
Name: j-aopwork11-source.zip
Size: 337 KB

