Tackling the multichannel testing challenge

Testing that moves between interfaces, from mobile to web and back again

With the viral spread of mobile and web-enabled applications come new challenges in multichannel testing: interleaving a single test scenario across several interfaces. Business processes, and even individual tasks within those processes, are executed on a wider variety of platforms, and it's becoming a requirement to move seamlessly between interfaces, particularly from mobile to web and back again. Looking at approaches that have worked in the past leads to a discussion about tackling the future.


Monica Luke (mluke@us.ibm.com), Lifecycle Scenario Architect, IBM

Monica Luke has almost 20 years of experience in software engineering. She joined IBM Rational software nine years ago, in the test organization. Since then, Monica has led several test automation teams, held the role of test automation architect, and earned an Outstanding Technical Achievement Award for a test automation framework that is widely used internally at IBM. In 2010, she moved into the IBM Rational Strategic Offerings team, helping to drive integrations that accelerate client value across the Collaborative Lifecycle Management tools, including the recorded demos for the "Five ALM Imperatives," which are available at jazz.net/blog. In 2012, Monica is leading the effort to accelerate agile testing in a Collaborative Lifecycle environment with the Green Hat technology.



18 September 2012


Multichannel describes applications that have more than one interface. It's becoming more common as we have evolved from desktop to web-based computing and into the world of mobile computing. There are more and more interfaces for the same application, thanks to the combination of devices (tablets, phones, laptops, desktop computers) and ways to interact with them (device-specific "apps," browsers, and traditional client applications). Consider, for example, banking applications that use the same business logic for a web application, a mobile application, and perhaps a command-line interface (CLI). As service-oriented architecture (SOA) and web services become more prevalent, much of the work that integrators do is recombining existing services with new front ends, but the business logic (the service itself) remains the same.

In the same way that development teams are reusing code to reduce maintenance costs and increase productivity, test teams need ways to reuse test scenarios and automation to keep up.

Meeting the challenge of multichannel testing

A few years ago, I was the test automation architect responsible for building all of the automation for not one but two applications that had multiple interfaces. Both had legacy "native" user interfaces built with Microsoft Windows 32-bit MFC (Microsoft Foundation Class) controls, a web interface that used JavaScript, ASP, and JSP (Active Server Pages and JavaServer Pages), a newer Eclipse SWT (Standard Widget Toolkit) interface, and a command-line interface. Of course, it's impossible to find a single automation tool that can execute against all of those interfaces, but let's leave that aside for the moment.

When programming, you are always focused on reducing the amount of code to maintain by fostering reuse. With object-oriented programming and refactoring, there is rarely a good reason to have the same code in more than one place. Testing multiple interfaces from a single test automation code base, however, required designing an architecture first. For one thing, although these were all interfaces for the same application, not all of them surfaced the same functionality, let alone in the same way. But there were many customer-focused scenarios (use cases) that were meaningful to test across all of the interfaces.

However, the test teams responsible for designing the test cases and test plans didn't think about their testing this way. In fact, they were disconnected, separated into silos according to the interface they were working on. The team that built and tested the CLI thought they needed only a handful of customer scenario tests; they were focused primarily on unit tests and did not really consider a customer flow through the CLI. The test team responsible for the Eclipse UI wanted a large number of UI features and functions automated, and they had a long list of test cases, completely focused on customer flow, to achieve this goal. But why couldn't we use this information, painstakingly compiled by the application's subject matter experts (SMEs), to test all of the interfaces?

A hierarchical approach

Typical test automation frameworks that use object-oriented programming (OOP) abstract the implementation details of the control set rather than the conceptual action expressed through the controls. This is the approach that many commercial GUI automation tools take. For example, because all text fields accept text, you define a Textfield class with a setText(String) method and use it against every version of the application. But that doesn't always work when you build test automation to run across interfaces: what happens when one GUI uses a radio button and another uses a check box? You cannot rely on the interfaces being the same for the same operation. Figure 1 shows this traditional OOP approach.

Figure 1. GUI control hierarchy
(Diagram: a Textfield class with a setText(String arg0) method at the top of the control hierarchy.)
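
A minimal Java sketch of this control-centric hierarchy, with illustrative class names and stubbed widget interaction:

// Control-centric abstraction: one class per widget type, shared across
// every version of the application. All names here are illustrative.
abstract class TextField {
    abstract void setText(String text);
}

class WebTextField extends TextField {
    @Override
    void setText(String text) {
        // locate the element in the browser and type into it (stubbed)
        System.out.println("web: set text to '" + text + "'");
    }
}

class SwtTextField extends TextField {
    @Override
    void setText(String text) {
        // locate the SWT widget and type into it (stubbed)
        System.out.println("swt: set text to '" + text + "'");
    }
}

Note that this hierarchy has nowhere to put an operation that one interface expresses as a radio button and another as a check box; the abstraction is the widget, not the intent.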

In our case, the interfaces varied widely, but the operations and business processes represented were substantially the same. To maximize reuse, we settled on a business logic hierarchy (see Figure 2) so that test scenarios could be shared across multiple interfaces. This not only maximized code reuse; it also meant relying on the test tools for managing the GUI interfaces, which is exactly what they are designed to do. Figure 2 shows the approach; you might recognize the abstract factory pattern.

Figure 2. Business logic hierarchy
(Diagram: the core application's abstract classes at the top of the hierarchy, with interface-specific concrete classes beneath them.)

By taking a business logic approach, every flow through the application could be expressed as the set of operations defined as methods on the abstract classes. Although each interface might implement an operation with different steps, all of the interfaces allowed, understood, and needed the same operations. This amounted to building an application-specific test framework that represented the business tasks of the application under test. The approach meant defining a set of objects that encapsulate the data needed to perform operations within the application, and then defining methods on those objects that describe the operations and collect any additional data an operation needs exclusively, such as what is defined by the code snippet in Figure 3.

Figure 3. Code example for abstract class
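What follows is a minimal Java sketch of such an abstract class; the names and signatures are illustrative rather than the original code.

// Business logic abstraction: a query the user can create, run, edit,
// and rename. Names and signatures are illustrative.
abstract class Query {
    protected final String parentLocation; // where the query lives in the application
    protected String name;                 // the query's current name

    protected Query(String parentLocation, String name) {
        this.parentLocation = parentLocation;
        this.name = name;
    }

    // Operations meaningful to a user; no user-interface assumptions here.
    public abstract void create();
    public abstract void run();
    public abstract void edit();

    // rename collects the extra data it needs (the new name); on success
    // it updates the object's own name so later operations keep working.
    public void rename(String newName) {
        if (doRename(newName)) {
            this.name = newName;
        }
    }

    // Interface-specific concrete classes implement the actual steps.
    protected abstract boolean doRename(String newName);
}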

Query is an abstract class that collects the data needed to create and execute queries, along with the interesting operations: run, create, edit, and rename. The rename method requires the additional parameter of the new name, but when it succeeds, it automatically updates the name value of the query object. There are no assumptions about the user interface at this level; the user interface details are expressed only in the interface-specific concrete classes. To execute on a given interface, you instantiate a concrete class at run time for that interface and call the operation, which looks something like this:

// The interface-specific factory (sketched below) supplies the concrete Query
Query myQuery = factory.createQuery(parent_location, findRecords);
myQuery.rename(renamedQuery);
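
Here, factory is an instance of the abstract factory pattern that Figure 2 hints at. A minimal sketch, assuming hypothetical class names, a hypothetical test.interface system property, and concrete subclasses such as WebQuery and CliQuery (a CliQuery sketch appears in the next section; WebQuery would be analogous):

// One factory per interface; each produces the concrete business objects
// for that channel. All names here are illustrative.
interface QueryFactory {
    Query createQuery(String parentLocation, String name);
}

class WebQueryFactory implements QueryFactory {
    public Query createQuery(String parentLocation, String name) {
        return new WebQuery(parentLocation, name); // drives the browser
    }
}

class CliQueryFactory implements QueryFactory {
    public Query createQuery(String parentLocation, String name) {
        return new CliQuery(parentLocation, name); // drives the command line
    }
}

class InterfaceSelector {
    // Pick the interface at run time, e.g., via -Dtest.interface=cli
    static QueryFactory chooseFactory() {
        return "cli".equals(System.getProperty("test.interface"))
                ? new CliQueryFactory()
                : new WebQueryFactory();
    }
}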

Application-based framework

As a consequence of defining abstract business logic classes that described the operations to be tested, it was possible to recombine the defined methods into new flows and to specify at run time which interface to run them on. This is a powerful combination that accomplishes several things (a combined example follows the list):

  • Operations for the application are defined only once
  • GUI elements are discovered and manipulated only once
  • Test scenarios designed for one interface can be run on another
  • Maintenance is dramatically reduced, because reuse increases
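
Putting the pieces together, the same scenario can then be pointed at any implemented interface. A sketch of such a combined script, reusing the hypothetical classes from the earlier sketches:

class RenameScenario {
    public static void main(String[] args) {
        // Run with -Dtest.interface=cli (or web) to choose the channel
        QueryFactory factory = InterfaceSelector.chooseFactory();
        Query query = factory.createQuery("parent_location", "findRecords");
        query.create();
        query.run();
        query.rename("renamedQuery");
    }
}

Running the same class with a different property value exercises a different interface, which is exactly the reuse the list above describes.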

The trick, of course, was coding how to perform any given operation in each of the interfaces to be tested. That was a significant amount of work up front, but after one interface was finished, there were many test scripts that could be used for any other interface. This is all that had to be implemented for each subsequent interface (a sketch of such a subclass follows the list):

  • Add the actual steps to execute the operation on that interface
  • Supplement the framework to cover any functionality not present in other interfaces
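
For example, covering a command-line interface might amount to a single concrete subclass of the Query sketch shown earlier; the command syntax below is invented for illustration:

// CLI-specific concrete class: only the steps differ, not the operations.
class CliQuery extends Query {
    CliQuery(String parentLocation, String name) {
        super(parentLocation, name);
    }

    @Override public void create() { runCommand("query create " + name); }
    @Override public void run()    { runCommand("query run " + name); }
    @Override public void edit()   { runCommand("query edit " + name); }

    @Override
    protected boolean doRename(String newName) {
        return runCommand("query rename " + name + " " + newName);
    }

    // Illustrative stub; a real framework would invoke the CLI and
    // verify its exit code and output.
    private boolean runCommand(String cmd) {
        System.out.println("cli> " + cmd);
        return true;
    }
}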

Another benefit of the approach was that, although still expressed as code, the scripts read like pseudocode at a high enough level of abstraction to be meaningful to non-programmer SMEs. This allows non-programmers to create new automated scripts, as well as to run and understand the test scripts delivered by the automation team.

Listing 1 is an example of code that interacts with Eclipse views.

Listing 1. Code that interacts with Eclipse views
Perspective resource = new Perspective("Resource");
Perspective general = new Perspective("General");
// app represents the application under test (declared elsewhere)
app.start();
EclipseView bookmarks = new EclipseView("Bookmarks", resource);
EclipseView explorer = new EclipseView("Project Explorer", general);
// Open and reset the Resource perspective, then open both views
resource.open();
resource.reset();
bookmarks.open();
explorer.open();
// Give each view focus in turn
bookmarks.switchTo();
explorer.switchTo();
// Exercise the window states of the Bookmarks view
bookmarks.maximize();
bookmarks.restore();
bookmarks.minimize();
bookmarks.restore();
// Clean up: close the view, reset the perspective, and exit
bookmarks.close();
resource.reset();
app.exit();

This approach was adopted to reduce maintenance costs and ensure that, for any part of the GUI, there would be only one place to make adjustments in the test automation code. But the benefits of the approach became really clear after we implemented the concrete steps for the business logic on the CLI.

The team responsible for testing that feature had declared it "done," with no significant defects. The automation was implemented to ensure a regression suite for future releases. But when we ran the test scripts designed to test a GUI against the CLI implementation, we found more than 50 defects! And these were important defects that our customers certainly would have found.

As testers, we are always excited to find defects first, because of the obvious cost savings of finding problems early. Plus, it's exciting to build automation that does more than validate the product's stability. It is also important not to forget the business benefits in terms of reputation, customer satisfaction, and overall quality improvements delivered through this test automation approach.


Multichannel testing today

During the course of the projects described above, we could choose only a single interface at run time. A test script had to be completely automated, and it had to have a complete set of business logic operations implemented for each interface to be tested. This approach was reasonable at the time, because an end user would not typically switch from a desktop client to a web client in the middle of a set of operations.

Things have changed, though. It's not unreasonable to think about a test scenario that starts on a web client, moves through a mobile app and then back to the web, and perhaps ends with an independent verification against the database back end.

Take a scenario where someone is bidding on eBay. It's easier to search and research items using a desktop computer and a browser (web client). Once you decide on the item you want, you place a bid. You might leave your computer and get a notification on your smartphone that you've been outbid, so you update your bid from the phone. When you win the bid, you're back at the computer, so you enter payment information in the browser.

If this were a test scenario, rather than validating the success of the transaction from a portion of the interface on the screen (using screen scraping or object properties), it would be better to go to the database and check for the record directly. This interface-independent verification is more robust, as well as more stable. We might call this approach a "hybrid" test scenario. And, theoretically, hybrid scenarios should allow mixing in manual test execution to improve test coverage when some product areas are too hard or expensive to automate.
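
A minimal sketch of such an interface-independent check, assuming a JDBC-accessible back end and a hypothetical BIDS table (the real schema would come from the application's data model):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class BackendVerifier {
    // True if the back end recorded a winning bid for this item and bidder,
    // no matter which interface (web or mobile) performed the steps.
    static boolean winningBidRecorded(Connection db, String itemId, String bidder)
            throws SQLException {
        String sql = "SELECT COUNT(*) FROM BIDS "
                   + "WHERE ITEM_ID = ? AND BIDDER = ? AND STATUS = 'WON'";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, itemId);
            ps.setString(2, bidder);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1) > 0;
            }
        }
    }
}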

So I've started dreaming about how we might implement a scenario like that by cobbling together different interfaces and operation implementations into one complex flow that moves seamlessly between them. To be sure, there are challenges, and they might turn out to be insurmountable. It was relatively simple to know what data to move between operations when the runtime environment was constrained; it's not immediately clear what data needs to be transferred between operations when the scenario also crosses interfaces. In the example above, the scenario requires a login on the web client and on the mobile phone. Authentication is an obvious problem, but there are bound to be many more complications in implementing these hybrid test scenarios.

But that's okay, it's just Mount Everest. Let's go! Tell us what you think or add your ideas, either by joining the discussion in the IBM Rational Functional Tester Network on LinkedIn or by submitting a comment to this article.
