Multichannel describes applications that have more than one interface. It's becoming more common as we have evolved from desktop to web-based computing and into the world of mobile computing. There are more and more interfaces for the same application due to the combination of devices (tablets, phones, laptops, desktop computers) and ways to interact with them (device-specific "apps," browsers, and traditional client applications). For example, consider banking applications that use the same business logic for a web application, a mobile application, and maybe a CLI (command-line interface). As service-oriented architecture (SOA) and web services become more prevalent, in a lot of cases, the work that integrators are doing is recombining services with new front ends. But the business logic (aka service) remains the same.
In the same way that development teams are reusing code to reduce maintenance costs and increase productivity, test teams need ways to reuse test scenarios and automation to keep up.
Meeting the challenge of multichannel testing
When programming, you are always focused on reducing the amount of code to maintain by fostering reuse. With object-oriented programming and refactoring, there is rarely a good reason to have the same code in more than one place. However, it took designing an architecture before I could work out how to test multiple interfaces from a single test automation code base. For one thing, although these were all interfaces for the same application, not all interfaces surfaced the same functionality, let alone in the same way. But there were many customer-focused scenarios (use cases) that were meaningful to test across all of the interfaces.
However, the test teams responsible for designing the test cases and test plans didn't think about their testing this way. In fact, they were disconnected, separated into silos according to the interface they were working on. The team that built and tested the CLI thought they needed only a handful of customer scenario tests. They focused primarily on unit tests and did not really consider a customer flow through the CLI. The test team responsible for the Eclipse UI wanted a large number of UI features and functions automated, and they had a long list of test cases, completely focused on customer flow, to be executed toward that goal. But why couldn't we use this information, painstakingly compiled by the application's subject matter experts (SMEs), to test all the interfaces?
A hierarchical approach
Typical test automation frameworks that use object-oriented programming (OOP) abstract the implementation details of the control set rather than the conceptual action expressed through the controls. This is the approach that many commercial GUI automation tools take. For example, because all text fields accept text, you define a TextField class with a setText(String) method and use it on all versions of the application. But that doesn't always work when you build test automation that spans interfaces. What happens when one GUI uses a radio button and another uses a check box? You cannot rely on the interfaces being the same for the same operation. The following shows this traditional OOP approach.
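To make the contrast concrete, here is a minimal sketch of that control-level hierarchy. Every class and method name below is hypothetical, not taken from any particular tool:

```java
// Hypothetical control-level hierarchy: the abstraction is the widget,
// not the business action performed through it.
abstract class GuiControl {
    protected final String locator;
    protected GuiControl(String locator) { this.locator = locator; }
}

class TextField extends GuiControl {
    private String value = "";
    TextField(String locator) { super(locator); }
    // Every text field, on every interface, is assumed to accept text the same way.
    void setText(String text) { this.value = text; }
    String getText() { return value; }
}

// The abstraction breaks down when one GUI uses a check box where
// another uses a radio button: the widgets no longer line up.
class CheckBox extends GuiControl {
    private boolean checked;
    CheckBox(String locator) { super(locator); }
    void setChecked(boolean checked) { this.checked = checked; }
    boolean isChecked() { return checked; }
}
```

Notice that nothing in this hierarchy says anything about *what the user is trying to accomplish*; it only models the widgets themselves.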
Figure 1. GUI control hierarchy
In our case, the interfaces varied widely, but the operations and business processes they represented were substantially the same. To maximize reuse, we settled on a business logic hierarchy (see Figure 2) to enable reuse of test scenarios across multiple interfaces. This not only maximized our code reuse, but it also let us rely on the test tools to manage the GUI interfaces, which is exactly what they are designed to do. Figure 2 shows the approach, and you might recognize the abstract factory pattern.
Figure 2. Business logic hierarchy
By taking a business logic approach, every flow through the application could be expressed as the set of operations that were defined as methods on the abstract classes. Although each interface might have some different steps within the operation, they all allowed, understood, and needed the same operations. This amounted to building an application-specific test framework that represented the application under test's business tasks for the purpose of testing. The approach meant defining a set of objects, where data and information that were needed to perform operations within the application could be encapsulated. Then, we needed to define a set of methods in those objects to describe the operations and collect any additional data needed exclusively for the operation, such as what is defined by the code snippet in Figure 3.
Figure 3. Code example for abstract class
Query is an abstract class that collects the interesting data for creating and executing queries, along with the interesting operations, such as run, create, edit, and rename. The rename method requires the additional parameter of the new name, but when it succeeds, it automatically updates the name value of the query object. There are no assumptions about the user interface at this level. The user interface details are expressed only in the interface-specific concrete classes. To execute on a given interface, you need to instantiate a concrete class at runtime for each interface and call the operation, which looks something like this:
Query myQuery = new WebQuery(parent_location, findRecords); // concrete class name is illustrative
myQuery.rename(renamedQuery);
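Because the Figure 3 snippet isn't reproduced here, the following is a hypothetical sketch of the kind of abstract class described above: Query encapsulates the query's data, rename takes the extra parameter and updates the encapsulated name on success, and a concrete subclass (the WebQuery name is an assumption) supplies the interface-specific steps:

```java
// Hypothetical sketch of the business-logic hierarchy described above.
abstract class Query {
    protected String parentLocation;
    protected String name;

    protected Query(String parentLocation, String name) {
        this.parentLocation = parentLocation;
        this.name = name;
    }

    public String getName() { return name; }

    // Operations meaningful on every interface; each concrete
    // class implements the interface-specific steps.
    public abstract void create();
    public abstract void run();
    public abstract void edit();

    // rename needs extra data (the new name); on success the object's
    // own name is updated so later operations keep working.
    public void rename(String newName) {
        doRename(newName);    // interface-specific steps
        this.name = newName;  // keep the encapsulated state current
    }

    protected abstract void doRename(String newName);
}

// One concrete class per interface; for example, for the web client:
class WebQuery extends Query {
    WebQuery(String parentLocation, String name) { super(parentLocation, name); }
    public void create() { /* drive the web UI */ }
    public void run()    { /* drive the web UI */ }
    public void edit()   { /* drive the web UI */ }
    protected void doRename(String newName) { /* drive the web UI */ }
}
```

Nothing above the concrete class knows anything about widgets; a test script written against Query runs unchanged against any interface that has a concrete implementation.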
As a consequence of defining abstract business logic classes that described the operations that needed to be tested, it was possible to recombine the defined methods into new flows. It was also possible to specify at run time what interface to run on. This is a powerful combination that accomplishes several things:
- Operations for the application are defined only once
- GUI elements are discovered and manipulated only once
- Test scenarios designed for one interface can be run on another
- Maintenance is dramatically reduced, because reuse increases
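The run-time selection of an interface can be sketched as a simple factory in the spirit of the abstract factory pattern mentioned earlier. The Channel enum and the login operation names below are illustrative assumptions, not from the original framework:

```java
// Hypothetical factory that picks the concrete implementation at run time.
enum Channel { WEB, ECLIPSE, CLI }

// Minimal stand-in for a business-logic operation (names are illustrative).
interface LoginOperation {
    String describe();
}

class WebLogin implements LoginOperation {
    public String describe() { return "login via web client"; }
}
class EclipseLogin implements LoginOperation {
    public String describe() { return "login via Eclipse UI"; }
}
class CliLogin implements LoginOperation {
    public String describe() { return "login via CLI"; }
}

class OperationFactory {
    // A test script written once against LoginOperation runs on any channel;
    // only this switch knows which concrete class backs it.
    static LoginOperation login(Channel channel) {
        switch (channel) {
            case WEB:     return new WebLogin();
            case ECLIPSE: return new EclipseLogin();
            default:      return new CliLogin();
        }
    }
}
```

Passing a different Channel value at run time is all it takes to point the same scenario at a different interface.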
The trick, of course, was coding how to do any given operation in each of the interfaces to test. That was a significant amount of work up front, but after one interface was finished, there were many test scripts that could be used for any other interface. This is all that had to be implemented for each subsequent interface:
- Add the actual steps to execute the operation on that interface
- Supplement the framework to cover any functionality not present in other interfaces
Another benefit of the approach was that, despite still being expressed as code, it reads as pseudo code at a high enough level of abstraction to be meaningful to non-programmer SMEs. This allows non-programmers to create new automated scripts, as well as to run and understand the test scripts delivered by the automation team.
Listing 1 is an example of code that interacts with Eclipse views.
Listing 1. Code that interacts with Eclipse views
Perspective resource = new Perspective("Resource");
Perspective general = new Perspective("General");
app.start();
EclipseView bookmarks = new EclipseView("Bookmarks", resource);
EclipseView explorer = new EclipseView("Project Explorer", general);
resource.open();
resource.reset();
bookmarks.open();
explorer.open();
bookmarks.switchTo();
explorer.switchTo();
bookmarks.maximize();
bookmarks.restore();
bookmarks.minimize();
bookmarks.restore();
bookmarks.close();
resource.reset();
app.exit();
This approach was adopted to reduce maintenance costs and ensure that, for any part of the GUI, there would be only one place to make adjustments in the test automation code. But the benefits of this approach became really clear after implementing the concrete steps for doing the business logic on the CLI.
The team responsible for testing that feature had declared it "done," with no significant defects. The automation was implemented to provide a regression suite for future releases. But when we ran the test scripts designed to test a GUI against the CLI implementation, we found more than 50 defects! And these were important defects that definitely would have been found by our customers.
As testers, we are always excited to find the defects first, because of the obvious cost savings of finding problems early. Plus, it's exciting to build automation that does more than validate the product's stability. It is also important not to forget the business benefit in terms of reputation, perceived customer satisfaction, and just overall quality improvements delivered through this test automation approach.
Multichannel testing today
During the course of the projects described above, we were able to choose only a single interface at run time. A test script had to be completely automated, and it had to have a complete set of business logic operations implemented for each interface to be tested. This approach was actually reasonable and acceptable at that time, because an end user would not typically switch from a desktop client to a web client in the middle of a set of operations.
Things have changed, though. It's not unreasonable to imagine a test scenario that starts on a web client, moves through a mobile app and then back to the web, and perhaps ends with an independent verification against the database back end.
Take a scenario where someone is bidding on eBay. It's easier to search and do research on items using a desktop computer and a browser (web client). Once you decide on the item you want, you put a bid in. You might leave your computer and get a notification on your smartphone that you've been outbid, so you update your bid from the phone. When you win the bid, you're back at the computer, so you enter payment information in the browser.
If this were a test scenario, rather than validating the success of the transaction from a portion of the interface on the screen (using screen scraping or object properties), it would be better to query the database and check for the record directly. This interface-independent verification is more robust, as well as more stable. We might call this approach a "hybrid" test scenario. And, theoretically, hybrid scenarios should allow mixing in manual test execution to improve test coverage when some product areas are too hard or expensive to automate.
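A minimal sketch of such an interface-independent check, with the backend stubbed by an in-memory store; in a real suite the store implementation would run a JDBC query against the actual database, and all names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface-independent verification: check the backend record
// directly rather than reading it off whichever GUI happens to be open.
interface BackendStore {
    boolean recordExists(String table, String key);
}

// In-memory stand-in; a real implementation would query the database.
class InMemoryStore implements BackendStore {
    private final Map<String, Boolean> rows = new HashMap<>();
    void insert(String table, String key) { rows.put(table + "/" + key, true); }
    public boolean recordExists(String table, String key) {
        return rows.getOrDefault(table + "/" + key, false);
    }
}

class HybridVerification {
    // The same verification works no matter which channel created the record,
    // so it can close out a flow that hopped between web, mobile, and CLI.
    static boolean verifyBid(BackendStore store, String bidId) {
        return store.recordExists("bids", bidId);
    }
}
```

Because the check depends only on the backend, it stays stable even as the GUIs above it change.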
So I've started dreaming about how we might implement a flow like that by cobbling together different interfaces and operation implementations into a complex flow that moves seamlessly between interfaces. To be sure, there are challenges, and they might turn out to be insurmountable. It was relatively simple to know what data to move between operations when the runtime environment was constrained. It's not immediately clear what data needs to be transferred between operations when the flow also crosses interfaces. In the example above, the flow requires a login on the web client and another on the mobile phone. Authentication is an obvious problem, but there are bound to be many more complications in implementing these hybrid test scenarios.
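As a purely speculative starting point for that data-transfer problem, each step of the flow might read from and write to an explicit shared context object, regardless of which interface the step runs on. All names and values here are invented to match the eBay example:

```java
import java.util.HashMap;
import java.util.Map;

// Speculative sketch: a context object carries state (item numbers, bid
// amounts, session details) across operations on different interfaces.
class FlowContext {
    private final Map<String, Object> data = new HashMap<>();
    void put(String key, Object value) { data.put(key, value); }
    Object get(String key) { return data.get(key); }
}

interface FlowStep {
    // Each step drives one interface and mutates the shared context.
    void execute(FlowContext ctx);
}

class PlaceBidOnWeb implements FlowStep {
    public void execute(FlowContext ctx) {
        // ...drive the web client, then record what later steps need:
        ctx.put("itemId", "12345");
        ctx.put("bid", 10.50);
    }
}

class RaiseBidOnPhone implements FlowStep {
    public void execute(FlowContext ctx) {
        // ...drive the mobile app, using data from the earlier web step:
        double previous = (Double) ctx.get("bid");
        ctx.put("bid", previous + 1.00);
    }
}
```

This doesn't answer the authentication question, but it makes the data handoff between interfaces explicit instead of implicit in one runtime environment.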
But that's okay, it's just Mount Everest. Let's go! Tell us what you think or add your ideas, either by joining the discussion in the IBM Rational Functional Tester Network on LinkedIn or by submitting a comment to this article.