Experience remote usability testing, Part 1

Examine study results on the benefits and downsides of remote usability testing

In this two-part article, Pervasive Computing specialists Velda Bartek and Deane Cheatham share the experience they gained by conducting a number of remote usability studies using application-sharing technology. The first article provides context for remote usability testing by describing the benefits and pitfalls of remote usability evaluations and the application-sharing tools that were evaluated. The second article describes some of the experiences and lessons learned as the authors planned for and conducted remote usability evaluations for software products.


Velda Bartek (bartekva@us.ibm.com), Senior Software Engineer, IBM

Velda Bartek is a User-Centered Design Specialist in IBM's Pervasive Computing Division. On weekends, she enjoys driving her mower around the yard. You can contact Velda at bartekva@us.ibm.com.



Deane Cheatham (dcheath@us.ibm.com), Human Factors Engineer, IBM

Deane Cheatham is a Human Factors Engineer in IBM's Pervasive Computing Division. She enjoys riding her white motorcycle on weekends. You can contact Deane at dcheath@us.ibm.com.



01 January 2003

With the adoption of new technology tools -- such as Lotus Sametime, which allows remote application sharing -- usability specialists can now reach geographically distant and specialized users. In the past, these users were difficult to include in studies because of cost and travel constraints. Application-sharing tools provide a means for reaching such users while eliminating the travel expenses and time away from work that studies with remote or specialized users once required.

In this two-part article, we share the experience we've gained by conducting a number of remote usability studies using application-sharing technology. In this first article, we provide context for remote usability testing by describing the benefits and pitfalls of remote usability evaluations and the application-sharing tools that we evaluated. In the second article, we describe some of the experiences and lessons learned as we planned for and conducted remote usability evaluations for our software products.

Before we dive into the article, though, we'd like to note that overall, conducting remote studies has been a convenient and positive experience for both the tester and user.

Who we are and what we do

Our group -- IBM's Pervasive Computing Division -- is responsible for conducting usability testing and collecting user feedback. Testing includes design validations, evaluations, and walkthroughs with low-fidelity prototypes.

In our group, testing focuses on server and middleware software products. IBM WebSphere Portal (WP) -- a suite of products bundled together with non-standalone product components -- is the primary focus of our usability activities.

WP includes only limited end client software; end users may be authorized to perform certain tasks, such as customizing the layout and content of portal pages. Because of the limited number of end user tasks, most of our testing relates to portal administration functions and the installation of the WP offerings.


An answer to some problems

We began considering remote usability testing for a number of reasons:

  • It was hard to find local WP customers.
  • Our development teams were spread out.
  • It was costly (in time and money) to include non-local customers in testing.

The primary reason, though, was the difficulty in locating local WP customers. Because the portal arena is relatively new, identifying and locating portal administrators and installation specialists has been a difficult task; finding local participants has been practically impossible. So, most of our participants have had to be non-local; many are outside of the US, in fact.

Including non-local and international users is a major challenge due to the costs associated with travel and the time away from the participant's primary job to take part in a three-hour test session. Add to that the headaches associated with obtaining passports and visas for international users, and testing becomes an expensive logistical nightmare.

Another reason to use remote testing was the diverse locations of our development teams. These teams are located in multiple states within the US as well as in other countries. For many team members, remote testing is the only opportunity they have to observe customer interaction with the product.

And last, but not least, budget constraints have significantly decreased the availability of money to fund participant travel expenses.

For these reasons, remote testing became an ideal solution.

We've conducted remote usability tests with 42 users for 9 different activities. The tasks tested have included installation, administration, end user tasks, system planning, and Web application development. We have also used the online meeting tools for a variety of usability testing methods, such as design validation, design exploration, and task analysis. For the most part, the duration of testing has been limited to three hours for validation testing and one to two hours for design explorations.


Evaluating online meeting tools

A number of online meeting tools are available, including Lotus Sametime, Microsoft NetMeeting, Web Collaboration, VoiceTech, WebEx, and Virtual Network Computing. We used Lotus Sametime for our testing, but the other meeting tools have similar features.

Figure 1 shows the initial window for the Sametime remote meeting tool.

Figure 1. Lotus Sametime's initial window features

Features that make the tools important

These features made this online meeting tool appropriate for usability testing:

  • Application sharing capability. The test facilitator can share his entire screen or choose a single application to share. In addition, the facilitator can give the usability participant control of an application so that the participant can interact with the software as if it were on his local machine. Figure 2, following this list, shows a shared application from the participant's perspective. Windows can be resized so that the application window is maximized.
  • Whiteboard for sketching ideas. The user and test facilitator can use the whiteboard for generating quick mock-ups of design ideas.
  • Online chat capability. With this feature, individuals logged into the session can communicate with the session group as a whole or send private messages to others in the session. This type of tool is useful for focus groups, both to summarize points made during the session and as a forum for participants to discuss ideas. It also allows the test facilitator to inconspicuously communicate with developers or other appropriate team members attending the session when problems are encountered or technical questions need answering.
Figure 2. Application sharing with Lotus Sametime

Other tools can be used in conjunction with the online meeting technology to improve the testing process and communication during the usability session.

Listening is an important tool, too

Although the online meeting tool provides the visual means for conducting the usability session, verbal communication is also important. For this reason, conference calls or video conferencing should be set up such that all parties in the session can listen in and speak on a single line.

In the remote usability testing we have conducted, everyone attending the session joins a single call, but only the test facilitator's and participant's lines are open; observers are in listen-only mode.

Survey tools also play a role

Online survey tools are important because they allow the facilitator to send the participant a link to a survey that can be completed before, during, or after a test (depending on the purpose of the survey). This minimizes the amount of time participants spend printing out and formatting the documents they complete. In addition, it maintains the anonymity of the survey respondent.
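We used existing survey tools rather than building our own, but the basic mechanism is easy to sketch. What follows is a minimal, hypothetical example (written in Python with the Flask micro-framework, which is not part of any tool discussed in this article) of a survey endpoint a facilitator might host: it serves a short questionnaire at a link sent to the participant and records the answers with no identifying information. The questions, file name, and port are illustrative assumptions only.

# Hypothetical sketch of a self-hosted, anonymous post-test survey endpoint.
# Assumes Flask is installed; the questions, file name, and port are
# illustrative assumptions, not taken from the article.
import csv
import datetime

from flask import Flask, request

app = Flask(__name__)

QUESTIONS = [
    ("ease", "How easy was it to complete the tasks? (1-7)"),
    ("satisfaction", "How satisfied are you with the product overall? (1-7)"),
    ("comments", "Any other comments?"),
]


@app.route("/survey", methods=["GET"])
def show_survey():
    # Render a bare-bones HTML form; a real survey tool would do far more.
    fields = "".join(
        "<p>%s<br><input name='%s'></p>" % (label, name)
        for name, label in QUESTIONS
    )
    return "<form method='post'>%s<input type='submit' value='Submit'></form>" % fields


@app.route("/survey", methods=["POST"])
def save_survey():
    # Store only the answers and a timestamp -- no name, e-mail address, or
    # IP address -- so the respondent remains anonymous.
    row = [datetime.datetime.utcnow().isoformat()]
    row += [request.form.get(name, "") for name, _ in QUESTIONS]
    with open("responses.csv", "a", newline="") as f:
        csv.writer(f).writerow(row)
    return "Thank you. Your responses have been recorded anonymously."


if __name__ == "__main__":
    app.run(port=8080)

The facilitator would simply send the participant the survey URL at the appropriate point in the session; because nothing that identifies the respondent is written to the response file, anonymity is preserved.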


The advantages of remote testing

The obvious, and greatest, advantage to conducting usability tests remotely is that it makes a larger and more diverse pool of participants accessible. Additionally:

  • Rather than depending on local product users, a worldwide audience can be reached.
  • Specialists who may otherwise be hard to locate or who cannot be away from the job can also be reached, because the test (typically two or three hours) can be performed remotely without travel or extended time away from work.

Remote testing not only minimizes time away from work, but it also minimizes the test's interference with the participant's personal commitments. This ability increases the likelihood of getting those hard-to-find specialists (as well as other users, unique or general) to participate.

Another major advantage of remote testing is its cost-effectiveness for the group conducting the test. Using online meeting tools to conduct remote testing eliminates travel expenses and much of the administrative effort needed to set up and complete the test, such as processing reimbursement forms and, in the case of foreign participants, obtaining passports and visas.

Ultimately, we can't emphasize enough this important advantage of remote testing: Participants can use the product in a more realistic environment. Face-to-face testing typically requires the participant to perform tasks in a lab setting with the facilitator next to them and observers sitting outside. For many, this leads to anxiety and does not reflect their real-world working environment.

During our testing sessions, we experienced some of the everyday interruptions users encounter in a realistic job setting. For example, during one session there was construction noise at the customer's site (and ours!). In other sessions, co-workers came into the room and phones rang while the participant was interacting with the product. Although the test facilitator experiences a loss of control when these things happen, these are the normal conditions under which a user performs work-related tasks.


The disadvantages of remote testing

As with any testing method, remote testing also has disadvantages. One of the most obvious drawbacks, as we just noted, is the loss of control of the participant's environment. There is no control over the number of people attending the session with the participant -- phones may ring, e-mail may intrude, and there can be other interruptions like that construction noise. Just remember that while this can be perceived as a disadvantage, it also supports the validity of the data outside of the lab setting.

Other disadvantages inherent in remote testing include:

  • Limited visual feedback
  • Session security concerns
  • System and connection performance issues
  • Ease-of-use issues

Visual feedback

Limited visual observation can be a disadvantage. In face-to-face sessions, the facilitator can note non-verbal reactions (as well as verbal ones) as the participant uses the product.

Remote testing relies solely on what the participants say and the interactions they make that can be viewed on the display. The facilitator cannot use non-verbal cues to determine if the participant is tired, frustrated, or confused.

A way of minimizing such problems is to encourage the participant to think aloud while performing tasks. The facilitator should be explicit in instructing participants to speak up when they are confused or frustrated, as well as when they are pleased by a feature of the product.

Remember, though, that some users are more vocal than others, so the facilitator should pay careful attention to the screen and other cues that indicate the participant is having a problem and then use questions to probe and solicit more information from the participant.

Session security

Session security is another concern, particularly if the material is confidential. It is not always obvious whether the participant has other people in the room observing; also, the participant could take screen captures of the user interface without the facilitator's knowledge.

Prior to each session, we instruct participants that the material is confidential and to get rid of all materials at the conclusion of the session. One way of reinforcing that is to confirm that the participant has a confidential disclosure agreement in place before the session begins.

Performance issues

In some cases, system performance and reliability can be issues when testing remotely. Servers where the application resides may go down or slow down under a heavy traffic load. Network connectivity, within either company's network or on the Internet, can also affect performance.

It is best to use online meeting tools when both the customer and the test facilitator have fast connections; otherwise, the application can hang, or there may be increased lag between when an action is performed and when it actually executes.

Poor performance also shows up as screen refresh delays. Refresh and response delays are often not apparent on the facilitator's screen. Of the 42 remote sessions we conducted, we experienced some degree of performance degradation in approximately one quarter of them. Only one session, however, was degraded to the point that we had to reschedule it.

Another problem we encountered was firewalls at the participant's site. In our experience, this was a particular concern when working with financial institutions. We were forced to cancel one remote usability session because of the participant's firewall.

Lab versus remote testing: Ease of use issues

Of course, it is easier to conduct a lab test than a remote test. From our experience, these are the disadvantages of remote testing compared with lab testing:

First, participants may not set up and test the online meeting tool prior to the session, and others may have problems getting the tool to work correctly. It is difficult for the facilitator to troubleshoot remotely because, in addition to not being able to see the participant's screen (remember, they don't have the software up and running yet), the set-up process is different for internal and external users.

To avoid these set-up problems, try to schedule a 15-minute session with participants prior to the actual remote testing session to ensure they have properly installed the software for the session and have all materials ready.

A second consideration relating to the ease of conducting remote tests is the additional amount of work the test facilitator may need to perform to put surveys online, schedule online meeting sessions and conference calls, and conduct practice sessions to ensure that the participant is set up correctly. Depending on the number of users, this adds up to quite a bit of time.

A third consideration presents itself when you test in a lab setting: there, you have the ability to videotape both the physical and verbal reactions of participants. Without the addition of a Web camera to your remote setup, you can capture only verbal responses and recordings of the participant's interaction with the GUI by way of the visual display.

A fourth, and final, consideration is that problems can arise with participant survey responses. Unlike in a lab setting (in which participants are asked to complete surveys in the presence of the facilitator), remote participants are left to complete and return the survey after the session concludes. Even when participants are asked to complete surveys at various points during the session, there is no guarantee that they will do so and return them.

Facilitators should stress the importance of returning such feedback at the end of the usability session and plan to send a reminder note to participants.


When to use remote testing

Remote testing is appropriate for both Web-based and desktop applications. It is not recommended, however, for handheld device testing by means of simulators, for a number of reasons.

The main issue with handheld device testing is that simulator behavior is not always the same as that of the actual device. In addition, interacting with a simulator is not the same experience as interacting with the device itself. For instance, it is difficult to determine the annoyance factors that may affect user satisfaction and perceived ease of use when using a device (taps versus clicks, using fingers on small phone buttons, and so forth).

Remote testing is not recommended for sessions lasting longer than three hours.

It is also not suitable for some forms of testing, such as user out-of-the-box experiences. Such testing generally lasts a full day and requires the user to bounce between numerous tasks and documents, which is too cumbersome when using an online meeting tool.


Remote testing recap

The costs associated with accessing users for usability testing, as well as the limited availability of many users, make it necessary to consider new methods for gathering usability feedback on software products. Online, remote testing is a key to reaching a larger, more diverse pool of participants, but it does have its drawbacks.

Using online meeting tools allows test facilitators to reach an international audience and eliminates the travel costs of using non-local participants. As this article described, however, there are a number of issues to consider before choosing to use remote participants. Here, to recap, are the considerations to weigh for this type of testing:

  • The facilitator can select from a larger, more diverse pool of participants.
  • Dispersed development teams can share a common user experience.
  • Travel, time, and other related costs are greatly reduced or eliminated.
  • Facilitators can observe product usage in a real-world environment.
  • Without the addition of a Webcam, the facilitator's ability to receive non-verbal cues is limited.
  • Company security might be compromised.
  • Poor server or connection performance can make completing the test difficult or cause rescheduling.
  • More work is required to prepare for and clean up after a remote session.

(If you can't determine which of the above are advantages and which are disadvantages, read the article again.)

In the second half of this article, we describe some of the experiences and lessons that we learned while planning for and conducting remote usability evaluations for our software products.
