4. We tested “anything”
By testing items that were not yet complete, we received excellent feedback about end-to-end customer scenarios and about important features that were still needed.
For example, we tested various designs to validate the general flow of a task from the business perspective. The feedback received from a design validation session helped us determine the right choices to provide. We also tested draft documentation to identify gaps and to validate the effectiveness of search results.
In addition to testing working code and documentation during usability sessions, our team conducts validation sessions for features that are still in the design phase. In these sessions, we gather feedback from customers from the start, so they help shape the product's functions.
5. We involved everybody
For all usability sessions, in-person and remote, our team encourages stakeholders to participate in listen-only mode. This practice helps the team understand the customer's perspective because everybody can see where users are confused and whether they are struggling to complete their tasks.
6. We recorded feedback
During the sessions, we noted all comments: positive feedback, defects discovered, questions and comments from participants, and our own observations of usability issues. We also recorded all sessions for further analysis.
Our team uses a wiki to store the information that we collect for each sprint, session, and feature tested. We use IBM Rational Team Concert (RTC) to open defects, to track how many defects have been fixed at any point in time (and for which feature), and to remind us which issues still need to be addressed.
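To make the bookkeeping concrete, the per-sprint, per-session, per-feature records described above could be structured like this. This is a minimal sketch in Python; the field names and the `SessionNote` class are hypothetical illustrations, not the actual wiki page layout or RTC schema:

```python
from dataclasses import dataclass, field

@dataclass
class SessionNote:
    """One record per sprint/session/feature, mirroring what we store on the wiki."""
    sprint: int
    session: str
    feature: str
    positive_feedback: list = field(default_factory=list)
    defects: list = field(default_factory=list)       # defect IDs opened in RTC
    questions: list = field(default_factory=list)     # participant questions and comments
    observations: list = field(default_factory=list)  # our own usability observations

# Hypothetical usage: capture notes from one remote session.
note = SessionNote(sprint=12, session="remote session 3", feature="task wizard")
note.observations.append("User hesitated on the terminology in step 2")
note.defects.append("DEF-4711")  # hypothetical RTC defect ID
```

Keeping each category in its own list makes it easy to count open defects per feature later, which is exactly what the sprint report needs.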
7. We presented results
After each set of usability sessions, we write a report to share the results with the team. Then the usability team meets with the scrum team to assess all issues. They start working on smaller changes right away. Items that require big changes are promoted to user stories and are evaluated by product owners for future sprints.
In the report, we summarize the ratings and overall task success, the positive feedback, and the number of defects opened. We also store this report in the wiki along with the notes from the sessions.
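The summary described above, average rating and overall task success per task, can be produced with a short script. This is a sketch under assumed data: the 1-5 rating scale matches our sessions, but the shape of `results` and the `summarize` function are hypothetical:

```python
# Summarize usability-session results for the report.
# Each entry: (participant, task, rating on a 1-5 scale, task completed?).
results = [
    ("P1", "create task", 2, False),
    ("P2", "create task", 3, True),
    ("P1", "search docs", 5, True),
    ("P2", "search docs", 4, True),
]

def summarize(results):
    """Group results by task, then compute average rating and success rate."""
    by_task = {}
    for _, task, rating, done in results:
        by_task.setdefault(task, []).append((rating, done))
    summary = {}
    for task, rows in by_task.items():
        ratings = [rating for rating, _ in rows]
        summary[task] = {
            "avg_rating": sum(ratings) / len(ratings),
            "success_rate": sum(done for _, done in rows) / len(rows),
        }
    return summary

print(summarize(results))
# "create task" averages 2.5 with a 50% success rate; "search docs" averages 4.5 with 100%.
```

Counting opened defects per feature would work the same way, grouping the defect lists from the session notes instead of the ratings.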
Conclusion: We improved usability
We proved that by implementing these tips, and by keeping the focus on responding to change in a timely manner, an Agile development team can successfully integrate effective usability testing.
We took the following actions based on usability issues identified in our testing:
- Simplified the UI panels and flow
- Made the terminology industry-friendly and consistent
- Redesigned a few features that were not aligned with customer goals
As a result, when we retested those items in later sessions, the ratings improved from the original 2-3 (difficult) to 5 (very easy).