Understanding your users is not rocket science; it is a lot tougher. Human data brings a boatload of confusion. How do you find out what users want when they probably don't know themselves? How can you ensure honest feedback, especially in cultures where agreement is heavily valued? You gather data as best you can, make decisions based on that data, implement solutions, and then re-evaluate with users to see where you missed the mark. It is a circular path that leads you a little closer each cycle to an unattainable "best" solution.
Users can give you two types of data: what they tell you (stated data) and what you can observe them doing (behavioral data). Stated data includes data you get from surveys, user registration, and focus group discussions. You ask, they tell. Behavioral data includes page hits, incoming user IP addresses, activity levels on forums or blogs, and task completion data from wireframe scenarios. You watch, they do. You need both types of data to get a balanced understanding of the discrepancies and the cohesion between what your users say and what they actually do.
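Behavioral data is often already sitting in your web server logs. As a minimal sketch, here is one way to tally page hits and unique visitor IPs from an access log in Common Log Format; the log lines themselves are hypothetical samples, and a real analysis would read from your actual log file:

```python
from collections import Counter

# Hypothetical sample lines in Common Log Format; in practice you
# would read these from your web server's access log file.
sample_log = """\
203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /pricing HTTP/1.1" 200 2326
203.0.113.5 - - [10/Oct/2023:13:56:01 +0000] "GET /docs HTTP/1.1" 200 1187
198.51.100.7 - - [10/Oct/2023:14:02:12 +0000] "GET /pricing HTTP/1.1" 200 2326
"""

hits = Counter()      # page hits per path
visitors = set()      # distinct visitor IP addresses

for line in sample_log.splitlines():
    parts = line.split()
    ip, path = parts[0], parts[6]   # field positions in Common Log Format
    hits[path] += 1
    visitors.add(ip)

print(hits.most_common())   # pages ranked by hit count
print(len(visitors))        # number of unique visitor IPs
```

Even a crude tally like this tells you what users actually do on your site, which you can then compare against what they say in surveys.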
If you have a budget that allows a legitimate study, with usability folks traveling in-country to conduct focus groups and wireframe tests, you don't need advice from me. But if you are tight on money and looking for some guerrilla tactics for collecting data, let me share some thoughts from our usability maven, Jeanette Fuccella.
There are plenty of reputable firms, such as Gartner, Forrester, and IDC, that provide free information and newsletters about market trends and data. Search the web for what you can learn without traveling. These firms also sell detailed global market intelligence (MI) reports. The cost may seem high, but it is a lot less expensive than conducting the research yourself.
If you decide to conduct your own user studies, consider the following:
1. Be very clear and finite in defining the goal of each study. Gathering user data is an iterative process; you gradually zero in on the answers. There is no study to end all other studies. Ask the most pressing questions now, and save the rest for the inevitable later studies.
2. Limit yourself to no more than four or five issues per study. As you formulate the questions (if you are gathering stated data) or the tasks (if you are gathering behavioral data), make sure each corresponds directly to an issue you want settled. Jeanette recommends writing those issues down and keeping them in front of you as you review your test questions or actions. Ruthlessly throw out any questions or scenarios that do not directly relate to at least one of the issues in front of you. Of course, you can always save those questions for later studies, when you will be homing in on other aspects of your site's design or function.
3. Know your local audience. This is the group of users that is easiest and most cost-effective for you to study. The more you know about your local audience, the more clues you have about your remote audiences.
4. Recognize that how you collect the data will skew the results. If you conduct an online survey, you will only reach the users who are willing to take the time to complete a survey. If you conduct face-to-face research at a conference or in a coffee shop, you are only reaching users who attend those events or haunt those locations. Vary your research methods to reach a wider range of users.
5. Be sensitive to user fatigue. You want users to remain actively engaged in your study. Nothing can turn off a respondent more than a never-ending set of questions or tasks. Limit your questions and requests.
6. To reach users in remote locations, consider working through their local universities. You can mine the Psychology or Human Factors departments for co-op students to run simple wireframe tests or conduct face-to-face surveys. If you are working with graduate students, they may be able to help you rethink your strategy for reaching users in their local geography.
7. Run a pilot of any test you conduct. Gather about a tenth of the data you eventually want to obtain. Analyze that data and make sure you are addressing the issues you identified back in Step 2. You will probably find some misleading or confusing questions, or some tasks that need to be restructured. Assume your first attempt at any study is inherently flawed and will need to be refined.
8. Consider incentives. If offered, incentives are one of the biggest costs of any study, and they are often forgotten in the initial budget. Jeanette recommends you compensate respondents equally. So, if conducting research in a coffee shop, maybe you buy each person willing to talk to you a cup of coffee for their time. Or provide a small token gift to anyone willing to fill out a survey.
9. Avoid contests. Contests are fraught with legal issues. There are all types of laws governing contests, as anyone who has read the fine print on an entry form knows. You will end up spending more time learning and adhering to local contest regulations than analyzing your data.
10. Use your first cycle of test data to formulate your second cycle of testing. Building a site that adequately addresses user needs requires ongoing communication with users. What did we do right? Where do we need to modify? User needs and requirements are always changing, and your site, to remain accepted and effective, has to evolve along with those needs.