Watching real users work with your product or prototype is the only way to truly evaluate whether your design is sufficiently usable and learnable. User observation sessions quickly reveal where users have trouble figuring out your product. Let’s take a look at how to run effective user observation sessions.
Some textbooks recommend setting up a formal usability testing lab with one-way mirrors and multiple cameras, and insist upon highly structured sessions with a full team of facilitators, observers, and recorders. If you can afford this, then great, but these ideas make user observation seem more complicated and mysterious than it really is.
Not only are formal laboratory environments expensive, but they can make participants feel uncomfortable. Being watched by a team of people and recorded by cameras will make participants nervous, as if they’re performing for an audience.
To make people feel more relaxed, you can get great results simply sitting one-on-one with a participant in front of a laptop in a neutral and comfortable setting like a coffee shop. If you’re building a product that will be used in a particular environment, try to do the session in that environment: If you’re building enterprise software, sit down together at your users’ desks. If you’re building software for police officers on patrol, schedule meetings with officers in their police cars. Not only are people more likely to open up and discuss their opinions freely when they’re in familiar surroundings, but you’ll also get a feel for their environment and any distractions.
Depending on your goals and budget, you may consider recording the interaction with screen recording software. Possibly, you might also consider setting up a camera to record the user’s body language, facial reactions, and their physical interactions with input devices. But you need to be aware that people act differently when they know they are being recorded. While recording a session offers the convenience of replaying and reanalyzing the recording as many times as you like, you should also not underestimate the amount of time it will take to review and analyze a batch of recordings.
Choosing goals for the session
Decide in advance what kind of data or learnings you are aiming to get. For instance, you may want to:
- Determine what percentage of users are able to carry out a task successfully
- Find places or situations where users get confused, hesitate, or don’t know how to proceed
- Find places or situations where users tend to make the most errors
- Collect impressions and suggestions from users on what works well and what could be improved
- Collect judgements from users on the value, usefulness, attractiveness, and usability of the product
- Get feedback on how your product compares with competing products
If you collect metrics such as the number of errors made or the average time taken to perform a task, you can compare statistics across different batches of testing sessions to test whether changes to the product have actually led to measurable improvements.
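Comparing metrics across batches can be as simple as tallying per-participant results. Here is a minimal sketch in Python; the batch data, numbers, and names (`summarize`, `batch_before`, `batch_after`) are invented for illustration:

```python
# Hypothetical metrics from two batches of observation sessions: each entry
# is (task_completed, errors_made, seconds_taken) for one participant.
batch_before = [(True, 3, 210), (False, 5, 300), (True, 2, 180), (True, 4, 240)]
batch_after  = [(True, 1, 150), (True, 2, 170), (True, 1, 140), (True, 3, 260)]

def summarize(batch):
    """Compute completion rate, mean error count, and mean task time."""
    n = len(batch)
    completion_rate = sum(1 for done, _, _ in batch if done) / n
    mean_errors = sum(errors for _, errors, _ in batch) / n
    mean_seconds = sum(secs for _, _, secs in batch) / n
    return completion_rate, mean_errors, mean_seconds

before = summarize(batch_before)
after = summarize(batch_after)
print(f"Completion rate: {before[0]:.0%} -> {after[0]:.0%}")
print(f"Mean errors:     {before[1]:.1f} -> {after[1]:.1f}")
print(f"Mean time (s):   {before[2]:.0f} -> {after[2]:.0f}")
```

With only a handful of participants per batch, treat such comparisons as directional signals rather than statistically significant results.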
Running the session
When you start a session with a participant, welcome them and briefly explain what you’re aiming to accomplish. If you will be conducting audio, video, or screen recording of the session, or if any personally identifying data will be collected, it is customary to have the participant understand and agree to this by signing a consent form.
Because they are being observed, participants often feel that they are being tested or quizzed, and participants can often become embarrassed and ashamed when they make a mistake or can’t figure out how to do something with the product. Reassure subjects that you’re not testing or evaluating them personally; instead, you’re testing and evaluating the product, and because the product is not yet perfect or complete, the goal is to find flaws and opportunities for improvement in the product. Explain that if the participant makes an error or gets stuck, it’s not their fault; rather, it’s a signal that the product needs to be improved.
How you run the session depends on what data you’re intending to collect. Typically, you will ask the user to accomplish one or more goals, and you’ll observe them as they explore the product and figure out how to go about doing it. Make sure you explain the goals or tasks clearly, but at the same time, try not to give too many clues as to how to do it (“leading” the user).
Users will often ask for help or seek acknowledgement that they’re on the right track, asking questions such as, “Am I doing this right? Do I click here? What’s the next step?” How you offer assistance is up to you. When the user makes a false step, you may be tempted to jump in right away, but it’s better to observe how and when the user detects the error and how they recover from it.
You might choose to ask participants to “think aloud” — that is, as they try to figure out how the product works, they should try to vocalize their inner thoughts: “I want to do a search for something. Where can I do a search? I don’t see a search box anywhere. Normally it would be up in this corner over here. Maybe there’s something under one of the menus? No, there’s no Search menu. Maybe under the Edit menu? I see a Find command, but is that what I want?” This kind of ongoing dialogue can provide useful insights, but many people find it uncomfortable and unnatural to do this. As well, if you’re trying to take notes by hand, you’ll never be able to write fast enough.
You can ask for critiques and suggestions at various stages, but also realize that not every user is in a position to give appropriate or good advice on issues like screen layout or interaction design.
Recording notes and observations
You will want to have a notepad handy where you can keep a log of the user’s actions, results, comments, questions, any long pauses indicating confusion, and so on. You should also keep a tally of errors and mistakes. If you see patterns emerging — users getting stuck at a certain point, or asking how to proceed — make a note and keep count of how many other participants encounter the same difficulty. To save time, consider preparing a template or chart you can fill out, and develop a list of short abbreviated codes to use to refer to recurring situations.
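A template with short codes can also make your notes easy to tally afterwards. The following is a minimal sketch, assuming a simple log of coded observations; the codes, participants, and tasks are invented for illustration:

```python
from collections import Counter

# Hypothetical shorthand codes for recurring situations during a session.
CODES = {
    "STK": "got stuck and could not proceed",
    "ERR": "made an error",
    "HLP": "asked for help or reassurance",
    "PAU": "long pause suggesting confusion",
}

# One logged entry per observation: (participant, task, code, free-form note).
log = [
    ("P1", "search", "PAU", "scanned the toolbar for a search box"),
    ("P1", "search", "HLP", "asked whether Find is the same as Search"),
    ("P2", "search", "PAU", "hovered over menus without clicking"),
    ("P2", "export", "ERR", "chose Save instead of Export"),
]

# Tally how often each difficulty occurs per task to spot patterns.
tally = Counter((task, code) for _, task, code, _ in log)
for (task, code), count in sorted(tally.items()):
    print(f"{task}: {CODES[code]} x{count}")
```

Whether on paper or in a spreadsheet, the point is the same: consistent codes let you count recurring difficulties instead of rereading pages of prose.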
If you’re working with a high-fidelity prototype or the actual product, you might also use a stopwatch to time how long it takes a user to complete certain tasks. However, accurate timings are difficult if you’ve asked the user to “think aloud”, if questions and discussions are taking place, or if the user is pausing to let you take notes. Using a stopwatch will also put pressure on users, so again, reassure the user that it’s not a race and you’re not testing their personal performance.
If you find that your note-taking slows down the session, you may consider having another person join to take notes so you can concentrate on facilitating the session. But having multiple people managing the session can sometimes be distracting and overwhelming, and it can be unprofessional when the team members haven’t prepared and rehearsed their coordination ahead of time.
Be sure to thank your participant for their time and feedback. You might also ask participants to fill out a questionnaire afterwards. This gives you another chance to collect feedback (and it might give participants more time to think through their responses). If you ask for satisfaction ratings on, say, a scale of 1 to 10, you can collect quantitative data that you can compare with other batches of testing sessions.
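To compare questionnaire ratings across batches, a mean and a measure of spread are usually enough to start with. A minimal sketch in Python, with invented ratings for illustration:

```python
from statistics import mean, stdev

# Hypothetical 1-10 satisfaction ratings from post-session questionnaires.
ratings_before = [5, 6, 4, 7, 5, 6]
ratings_after  = [7, 8, 6, 9, 7, 8]

print(f"Before: mean {mean(ratings_before):.1f}, spread {stdev(ratings_before):.1f}")
print(f"After:  mean {mean(ratings_after):.1f}, spread {stdev(ratings_after):.1f}")
```

Reporting the spread alongside the mean helps you judge whether a shift in average satisfaction reflects a broad change or just a few outlying participants.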
Analyzing and communicating results
After running a batch of sessions, consolidate your notes and review any recordings. Tally and calculate any metrics, and compare any statistics to previous runs. By analyzing your notes and data, you can identify problem areas and then recommend potential solutions. Put the results and recommendations together in a brief report for your project team to review.