Architecting Usability: a blog exploring User Experience design
http://architectingusability.com

How to conduct heuristic inspections for evaluating software usability
Wed, 02 Jan 2013

Heuristics are “rule-of-thumb” design principles, rules, and characteristics that are stated in broad terms and are often difficult to specify precisely. Assessing whether a product exhibits the qualities embodied in a heuristic is thus a subjective affair.

If you inspect a prototype or product and systematically check whether it adheres to a set of heuristics, you are conducting what is called a heuristic inspection or heuristic evaluation. It is a simple, effective, and inexpensive means of identifying problems and defects and is an excellent first technique to use before moving on to more costly and involved methods such as user observation sessions.

It is usually best for a heuristic evaluation to be carried out by an experienced usability specialist, but heuristic evaluations can also be very effective when they are conducted by a team of individuals with diverse backgrounds (for example, domain experts, developers, and users).

To conduct a heuristic evaluation, you should choose several scenarios for various tasks that a user would perform. As you act out each of the steps of the task flows in the scenarios, consult the list of heuristics, and judge whether the interface conforms to each heuristic (if it is applicable).

Jakob Nielsen introduced the idea of heuristic evaluations, and his 1994 list of ten heuristics, reproduced below, is still the most commonly used set of heuristics today (Nielsen, 1994, p. 30):

Visibility of system status “The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.”
Match between system and the real world “The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.”
User control and freedom “Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialog. Support undo and redo.”
Consistency and standards “Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.”
Error prevention “Even better than a good error message is a careful design that prevents a problem from occurring in the first place.”
Recognition rather than recall “Make objects, actions, and options visible. The user should not have to remember information from one part of the dialog to another. Instructions or use of the system should be visible or easily retrievable whenever appropriate.”
Flexibility and efficiency of use “Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.”
Aesthetic and minimalist design “Dialogs should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialog competes with the relevant units of information and diminishes their relative visibility.”
Help users recognize, diagnose, and recover from errors “Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.”
Help and documentation “Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.”

An obvious weakness of the heuristic inspection technique is that the inspectors are usually not the actual users. Biases, pre-existing knowledge, incorrect assumptions about how users go about tasks, and the skill or lack of skill of the inspectors are all factors that can skew the results of a heuristic inspection.

Heuristic inspections can also be combined with standards inspections or checklist inspections, where you inspect the interface and verify that it conforms to documents such as style guides, platform standards guides, or specific checklists devised by your project team. This can help ensure conformity and consistency throughout your application.
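To keep an inspection systematic, it helps to log each finding against the heuristic it violates and tally the results afterwards. A rough sketch in Python follows; the short codes, severity scale, and example findings are invented for illustration and are not part of Nielsen's method:

```python
from collections import Counter

# Nielsen's ten heuristics, keyed by short codes (the codes are my own shorthand)
HEURISTICS = {
    "H1": "Visibility of system status",
    "H2": "Match between system and the real world",
    "H3": "User control and freedom",
    "H4": "Consistency and standards",
    "H5": "Error prevention",
    "H6": "Recognition rather than recall",
    "H7": "Flexibility and efficiency of use",
    "H8": "Aesthetic and minimalist design",
    "H9": "Help users recognize, diagnose, and recover from errors",
    "H10": "Help and documentation",
}

findings = []  # one dict per problem found while walking a scenario

def record(scenario, step, heuristic, note, severity):
    """Log a single violation found while acting out a task-flow step."""
    assert heuristic in HEURISTICS
    findings.append({"scenario": scenario, "step": step,
                     "heuristic": heuristic, "note": note,
                     "severity": severity})  # e.g. 1 (cosmetic) .. 4 (blocker)

# Hypothetical findings from walking two task scenarios
record("Create invoice", 3, "H1", "No feedback after clicking Save", 3)
record("Create invoice", 5, "H9", "Error shows code E1042 with no explanation", 4)
record("Search customers", 1, "H6", "Search hidden behind Edit > Find", 2)

# Tally violations per heuristic to see where the design is weakest
tally = Counter(f["heuristic"] for f in findings)
print(tally.most_common())  # [('H1', 1), ('H9', 1), ('H6', 1)]
```

Even this minimal structure makes it easy to merge findings from several inspectors and sort them by severity when writing up the report.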

Focus groups as a usability evaluation technique
Tue, 11 Sep 2012

A focus group brings together a group of users or other stakeholders to participate in a discussion of pre-prepared questions, led by a facilitator. A focus group can be used as a usability evaluation technique if the group is shown a demonstration of a product or prototype, and the group’s impressions and opinions are then discussed.

Focus groups might appear to be a convenient, time-saving way to get feedback from eight or ten people in a single session. In practice, however, the technique is not particularly reliable. Watching a demonstration is not the same as having the opportunity to interact with the product hands-on. And group dynamics can vary widely; different groups can come up with completely different conclusions.

Focus group discussions often tend to be dominated by one or two loud and opinionated participants, and the quieter participants often say little and go along with the group consensus. There is also the risk that the facilitator may consciously or unconsciously lead the discussion towards a particular outcome. If you choose to use focus groups, you should use them with caution and be aware of the limitations.

Analytics as a usability evaluation technique
Tue, 11 Sep 2012

Once your product has been released, understanding how it is actually being used is very valuable. Analytics refers to the use of instrumentation to record data on users’ activities, followed by analysis of the collected data to detect trends and patterns. This data can validate your assumptions about which functions are being used most frequently and which parts of the product are seldom or never used, and it may help you identify where users are running into trouble.

Some examples of the type of data that you can collect through analytics include:

  • Pages or screens visited, and time spent on each
  • Functions used, buttons and controls pressed, menu options selected, shortcut keystrokes pressed, etc.
  • Errors and failures
  • Duration of usage sessions

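As an illustrative sketch (the event names, fields, and the simulated session are all made up), the kinds of data listed above can be captured as timestamped events and aggregated afterwards:

```python
import time
from collections import defaultdict

events = []  # in a real product these would be batched and uploaded to a server,
             # and only after the user has consented to data collection

def track(session_id, kind, detail=None, ts=None):
    """Record one usage event: a screen view, a control press, or an error."""
    events.append({"session": session_id, "kind": kind,
                   "detail": detail, "ts": ts if ts is not None else time.time()})

# Simulated session: screens visited, a button pressed, one error
track("s1", "screen", "invoice_list", ts=0.0)
track("s1", "click", "new_invoice", ts=2.5)
track("s1", "screen", "invoice_editor", ts=3.0)
track("s1", "error", "save_failed", ts=40.0)

# Aggregate: how often is each function used, and which errors occur?
usage = defaultdict(int)
for e in events:
    if e["kind"] == "click":
        usage[e["detail"]] += 1
errors = [e["detail"] for e in events if e["kind"] == "error"]

session_duration = max(e["ts"] for e in events) - min(e["ts"] for e in events)
print(dict(usage), errors, session_duration)  # {'new_invoice': 1} ['save_failed'] 40.0
```

The same event stream supports all four bullet points: screen events give pages visited and time on each, click events give function usage, error events give failures, and the timestamp range gives session duration.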
Websites and web apps are well suited to logging and tracking user activities. Many web analytics packages and services can provide additional contextual data such as the user’s geographic location, whether they have visited the site before, and what search terms were used to find the site if the user visited via a search engine.

Desktop and mobile apps can also collect usage data, but because of privacy concerns and regulations, it is important to declare to the user what data you intend to collect, and you must gain the user’s permission before transmitting any usage data.

No matter what type of product you offer, privacy concerns are important, and you must ensure that your practices and Terms of Service follow the legal regulations appropriate for your jurisdiction. Tracking abstract usage data such as button presses is generally acceptable, but it is usually considered unacceptable to pry into content the user creates with the product.
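One way to honor the opt-in requirement described above is to buffer events locally and transmit nothing until the user has explicitly consented. A minimal sketch, with an invented `Telemetry` class and a simulated upload:

```python
class Telemetry:
    """Buffers usage events locally; transmits only after explicit opt-in."""

    def __init__(self):
        self.consented = False
        self.buffer = []
        self.sent = []   # stands in for data actually uploaded to a server

    def grant_consent(self):
        # Called only after the user has seen a clear declaration of what
        # data will be collected and has explicitly agreed to it.
        self.consented = True

    def record(self, event):
        self.buffer.append(event)

    def flush(self):
        """Simulated upload: discards data unless the user has consented."""
        if not self.consented:
            self.buffer.clear()   # respect the user's choice: discard, don't send
            return 0
        self.sent.extend(self.buffer)
        n = len(self.buffer)
        self.buffer.clear()
        return n

t = Telemetry()
t.record({"kind": "click", "target": "export_button"})
assert t.flush() == 0 and t.sent == []   # nothing leaves the device pre-consent
t.grant_consent()
t.record({"kind": "click", "target": "export_button"})
print(t.flush())  # 1
```

The key design choice is that the default path is the private one: without an affirmative call to `grant_consent`, flushing the buffer sends nothing.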

Understanding the process of user interface design
Fri, 29 Jun 2012

Designing a user interface for a non-trivial product is a complex task.

One traditional approach to designing and building software products was the waterfall model, where requirements are first gathered and written up in specifications documents. These are then handed off to designers, who create designs and write design specifications. These are then handed off to developers, who build the product. The product is finally handed off to testers, who verify that the product matches the specifications.

This sounds fine in theory — it’s a very logical, rational decomposition — but for large, complex products, this approach never seems to work very efficiently. On complex projects, it’s never possible for analysts, designers, and programmers to get everything correct and complete on the first try, and so waterfall projects inevitably tend to break down into a chaotic scene of documents being passed back and forth between groups for correction. And since software projects span months or years, requirements will very often change during the course of the project, meaning that by the time the product is finally built and released, it may no longer actually meet the needs of the users and stakeholders.

An effective way of bringing some order to this chaos is to recognize that complex analysis, design, and development work is never done completely or correctly on the first attempt; it takes many iterations of reviewing, revising, and testing to get it right.

An iterative approach to design and construction breaks the project into many short, structured cycles of work. In each pass around the cycle — each iteration — the work products get better and better and more complete. An advantage to this approach is that you get a basic functioning version of the product available for testing very early on in the project, and this early product can be used to discuss and further refine requirements with the project stakeholders.

Attempts to illustrate an iteration of the design cycle usually end up looking something like this:

[Figure: The Design Cycle]

This diagram is unsatisfying, though: it suggests that the activities are separate and take place sequentially, and this is not always the case. There is often constant, fluid switching between the different activities, and team members will usually be working on different activities simultaneously in parallel.

In addition, the nature of different products can enable various different design approaches:

  • For products with formal processes and very specific externally-imposed requirements, such as a tax calculator, requirements analysis and specification usually have to be figured out fairly thoroughly before detailed design can proceed.
  • On the other end of the spectrum, products such as games have no real requirements — just about anything goes, design-wise — and so requirements analysis virtually disappears.
  • Most products fit somewhere in the middle, and requirements analysis and design proceed together in a tightly meshed fashion. Sometimes requirements aren’t formally recorded at all, and instead the design is simply continually adjusted to match the new learnings about how the product should work. So in these cases, the Understand requirements and Design activities merge together.

And for products that lend themselves to rapid prototyping, often no formal design documentation is ever recorded. The prototype is the representation of the design, and so the Design and Build activities merge together.

The User-Centered Design approach recommends that you involve users in requirements gathering, and in the usability testing and evaluation of designs, prototypes, and the actual product.

In other blog posts, we’ll take a closer look at the activities in the design cycle. We’ll examine requirements analysis and validation, the process of design, prototyping, evaluating designs and prototypes, and conducting usability testing.

How to conduct user observation sessions
Thu, 14 Jun 2012

Watching real users use your product or prototype is really the only way to truly evaluate whether your design is sufficiently usable and learnable. User observation sessions quickly reveal where users have problems figuring out your product. Let’s take a look at running effective user observation sessions.

The environment

Some textbooks recommend setting up a formal usability testing lab with one-way mirrors and multiple cameras, and insist upon highly structured sessions with a full team of facilitators, observers and recorders. If you can afford this, then great, but these ideas make user observation seem more complicated and mysterious than it really is.

Not only are formal laboratory environments expensive, but they can make participants feel uncomfortable. Being watched by a team of people and recorded by cameras will make participants nervous, as if they’re performing for an audience.

To make people feel more relaxed, you can get great results simply by sitting one-on-one with a participant in front of a laptop in a neutral and comfortable setting like a coffee shop. If you’re building a product that will be used in a particular environment, try to hold the session in that environment: if you’re building enterprise software, sit down together at your users’ desks; if you’re building software for police officers on patrol, schedule meetings with officers in their police cars. Not only are people more likely to open up and discuss their opinions more freely when they’re in familiar surroundings, but you’ll also get a feel for their environment and any distractions.

Depending on your goals and budget, you may consider recording the interaction with screen recording software. Possibly, you might also consider setting up a camera to record the user’s body language, facial reactions, and their physical interactions with input devices. But you need to be aware that people act differently when they know they are being recorded. While recording a session offers the convenience of replaying and reanalyzing the recording as many times as you like, you should also not underestimate the amount of time it will take to review and analyze a batch of recordings.

Choosing goals for the session

Decide in advance what kind of data or learnings you are aiming to get. For instance, you may want to:

  • Determine what percentage of users are able to carry out a task successfully
  • Find places or situations where users get confused, hesitate, or don’t know how to proceed
  • Find places or situations where users tend to make the most errors
  • Collect impressions and suggestions from users on what works well and what could be improved
  • Collect judgements from users on the value, usefulness, attractiveness, and usability of the product
  • Get feedback on how your product compares with competing products

If you collect metrics such as the number of errors made or the average time taken to perform a task, you can compare statistics across different batches of testing sessions to test whether changes to the product have actually led to measurable improvements.
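For example (all figures below are made up), success rates and mean completion times from two batches of sessions can be summarized and compared directly:

```python
# Each entry: (task completed?, seconds taken). Figures are illustrative only.
batch_before = [(True, 95), (False, 180), (True, 120), (True, 110), (False, 200)]
batch_after  = [(True, 70), (True, 85), (True, 90), (False, 150), (True, 80)]

def summarize(batch):
    """Compute the metrics mentioned above for one batch of sessions."""
    successes = [t for ok, t in batch if ok]
    return {
        "success_rate": len(successes) / len(batch),
        "mean_time_on_success": sum(successes) / len(successes),
    }

before, after = summarize(batch_before), summarize(batch_after)
print(before)  # success rate 0.6, mean time on success ≈ 108.3 s
print(after)   # success rate 0.8, mean time on success 81.25 s
```

With only five to eight participants per batch, treat such comparisons as directional evidence rather than statistically significant results.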

Running the session

When you start a session with a participant, welcome them and briefly explain what you’re aiming to accomplish. If you will be conducting audio, video, or screen recording of the session, or if any personally identifying data will be collected, it is customary to have the participant understand and agree to this by signing a consent form.

Because they are being observed, participants often feel that they are being tested or quizzed, and participants can often become embarrassed and ashamed when they make a mistake or can’t figure out how to do something with the product. Reassure subjects that you’re not testing or evaluating them personally; instead, you’re testing and evaluating the product, and because the product is not yet perfect or complete, the goal is to find flaws and opportunities for improvement in the product. Explain that if the participant makes an error or gets stuck, it’s not their fault; rather, it’s a signal that the product needs to be improved.

How you run the session depends on what data you’re intending to collect. Typically, you will ask the user to accomplish one or more goals, and you’ll observe them as they explore the product and try to figure out how to go about doing it. Make sure you explain the goals or tasks clearly, but at the same time, try not to give too many clues as to how to accomplish them (“leading” the user).

Users will often ask for help or seek acknowledgement that they’re on the right track, asking questions such as, “Am I doing this right? Do I click here? What’s the next step?” How you offer assistance is up to you. When the user makes a false step, you may be tempted to jump in right away, but it’s better to observe how and when the user detects the error and how they recover from it.

You might choose to ask participants to “think aloud” — that is, as they try to figure out how the product works, they should try to vocalize their inner thoughts: “I want to do a search for something. Where can I do a search? I don’t see a search box anywhere. Normally it would be up in this corner over here. Maybe there’s something under one of the menus? No, there’s no Search menu. Maybe under the Edit menu? I see a Find command, but is that what I want?”  This kind of ongoing dialogue can provide useful insights, but many people find it uncomfortable and unnatural to do this. As well, if you’re trying to take notes by hand, you’ll never be able to write fast enough.

You can ask for critiques and suggestions at various stages, but also realize that not every user is in a position to give appropriate or good advice on issues like screen layout or interaction design.

Recording notes and observations

You will want to have a notepad handy where you can keep a log of the user’s actions, results, comments, questions, any long pauses indicating confusion, and so on. You should also keep a tally of errors and mistakes. If you see patterns emerging — users getting stuck at a certain point, or asking how to proceed — make a note and keep count of how many other participants encounter the same difficulty. To save time, consider preparing a template or chart you can fill out, and develop a list of short abbreviated codes to use to refer to recurring situations.

If you’re working with a high-fidelity prototype or the actual product, you might also use a stopwatch to time how long it takes a user to complete certain tasks. However, accurate timings are difficult if you’ve asked the user to “think aloud”, if questions and discussions are taking place, or if the user is pausing to let you take notes. Using a stopwatch will also put pressure on users, so again, reassure the user that it’s not a race and you’re not testing their personal performance.

If you find that your note-taking slows down the session, you may consider having another person join to take notes so you can concentrate on facilitating the session. But having multiple people managing the session can sometimes be distracting and overwhelming, and it can be unprofessional when the team members haven’t prepared and rehearsed their coordination ahead of time.

Afterwards

Be sure to thank your participant for their time and feedback. You might also ask participants to fill out a questionnaire afterwards. This gives you another chance to collect feedback (and it might give participants more time to think through their responses). If you ask for satisfaction ratings on, say, a scale of 1 to 10, you can collect quantitative data that you can compare with other batches of testing sessions.

Analyzing and communicating results

After running a batch of sessions, consolidate your notes and review any recordings. Tally and calculate any metrics, and compare any statistics to previous runs. By analyzing your notes and data, you can find problem areas, for which you can then recommend potential solutions. Put together the results and recommendations in a brief report for review in your project team.

 

How to recruit users for usability testing
Wed, 13 Jun 2012

To conduct effective usability tests, you need to find real users whom you can observe while they use your product.

If you have a consumer product intended for sale to the general public, you’ll need to make sure that your user tests involve a matching diversity of people representative of your target audience.

If you have a specialized niche product, your pool of potential users may be small, but your potential subjects will be more motivated to participate, as your product promises to solve their particular problems and is tailored to their needs. You may have a harder time finding enough suitable users in your local area and so you may need to resort to “distance testing” via teleconferencing and screen-sharing software. Industry publications, professional associations, and discussion forums catering to your target audience can be useful for recruiting suitable participants.

Here are some ideas for sources of potential users:

  • Friends and family
  • Employees in your development team or department
  • Employees from elsewhere in your organization
  • Your existing customers
  • Subscribers to your company/product newsletter or blog
  • People at trade shows and conventions
  • People in your professional network
  • People recruited through advertisements

Recruiting your friends, family, and coworkers is often easier than other methods, but statisticians call this convenience sampling, and convenience samples introduce biases into your results. Your participants may not accurately represent the cross-section of users who will actually be buying and using your product, and so you may draw incorrect conclusions from your observations and usability tests.

If you solicit participants from the general public via advertising, you’ll usually need to offer an incentive, usually cash. But be aware that this can attract a certain type of participant. And there are many people whom you may want to reach but who will never respond: not many high-powered lawyers earning $300 per hour will take an hour or two out of their busy schedule for a $50 gift certificate, for instance. And introverted individuals are less likely to sign up for usability testing sessions. Again, the main point is that you need to make sure the people you’re recruiting are a reasonable sample of your target audience, and if you suspect your sample is not representative, then you need to be aware of the potential biases.

Recruiting participants and scheduling meetings can be time-consuming, so you may want to delegate this to an assistant. You also need to plan for the fact that a shockingly large percentage of people will not show up to their appointments. Reminder phone calls the day before the appointment can help, though.

How many users do you need for a usability testing study? One or two participants are too few (though better than nothing); ten is sometimes too many, as you’ll usually see patterns emerging before then. Five to eight is a good target to aim for.
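The reasoning behind the five-to-eight rule is often illustrated with Nielsen and Landauer's problem-discovery model: if each participant independently uncovers a fixed proportion L of the usability problems (0.31 is their frequently quoted average; your product's true value will differ), then the expected share found by n participants is 1 − (1 − L)^n. A quick sketch:

```python
L = 0.31  # assumed probability that one participant uncovers a given problem

def proportion_found(n, L=L):
    """Expected share of usability problems found by n participants."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 8, 10):
    print(n, round(proportion_found(n), 2))
```

The curve climbs steeply and then flattens: under these assumptions, five participants find roughly 85% of the problems, and each additional participant adds less and less — which is why running several small batches of sessions between design revisions tends to pay off more than one large batch.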
