Architecting Usability, a blog exploring User Experience design
http://architectingusability.com

How to conduct heuristic inspections for evaluating software usability
http://architectingusability.com/2013/01/01/how-to-conduct-heuristic-inspections-software-usability/
Wed, 02 Jan 2013

Heuristics are “rule-of-thumb” design principles, rules, and characteristics that are stated in broad terms and are often difficult to specify precisely. Assessing whether a product exhibits the qualities embodied in a heuristic is thus a subjective affair.

If you inspect a prototype or product and systematically check whether it adheres to a set of heuristics, you are conducting what is called a heuristic inspection or heuristic evaluation. It is a simple, effective, and inexpensive means of identifying problems and defects and is an excellent first technique to use before moving on to more costly and involved methods such as user observation sessions.

It is usually best for a heuristic evaluation to be carried out by an experienced usability specialist, but heuristic evaluations can also be very effective when conducted by a team of individuals with diverse backgrounds (for example, domain experts, developers, and users).

To conduct a heuristic evaluation, you should choose several scenarios for various tasks that a user would perform. As you act out each of the steps of the task flows in the scenarios, consult the list of heuristics, and judge whether the interface conforms to each heuristic (if it is applicable).

Jakob Nielsen introduced the idea of heuristic evaluations, and his 1994 list of ten heuristics, reproduced below, is still the most commonly used set of heuristics today (Nielsen, 1994, p. 30):

Visibility of system status “The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.”
Match between system and the real world “The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.”
User control and freedom “Users often choose system functions by mistake and will need a clearly marked ‘emergency exit’ to leave the unwanted state without having to go through an extended dialog. Support undo and redo.”
Consistency and standards “Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.”
Error prevention “Even better than a good error message is a careful design that prevents a problem from occurring in the first place.”
Recognition rather than recall “Make objects, actions, and options visible. The user should not have to remember information from one part of the dialog to another. Instructions or use of the system should be visible or easily retrievable whenever appropriate.”
Flexibility and efficiency of use “Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.”
Aesthetic and minimalist design “Dialogs should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialog competes with the relevant units of information and diminishes their relative visibility.”
Help users recognize, diagnose, and recover from errors “Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.”
Help and documentation “Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.”
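One practical way to run such an inspection is to record each problem you find, tagged with the scenario, the step in the task flow, and the heuristic it violates, so the findings can be prioritized afterwards. The sketch below shows one possible way to structure those records; the field names and the 1–4 severity scale are illustrative assumptions, not part of Nielsen's method.

```python
from dataclasses import dataclass, field

# Nielsen's ten heuristics, used as tags for findings.
HEURISTICS = [
    "visibility of system status",
    "match between system and the real world",
    "user control and freedom",
    "consistency and standards",
    "error prevention",
    "recognition rather than recall",
    "flexibility and efficiency of use",
    "aesthetic and minimalist design",
    "help users recognize, diagnose, and recover from errors",
    "help and documentation",
]

@dataclass
class Finding:
    scenario: str   # the task scenario being walked through
    step: str       # the step in the task flow where the issue appeared
    heuristic: str  # which heuristic the interface fails to meet
    severity: int   # e.g. 1 = cosmetic ... 4 = usability catastrophe
    note: str       # short description of the problem

    def __post_init__(self):
        if self.heuristic not in HEURISTICS:
            raise ValueError(f"unknown heuristic: {self.heuristic}")

@dataclass
class Inspection:
    evaluator: str
    findings: list = field(default_factory=list)

    def record(self, **kwargs):
        self.findings.append(Finding(**kwargs))

    def by_severity(self):
        """Return findings sorted worst-first, for prioritizing fixes."""
        return sorted(self.findings, key=lambda f: -f.severity)
```

When several evaluators inspect independently, their `Inspection` objects can simply be merged before sorting, which is in keeping with the common practice of aggregating findings across inspectors.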

An obvious weakness of the heuristic inspection technique is that the inspectors are usually not the actual users. Biases, pre-existing knowledge, incorrect assumptions about how users go about tasks, and the skill or lack of skill of the inspectors are all factors that can skew the results of a heuristic inspection.

Heuristic inspections can also be combined with standards inspections or checklist inspections, where you inspect the interface and verify that it conforms to documents such as style guides, platform standards guides, or specific checklists devised by your project team. This can help ensure conformity and consistency throughout your application.

The impact of hardware devices on software ergonomics
http://architectingusability.com/2012/08/19/the-impact-of-hardware-devices-on-software-ergonomics/
Sun, 19 Aug 2012

A product that is ergonomic is designed in a way that helps reduce physical discomfort, stress, strain, fatigue, and potential injury during operation. While ergonomics is usually associated with physical products, the design of a software application’s interface also influences the way the user physically interacts with the hardware device on which the application runs. And ergonomics extends to the cognitive realm as well, as we seek to design software that helps people work more productively and comfortably, by reducing the dependence on memorization, for example.

To create an ergonomically sound software application, it is important to first think about the properties and the context of use of the hardware device on which the application will run. For the majority of consumer and business applications, there are currently three main forms of general-purpose personal computing devices:

  • Desktop and laptop computers with a screen, keyboard, and a pointing device such as a mouse or trackpad are comfortable for users sitting at a desk for a long period of time.
  • Tablet devices with touchscreens have a form factor that is comfortable for sitting and consuming content (reading webpages, watching movies, etc.), but entering information and creating content via touch-screen control is generally not as comfortable and convenient as with a desktop machine.
  • Mobile phones (and similar devices such as portable music players) are usually used for short bursts of activity while on the go.

For more specialized applications, you might have a combination of software and custom-designed, special-purpose hardware. Examples include a machine that sells subway tickets, an automated teller machine, or an industrial thermostat control. If you are a designer for such a product, you may have responsibility for designing the form of the physical interface in addition to the software.

To give you an idea of some of the practical ergonomic aspects that you should keep in mind when designing for different devices, let’s compare desktop computers with touchscreen tablets:

  • Tablet devices with multi-touch touchscreens are pleasant and fun to use from an interaction standpoint because you can interact directly with on-screen elements by touching them with your finger. Desktop machines (as of this writing) generally don’t offer touchscreens, as reaching your arm out to the monitor places strain on the arm and shoulder muscles and would quickly become physically tiring. Desktop setups thus rely on pointing devices such as a mouse or a trackpad. These pointing devices introduce a level of indirection, however: moving the pointing device moves a cursor on the screen.
  • On desktop systems, there is a pointing device cursor (mouse arrow), whereas touchscreen devices have no such cursor. Some mouse gestures, like hovering the cursor over a control, thus have no counterpart in touchscreen systems. On both desktop and touchscreen systems, however, a text cursor (caret) appears when a text field receives the focus.
  • While a mouse may have multiple buttons, and clicks can be combined with holding down modifier keys (Control/Alt/Command/Shift), touchscreens don’t offer as many options. When you drag your finger across the screen, is it to be interpreted as a scrolling gesture, or an attempt to drag and drop an object on the screen? Cut-and-paste and right-clicking to get a context menu are easy on a desktop machine, but on a tablet, such operations require double-touch or touch-and-hold gestures that are not always evident.
  • Fingers range in size substantially; young children have small, narrow fingertips, whereas some men have very thick, fat fingers. Touchscreen buttons and icons thus must be large enough to accommodate “blunt” presses without triggering other nearby controls. In contrast, the mouse arrow allows pixel-precise pointing, and so buttons and icons can be substantially smaller on desktop applications than on touchscreen devices.
  • When the user is touching something on the screen, the user’s finger and hand will obscure part of the screen, so you have to be careful about what you display and where, so that important information is not hidden. When pressing an on-screen button, the user’s fingertip will obscure the button being pressed. Because button presses don’t always “register”, users seek visual feedback to confirm that the press worked. You therefore either need to make buttons large enough that the animation of the button being depressed remains visible, or give some other cue once the user retracts the finger to show that the press succeeded; for example, pressing a Next button might make the application navigate to the next screen, which is very clear feedback that the button press was successful. Auditory feedback, like a clicking sound, can also be a useful cue.
  • Mobile devices and tablet devices are often held by the user in one hand while standing, and so the user has only the other hand free to operate the touchscreen.
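The touch-target sizing point above can be made concrete with a little arithmetic. Platform guidelines commonly recommend touch targets on the order of 7–10 mm; to translate such a physical size into pixels for a particular screen, you divide by the screen's pixel density. The helper below is a simple sketch of that conversion; the 9 mm figure and the 326 ppi density in the example are illustrative, not prescriptions.

```python
import math

MM_PER_INCH = 25.4

def min_touch_target_px(target_mm: float, ppi: float) -> int:
    """Convert a physical touch-target size in millimetres to physical
    pixels for a screen of the given density (pixels per inch)."""
    return math.ceil(target_mm * ppi / MM_PER_INCH)

# A ~9 mm target on a hypothetical 326 ppi phone screen:
print(min_touch_target_px(9, 326))   # 116 physical pixels
```

The same physical target size maps to very different pixel counts on a low-density desktop monitor versus a high-density phone screen, which is why pixel-based sizes alone are a poor way to specify touch targets.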

When designing a product, understanding the constraints and limitations, as well as the opportunities, of the hardware devices the software will run on will help you design appropriate and comfortable interactions.

Designing an interaction framework for your application’s tasks
http://architectingusability.com/2012/07/17/designing-an-interaction-framework-for-your-applications-tasks/
Wed, 18 Jul 2012

Many applications are centered around a set of features, tasks, actions, operations, business processes, or use cases that share a similar pattern of interaction. For example:

  • A paint program has a toolbar or palette with various drawing tools. Clicking on a tool selects it, and then the user operates on the canvas with that tool until a different tool is selected.
  • A game might have a number of levels. Each level has a different map, but all of the levels have essentially the same gameplay, scoring, and success criteria for moving on to the next level.
  • A workflow-driven human resources management system might have different business processes for business events like scheduling job interviews, hiring an employee, recording employee evaluations, or adjusting employee benefits. Each business process can consist of multiple stages or subtasks that require action and approval by different users. Each business process is started by selecting it from a menu, and a business process will have an “active” status until a terminating condition is reached.

If your application has a set of similar tasks, you will first want to create a list to keep track of them.

You can then design an interaction framework that describes the commonalities of the user interface and behavior for those tasks.

Some of the issues you should consider include:

  • the means by which the tasks are started or triggered (e.g., selection from a menu);
  • which groups of users are authorized to initiate which tasks;
  • conditions under which the task can be activated, or cases where it may be disabled;
  • how the task is ended or deemed to be complete;
  • whether the initiation or end of a task changes any statuses or modes;
  • whether the end of the task leads to follow-up tasks; and
  • the effect that the task has on the data in the system; for example, upon task completion, the data may be saved persistently, whereas if the task is abandoned or cancelled, the data will not be saved. (These types of considerations can form part of the transaction/persistence concept.)
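The considerations above can also be expressed in code, as a common base class that every task shares. The sketch below is a minimal illustration of that idea, not a prescribed design: the class names, the role-based authorization check, and the persist-on-complete/discard-on-cancel behavior are all assumptions standing in for whatever your project's transaction/persistence concept specifies.

```python
from abc import ABC, abstractmethod
from enum import Enum

class TaskStatus(Enum):
    INACTIVE = "inactive"
    ACTIVE = "active"
    COMPLETED = "completed"
    CANCELLED = "cancelled"

class Task(ABC):
    """Captures the commonalities listed above: how a task is started,
    who may start it, and what happens to pending data changes when the
    task completes or is cancelled."""

    def __init__(self, user):
        self.user = user
        self.status = TaskStatus.INACTIVE
        self._pending_changes = []

    @abstractmethod
    def allowed_roles(self):
        """Return the set of user roles permitted to initiate this task."""

    def start(self):
        if self.user.role not in self.allowed_roles():
            raise PermissionError(f"{self.user.role} may not start this task")
        self.status = TaskStatus.ACTIVE

    def complete(self):
        # On completion, pending changes are saved persistently.
        self.persist(self._pending_changes)
        self.status = TaskStatus.COMPLETED

    def cancel(self):
        # On cancellation, pending changes are discarded, not saved.
        self._pending_changes.clear()
        self.status = TaskStatus.CANCELLED

    def persist(self, changes):
        pass  # delegate to the application's persistence layer
```

A concrete task, such as the "hiring an employee" business process mentioned earlier, would subclass `Task`, declare its allowed roles, and inherit the shared start/complete/cancel behavior, which is exactly the kind of technical "platform" the framework document helps the development team build.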

Designing an interaction framework helps ensure that you really understand how your application fundamentally works. It ensures consistency across similar tasks, which helps users perceive patterns and form correct mental models.

Documenting the commonalities amongst the tasks in an interaction framework also saves you from having to re-document the same aspects for each individual task. The interaction framework will also be critical for helping the development team design and build the technical “platform” on which the various tasks can be implemented.

Understanding the technology framework for building your product’s user interface
http://architectingusability.com/2012/06/23/understanding-the-technology-framework-for-building-your-products-user-interface/
Sat, 23 Jun 2012

If you are designing the user interface for an application, you will likely begin with rough conceptual sketches, but at a certain point, in order to create detailed designs and high-fidelity prototypes, you will need to know what software framework or technology will be used for building the user interface.

The user interface framework will provide user interface controls or “widgets” — buttons, text fields, drop-down boxes, and so on. Knowing what set of controls you have to work with and what they look like is obviously important for design and prototyping. For the purposes of visual design, it is also good to know the degree to which the look-and-feel of the controls can be adjusted and customized, and what mechanisms are used for managing the page layout.

The technology framework that will be used for implementing the user interface can impose constraints on your designs. In particular, different web application frameworks can vary widely in their capabilities. For instance, some frameworks offer the ability to present modal popup dialogs, while others do not; older frameworks may not support partial page refreshes, requiring entire pages to be reloaded to show new data. The Oracle ADF framework, as of this writing, does not offer any means for disabling or hiding options in pull-down menus.

If the framework chosen for your product has limitations, you will need to be aware of them and find ways to work around them — but these workarounds can often impact usability. If the problems are serious enough, you may need to reconsider the choice of framework, and it’s better to discover and decide this early on in the project, rather than later, after most of the product has already been built. Thus getting a good understanding of the framework and its capabilities and limitations is a critical early step in user interface design.

In some projects, UX designers may have some input into which user interface framework will be chosen for the project. But in most projects, technical architects choose the technology stack — the set of frameworks that will be used to develop the software, including the user interface layer — based on technical considerations, cost analyses, political factors, and sometimes, personal preference. The UX designer, who is typically brought in at a later stage in the project, is then left to design interfaces that are implementable with the chosen technology.

The choice of user interface framework should be based on careful consideration of the requirements, or at least on the best available understanding of the requirements at that early stage of the project, and ideally a user experience designer should be involved in determining those requirements.
