Donald Norman, in his book The Design of Everyday Things, introduced several basic user interface design principles and concepts that are now considered critical for understanding why some designs are more usable and learnable than others:
Consistency
One of the major ways that people learn is by discovering patterns. New situations become more manageable when existing pattern knowledge can be applied to understanding how things work. Consistency is the key to helping users recognize and apply patterns.
Things that look similar should do similar things. For example, if we learn that protruding surfaces with labels on them are buttons that can be pressed, then the next time we see a new protruding surface with a label on it, we’ll tend to recognize it as a pressable button.
Likewise, behavior and conventions should be consistent across similar tasks and actions. QuickBooks makes a chime sound whenever a record is successfully saved, and this is consistent no matter whether you’re editing an invoice, a cheque, or a quote.
Inconsistency causes confusion, because things don’t work the way the user expects them to. Forcing users to memorize exceptions to the rules increases the cognitive burden and causes resentment. Attention to consistency is important for instilling confidence in the product, because it gives the impression that there is a logical, rational, trustworthy designer behind the scenes.
For desktop and mobile applications, you should aim to understand and conform to the user interface guidelines for your operating system or platform. Consistency with these standard conventions reduces the number of new things a user needs to learn.
Visibility
Users discover what functions can be performed by visually inspecting the interface and seeing what controls are available. For tasks that involve a series of steps, having clearly marked controls in a visible location can help the user figure out what to do next.
The principle of visibility suggests that usability and learnability are improved when the user can easily see what commands and options are available. Controls should be made clearly visible, rather than hidden, and should be placed where users would expect them to be. Placing controls in unexpected, out-of-the-way locations is tantamount to hiding them.
Functionality that has no visual representation can be hard to discover. For example, keyboard shortcuts save time for expert users, but when a keyboard shortcut is the only way to activate a command, a new user has no way of discovering it, except by accident or by reading the reference documentation.
The principle of visibility should not necessarily be interpreted to mean that every possible function should have a prominent button on the screen — for any complex application, there would be so many buttons that the screen would become crowded and cluttered, and it would be difficult to find the right button. Pull-down menus are an example of a compromise: the commands are visible when the menus are opened, but remain tucked out of sight most of the time. And in a full-featured application, you may want to consider presenting only the commands and controls that are relevant to the user’s present context. In Photoshop, for example, one of the toolbars shows the settings for the current drawing tool and omits any settings that are irrelevant for that tool.
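The context-sensitive toolbar idea can be sketched as a simple lookup from the active tool to the settings worth displaying. The tool and setting names below are invented for illustration, not taken from Photoshop:

```python
# Hypothetical sketch: show only the settings relevant to the active tool,
# so the rest of the interface stays uncluttered.

TOOL_SETTINGS = {
    "brush": ["size", "hardness", "opacity"],
    "text": ["font", "size", "alignment"],
    "eraser": ["size", "hardness"],
}

def visible_settings(active_tool):
    """Return the settings the options bar should display for this tool."""
    return TOOL_SETTINGS.get(active_tool, [])

print(visible_settings("text"))  # ['font', 'size', 'alignment']
```

The point of the table-driven design is that adding a new tool only means adding one entry; the display logic itself never changes.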
Affordance
An affordance is a visual attribute of an object or a control that gives the user clues as to how the object or control can be used or operated.
The standard example used for explaining affordance is the door handle. A round doorknob invites you to turn it. A flat plate invites a push, swinging the door outwards. A graspable handle invites a pull, opening the door towards you.
Applying the concept of affordance to graphical user interfaces, you can use visual cues to make controls look clickable or touchable. One common technique is to make buttons and other controls look “three-dimensional” and raised off the screen by using colors to simulate light and shadows. Dials and sliders can be made to look like real-world controls. Whenever possible, you should use the standard controls (“widgets”) provided by the operating system or platform; this makes the controls easily recognizable because the same controls are used by other applications.
Design conventions are another means of providing affordance cues. For example, underlined blue text is a standard convention for representing a textual link on a webpage. There is nothing inherently obvious about underlined blue text that makes it mean “I’m a clickable link”, but it is a widely-used standard that users have learned.
In desktop systems with pointing devices, another way of showing affordance is to change the shape of the mouse pointer when the mouse pointer is moved over a control. Tooltips, or small pop-up messages that appear when the mouse pointer hovers over a control, can provide some additional assistance.
Some interactions are particularly hard to signal with affordances: for example, indicating that an element can be dragged and dropped, or that an element has a context menu available by right-clicking on it.
Mapping
Pressing a button or activating a control generally triggers the system to perform some function. There is a relationship, or mapping, between a control and its effects. You should always aim to make these mappings as clear and explicit as possible. You can do this by using descriptive labels or icons on buttons and menu items, and by using controls consistently (again, similar controls should have similar behavior and effects).
Controls should also be positioned in logical ways that match real-world objects or general conventions. For instance, it’s obvious that a slider control to adjust the left-right balance of stereo speakers should increase the volume of the left speaker when the slider is moved to the left. Or if you have an ordered list or a sequence of steps, these should obviously be positioned in left-to-right or top-to-bottom order.
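The balance-slider mapping can be written down directly: the slider position maps to the two speaker gains so that moving left boosts the left channel, matching the physical layout. This is a minimal sketch with an assumed convention of -1.0 for full left and +1.0 for full right:

```python
def stereo_gains(balance):
    """Map a balance slider position (-1.0 = full left, +1.0 = full right)
    to (left, right) speaker gains in the range 0.0 to 1.0.

    The mapping is "natural": moving the slider toward the left speaker
    raises the left gain and lowers the right gain.
    """
    balance = max(-1.0, min(1.0, balance))   # clamp to the slider's range
    left = min(1.0, 1.0 - balance)           # full left keeps left at 1.0
    right = min(1.0, 1.0 + balance)          # and silences the right channel
    return left, right
```

A reversed mapping (left movement boosting the right speaker) would be just as easy to implement, which is exactly why the designer has to check it against the real-world layout.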
The flight stick in an aircraft or flight simulator might be considered by some to have a non-conventional mapping. Pulling the stick back and towards you causes the aircraft’s nose to point upward, while pushing the stick forward and away from you causes the nose to point downward. This is because the stick controls the elevators on the tail: pulling back deflects the elevators upward, which pushes the tail down and pitches the nose up. The mapping becomes natural for a trained pilot, but can initially seem backwards to new pilots.
Feedback
If you press a button and nothing seems to happen, you’re left wondering whether the button press actually registered. Should you try again? Or is there a delay between the button press and the expected action?
The principle of feedback suggests that you should give users confirmation that an action has been performed successfully (or unsuccessfully). We might distinguish between two types of feedback:
- Activational feedback is evidence that the control was activated successfully: a button was pressed, a menu option was selected, or a slider was moved to a new position. This evidence could be provided with visual feedback: an on-screen button can be animated to give the appearance of being depressed and released. Physical controls can provide tactile feedback; for instance, you can feel a button clicking as you press and release it. Auditory feedback could also be provided in the form of a sound effect.
- Behavioral feedback is evidence that the activation or adjustment of the control has now had some effect in the system; some action has been performed with some result. For example, in an e-mail client, after clicking the “Send” button, you may get a confirmation message in a pop-up dialog, and the e-mail will be listed under the “Sent” folder.
One business system I’ve encountered offered a menu option for generating a report. When this menu option was selected, nothing appeared to happen. Because no feedback was provided, it was unclear whether the report generation was triggered correctly, and if it was, it was unclear whether the report was generated successfully. It turned out that a report file was created in a certain location in the file system, but the system did not tell the user where to find this file. This system could be improved by giving the user feedback: at the minimum, a confirmation should be provided indicating that the report was created successfully, and better yet, the report should be automatically opened for viewing as soon as it is available.
Constraints
Interfaces should be designed with constraints that keep the system from entering an invalid state. Constraints, or restrictions, prevent invalid data from being entered and invalid actions from being performed.
Constraints can take many forms. Here are some examples:
- A diagramming tool for drawing organizational charts will prevent the boxes and lines from being dragged-and-dropped and rearranged into configurations that are not semantically legal.
- Word processors disable the “Copy” and “Cut” commands when no text is currently selected.
- The dots-per-inch setting on a scanning application is often controlled by a slider that restricts the chosen value to be within a range such as 100 to 400 dpi. This is a good example of a control that can show a constraint visually.
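Two of the examples above reduce to very small pieces of logic: clamping a value to the slider's legal range, and enabling a command only when its precondition holds. A sketch, with the 100–400 dpi range taken from the scanner example:

```python
def set_dpi(value, lo=100, hi=400):
    """Constrain the scan resolution to the slider's legal range.

    Because the stored value is always clamped, an out-of-range
    resolution can never be entered, no matter what the user does.
    """
    return max(lo, min(hi, value))

def copy_enabled(selection):
    """Enable the Copy/Cut commands only when some text is selected."""
    return len(selection) > 0
```

In a real application the same checks would drive the widgets themselves (the slider's min/max, the menu item's enabled state), so the constraint is visible rather than enforced after the fact.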