User-based evaluation of a working system, where the primary
objective is to identify usability problems.
- Major usability problems are identified.
- An understanding is gained of why the user has difficulties with the system.
- Approximate measures can be obtained for the users' effectiveness, efficiency and satisfaction.
- It is important that the users, tasks and environment
used for the test are representative of the intended context of use.
- Select the most important tasks and user group(s) to be
tested (e.g. the most frequent or the most critical).
- Select users who are representative of the user group(s).
3-5 users are sufficient to identify the main issues. 8
or more users of each type are required for reliable measures.
For complex systems such as an e-commerce web site, larger
numbers may be required to explore all aspects of the system.
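The "3-5 users" rule of thumb is often illustrated with the problem-discovery model 1 - (1 - p)^n, where p is the chance that a single user encounters a given problem. A minimal sketch, assuming the commonly quoted average of p = 0.31 (the formula and this value are not from this document):

```python
# Expected fraction of usability problems found by n test users,
# using the rule-of-thumb discovery model 1 - (1 - p)^n.
# p = 0.31 is a commonly quoted average, not a fixed constant.

def problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected proportion of problems discovered by n_users."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 8):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

With these assumptions, five users uncover roughly 84% of problems, which is why a handful of users suffices for finding the main issues while reliable measures need more.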
- Consider using user-defined tasks, where users are asked
to define their own goals prior to the evaluation session.
- Produce task scenarios and
input data and write instructions for the user (tell the
user what to achieve, not how to do it).
- Plan sessions allowing time for giving instructions, running
the test, answering a questionnaire, and a post-test interview.
- Invite developers to observe the sessions if possible.
An alternative is to videotape the sessions, and show developers
edited clips of the main issues.
- Two administrators are normally required to share the
activities of instructing and interviewing the user, operating
video equipment (if used), noting problems, and speaking
to any observers.
- If possible use one room for testing, linked by video
to another room for observation.
- If usability measures are required, observe the user without
making any comments.
- If measures are not required, prompt the user to explain
their interpretation of the contents of each screen and
their reason for making choices.
- Welcome the user, and give the task instructions.
- Do not give any hints or assistance unless the user is
unable to complete the task.
- Observe the interaction and note any problems encountered.
- If required, time each task.
- Either ask the user to think aloud, or prompt the user
to explain their interpretation of the contents of each
screen and their reason for making choices.
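If task times are required, per-task timing during the session can be sketched as below (the task names and the simulated "user action" are hypothetical; in a real session the administrator starts and stops the clock):

```python
# Minimal sketch of per-task timing in a test session.
# Task names and the stand-in user action are illustrative only.
import time

def run_task(name: str, perform) -> float:
    """Time a single task attempt and return elapsed seconds."""
    start = time.monotonic()
    perform()  # the user attempts the task here
    return time.monotonic() - start

timings = {}
for task in ("find product", "add to basket", "check out"):
    timings[task] = run_task(task, lambda: time.sleep(0.01))

print({task: f"{secs:.2f}s" for task, secs in timings.items()})
```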
- At the end of the session, ask the user to complete a satisfaction questionnaire.
- Interview the user to confirm they are representative
of the intended user group, to gain general opinions, and
to ask about specific problems encountered.
- Assess the results of the task for accuracy and completeness.
- Produce a list of usability problems, categorised by importance
(use sticky notes to sort the
problems), and an overview of the types of problems encountered.
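The sorting and overview steps above can be sketched in code as well as with sticky notes (the severity categories and example problems are hypothetical, not from the text):

```python
# Illustrative sketch: rank observed usability problems by importance
# and produce an overview of the types of problems encountered.
from collections import Counter

problems = [  # hypothetical observations from test sessions
    {"desc": "Search button hard to find", "type": "navigation", "severity": "major"},
    {"desc": "Error message unclear",      "type": "feedback",   "severity": "major"},
    {"desc": "Font too small in footer",   "type": "layout",     "severity": "minor"},
]

order = {"major": 0, "minor": 1}
ranked = sorted(problems, key=lambda p: order[p["severity"]])
overview = Counter(p["type"] for p in problems)

for p in ranked:
    print(f"[{p['severity']}] {p['desc']}")
print("Problem types:", dict(overview))
```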
- Arrange a meeting with the project manager and developer
to discuss whether and how each problem can be fixed.
- If measures have been taken, summarise the results of
the satisfaction questionnaire, task time and effectiveness
(accuracy and completeness) measures.
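The summary step above can be sketched as a small script. The data is invented, and the exact effectiveness formula (completeness multiplied by accuracy) is an assumption; the text names the measures but not how to combine them:

```python
# Sketch of summarising the measures named in the text: satisfaction
# questionnaire scores, task times, and effectiveness (accuracy and
# completeness). All records below are hypothetical.
from statistics import mean

results = [  # one record per user for a single task
    {"time_s": 95,  "completeness": 1.0, "accuracy": 0.9, "satisfaction": 4.2},
    {"time_s": 120, "completeness": 0.8, "accuracy": 1.0, "satisfaction": 3.8},
    {"time_s": 80,  "completeness": 1.0, "accuracy": 1.0, "satisfaction": 4.5},
]

# Assumed combination: effectiveness = completeness x accuracy, averaged.
effectiveness = mean(r["completeness"] * r["accuracy"] for r in results)
mean_time = mean(r["time_s"] for r in results)
satisfaction = mean(r["satisfaction"] for r in results)

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Mean task time: {mean_time:.0f}s")
print(f"Mean satisfaction: {satisfaction:.1f}/5")
```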
- If a full report is required, the Common
Industry Format provides a good structure.
More information: see resources on usability testing and usability labs. The
Performance Measurement Method provides detailed instructions
for measuring effectiveness, efficiency and satisfaction.
The degree of formality of the session depends on the relative
importance of understanding usability problems and obtaining
usability measures. To obtain accurate performance
measures, the more formal version of the procedure should
be used (where the context of evaluation carefully matches
the intended context of use, and there is no interaction with
the user during testing). At early stages and with complex
systems (such as web sites) most benefit is obtained by participatory
evaluation, obtaining a detailed understanding of how
the user is thinking. Later in development the user may just
be asked to think aloud in order to get a more realistic assessment
of their behaviour.
User-based evaluation can be complemented by expert
or heuristic evaluation.
Once improvements have been made, evaluate the new version
of the system; if it is the final evaluation, consider using
the more formal version of the procedure to obtain usability measures.
Dumas, J.S. and Redish, J.A. (1999) A Practical Guide
to Usability Testing. Intellect Books.
Rubin, J. (1994) Handbook of Usability Testing. John
Wiley and Sons, New York, NY.
©UsabilityNet 2006. Reproduction permitted provided the source is acknowledged.