Heuristic evaluation is a form of usability inspection where usability specialists judge whether each element of a user interface follows a list of established usability heuristics. Expert evaluation is similar, but does not use specific heuristics.
Usually two to three analysts evaluate the system against established guidelines or principles, noting down their observations and often ranking them by severity. The analysts are usually experts in human factors or HCI, but less experienced evaluators have also been shown to report valid problems.
A heuristic or expert evaluation can be conducted at various stages of the development lifecycle, although it is preferable to have already performed some form of context analysis to help the experts focus on the circumstances of actual or intended product usage.
The method is used to identify usability problems based on established human factors principles, and it provides recommendations for design improvements. However, because the method relies on experts rather than users, the output will naturally emphasise interface functionality and design rather than the properties of the interaction between an actual user and the product.
The panel of experts must be established in good time for the evaluation, and the materials and equipment for the demonstration should be in place. All analysts need sufficient time to become familiar with the product in question and with the intended task scenarios, and they should operate by an agreed set of evaluative criteria.
The experts should be aware of any relevant contextual information relating to the intended user group, tasks and usage of the product. A heuristics briefing can be held to ensure agreement on a relevant set of criteria for the evaluation, although this may be omitted if the experts are familiar with the method and operate by a known set of criteria.
The experts then work with the system, preferably using mock tasks, and record their observations as a list of problems. If two or more experts are assessing the system, they should not communicate with one another until the assessment is complete. After the assessment period, the analysts can collate the problem lists, and the individual items can be rated for severity and/or safety criticality.
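As an illustration of the collation step, here is a minimal sketch that merges each expert's independently recorded problem list and ranks the pooled items by mean severity. The problem names are invented and the 0-4 severity scale is Nielsen's commonly used one; in practice collation also involves discussion to reconcile different descriptions of the same underlying problem.

```python
from collections import defaultdict

# Hypothetical per-expert problem lists: each expert independently maps
# a problem description to a severity rating on Nielsen's 0-4 scale.
expert_lists = [
    {"no undo on delete": 4, "inconsistent button labels": 2},
    {"no undo on delete": 3, "error messages use jargon": 3},
    {"inconsistent button labels": 1, "error messages use jargon": 3},
]

# Pool the lists: collect every rating given to each distinct problem.
ratings = defaultdict(list)
for report in expert_lists:
    for problem, severity in report.items():
        ratings[problem].append(severity)

# Produce the collated list, most severe first, by mean rating.
for problem, scores in sorted(ratings.items(),
                              key=lambda kv: -(sum(kv[1]) / len(kv[1]))):
    mean = sum(scores) / len(scores)
    print(f"{mean:.1f}  {problem}  (reported by {len(scores)} of 3 experts)")
```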
A list of identified problems is produced, which may be prioritised by severity and/or safety criticality.
In terms of summative output, the method can also provide the number of problems found, the estimated proportion of problems found relative to the theoretical total, and the estimated number of new problems expected to be found by adding a specified number of further experts to the evaluation.
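These estimates usually derive from the model in Nielsen and Landauer (1993), cited below, which assumes that each evaluator independently finds any given problem with a fixed probability lambda, so that i evaluators are expected to uncover a proportion 1 - (1 - lambda)^i of the theoretical total. The sketch below shows how the summative figures might be computed; the default lambda of 0.31 is the average Nielsen and Landauer report across their case studies and is used here for illustration only.

```python
# Sketch of the Nielsen & Landauer (1993) problem-discovery model.
# Assumption: each evaluator independently finds any given problem
# with probability lam; lam = 0.31 is an illustrative default.

def proportion_found(evaluators, lam=0.31):
    """Expected proportion of all problems found by this many evaluators."""
    return 1 - (1 - lam) ** evaluators

# Example: 3 experts found 25 distinct problems between them.
found = 25
total = found / proportion_found(3)           # estimated theoretical total, ~37
extra = total * proportion_found(5) - found   # expected gain from 2 more experts, ~6

print(f"Estimated total problems: {total:.0f}")
print(f"Expected new problems from 2 more experts: {extra:.0f}")
```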
A report detailing the identified problems is written and fed back to the development team. The report should clearly define the ranking scheme used if the problem lists have been prioritised, for example Nielsen's five-point severity scale running from 0 (not a usability problem) to 4 (usability catastrophe).
Nielsen, J. How to Conduct a Heuristic Evaluation.
Three to five experts are recommended for a thorough evaluation. A quick review by a single expert (often without reference to specific heuristics) is common before a user-based evaluation, to identify potential problems in advance.
If usability experts are not available, other project members can be trained to use the method, which is useful in sensitising project members to usability issues.
Bias, R.G. & Mayhew, D.J. (Eds.) (1994). Cost-Justifying Usability. Academic Press, pp. 251-254.
Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proc. ACM CHI '92 (Monterey, CA, 3-7 May), pp. 373-380.
Nielsen, J. & Landauer, T.K. (1993). A mathematical model of the finding of usability problems. Proc. INTERCHI '93 (Amsterdam, NL, 24-29 April).
© UsabilityNet 2006. Reproduction permitted provided the source is acknowledged.