(testing & post-release)
Subjective assessment tells the evaluator how the users feel
about the software being tested. This is distinct from how
efficiently or effectively they perform with the software.
The usual method of assessment is to use a standardised opinion
questionnaire, which avoids criticisms of subjectivity.
- In a discretionary use scenario, user satisfaction is
probably the single largest factor influencing the users'
decision whether or not to continue with the
software (other key factors may include price, technology,
and brand loyalty).
- In a mandatory use scenario, poor satisfaction leads to
absenteeism, high staff turnover, and unrelated complaints
from the workforce.
- Subjective Assessment complements data from efficiency
and effectiveness measures.
- Subjective assessment usually produces a list of satisfying
and unsatisfying software features, which is especially useful
if testing takes place during development.
This method gives the evaluator information about how the
users feel about using the software being evaluated. This
should be distinguished from:
- how well they perform with the software (effectiveness)
- how efficiently they work with the software (efficiency)
It is customary to use a closed-ended questionnaire if one
is available, in order to produce quantitative data; otherwise
the results of the activity can be vague and open to interpretation.
At worst, a critical-incidents technique may be used.
Identify the questionnaire you will use. This is not the
time to start developing your own questionnaire. See the FAQ
about questionnaires (see below) for more information on this.
There are two scenarios in which subjective assessment may
be carried out:
- respondents are asked to fill out a questionnaire immediately
after a usability testing session;
- respondents are sent a questionnaire by post or email
and are asked to rate the product in the light of their
experience with it using the questionnaire.
A variant of the second scenario is that a supervisor or manager distributes
questionnaires to their work-group, collects them when completed,
and returns them to the evaluator. In either case, make a
list of users who will fill out the questionnaire.
You will most probably also need a "screening questionnaire"
to get some background data on each respondent (e.g. computer
experience, job level, frequency of use of the software being
evaluated). This may come out of the Context of Use study.
An example of a general-purpose screening questionnaire may
be found on the SUMI homepage.
It is usually not enough to discover satisfaction levels.
You should also find out which features of the software give
rise to particularly high or low levels of satisfaction.
To do this you should ideally have access to a smaller sub-sample
of your original sample for a second stage of testing
involving an interview. Some questionnaires yield quite good
diagnostic data on their own, so this second stage is less
important if you use one of them.
Ensure that everyone in your target population has been given a
questionnaire to fill out. If you are doing a mail-shot, expect
a response rate of approximately 20%. This may be higher if you
are doing it via the web. You can increase your response rate
to nearly 100% with personal reminder calls. If you are
using the questionnaire as part of a lab test, ensure that
the completed questionnaires can be associated with all
the other data (time scores, tapes, sample outputs, etc.) from
the respondents, so you don't have questionnaires which you
cannot match up with the rest of the data.
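The response-rate figures above imply a simple planning calculation: to obtain a target number of completed questionnaires, divide the target by the expected response rate and round up. A minimal sketch (the 20% mail-shot rate is the illustrative figure from the text, not a universal constant):

```python
import math

def questionnaires_to_send(target_responses: int, response_rate: float) -> int:
    """Number of questionnaires to distribute, given an expected response rate."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    return math.ceil(target_responses / response_rate)

# Mail-shot at the ~20% rate mentioned above:
# 50 completed forms require 250 questionnaires sent.
print(questionnaires_to_send(50, 0.20))  # -> 250
```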
It is usual not to prompt the respondent on how to reply. If
a respondent complains that a question is inapplicable or
wrong, reply to the effect that it is up to them
to make their own judgement about each question, and that
there are no right or wrong answers. Encourage them not to
skip questions.
Subjective questionnaires usually have a scoring scheme, since
they are mostly based on Likert-style scaling techniques (see
the FAQ on questionnaires for more information about this).
For most questionnaires, you simply sum the obtained
scores for each respondent. Some questionnaires have separate
sub-scales which must be summed independently. The SUMI and
WAMMI questionnaires have their own scoring software, which
puts the scored data into a standard report.
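The summing described above can be sketched in a few lines. The item-to-subscale mapping and the reverse-keyed items below are entirely hypothetical; a real instrument such as SUMI defines its own scheme and supplies its own scoring software:

```python
# Hypothetical Likert scoring sketch: items rated 1..5, some reverse-keyed,
# grouped into sub-scales that are summed independently.

SCALE_MAX = 5  # a 5-point Likert scale is assumed

# Hypothetical mapping of item numbers to sub-scales.
SUBSCALES = {
    "efficiency": [1, 2, 3],
    "affect": [4, 5, 6],
}
REVERSED_ITEMS = {2, 5}  # hypothetical negatively-worded items

def score_respondent(answers: dict[int, int]) -> dict[str, int]:
    """Sum each sub-scale, flipping reverse-keyed items first."""
    def keyed(item: int) -> int:
        raw = answers[item]
        return (SCALE_MAX + 1 - raw) if item in REVERSED_ITEMS else raw
    return {name: sum(keyed(i) for i in items)
            for name, items in SUBSCALES.items()}

answers = {1: 4, 2: 2, 3: 5, 4: 3, 5: 1, 6: 4}
print(score_respondent(answers))  # -> {'efficiency': 13, 'affect': 12}
```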
In the readable body of the report, include
- summary statistics
- diagnostic data on the software features which in your
opinion influenced the subjective assessment.
Include the full scored data in an appendix.
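For the summary statistics in the report body, per-subscale means, medians, and standard deviations over all respondents are a reasonable minimum. A sketch using only the standard library, with hypothetical sub-scale totals:

```python
import statistics

# Hypothetical sub-scale totals for a handful of respondents.
scores = {
    "efficiency": [13, 11, 15, 9, 12],
    "affect": [12, 14, 10, 13, 11],
}

for name, values in scores.items():
    print(f"{name}: n={len(values)} "
          f"mean={statistics.mean(values):.1f} "
          f"sd={statistics.stdev(values):.2f} "
          f"median={statistics.median(values)}")
```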
The SUMI user handbook gives detailed recommendations and
methods for carrying out a user satisfaction survey; these
are applicable to many survey questionnaires, not just SUMI.
The following is a list of questionnaires available:
SUMI (see sumi.ucc.ie)
WAMMI (see www.wammi.com)
SUS (see http://www.cee.hw.ac.uk/~ph/sus.html)
QUIS (see http://www.cs.umd.edu/hcil/quis/)
USE (see http://www.mindspring.com/~alund/USE/IntroductionToUse.html)
IsoNorm (see http://www.sozialnetz-hessen.de/ergo-online/Software/Isonorm-Workshop.htm - in German only)
IsoMetrics (see http://people.freenet.de/gediga/bit99.htm)
As an alternative to questionnaires, interviews or focus groups
may be used, but the data resulting from such methods are generally
less precise than those from a standardised questionnaire.
If the software is still under testing, then it may be possible
to do something about features which give rise to negative
satisfaction ratings. Otherwise, the usability satisfaction
report must be an input to planning the next release.