Usability requirements: how to specify, test and report usability

 


 Case study (Greece)
  Robissa Travel/ExcelSoft: Travel Agent Software

 


 

 

 

 

 

 

 

Page contents

Overview
Case study
Benefits
Conclusions

Overview

In the context of the PRUE Project, SIEM, in cooperation with a software development company and a representative client (a travel agency), planned and carried out a usability test for a software application (Global Travel) that supports the management tasks of a travel agency. All the steps of the evaluation procedure, as well as the final results, were recorded and reported using the CIF format. The procedure that was followed included the following steps:

Step 1. Definition of the product to be tested

Step 2. Definition of the context of use

Step 3. Specification of the usability requirements

Step 4. Specification of the context of the evaluation

Step 5. Design of the evaluation

Step 6. Performance of the user tests and collection of data

Step 7. Report and analysis of the collected data

The benefits for both participating organizations (i.e., the supplier and the consumer) were considerable:

The supplier, through a standardised and valid test procedure, obtained a concrete and objective measure of the usability of the tested product, with which to demonstrate its quality but also to spot potential problems.

Additionally, the evaluation data (usability problems, new user requirements, ideas, etc.) that were collected will be useful as input for the design of a future version of the product.

The consumer was able to judge, through the evaluation process, the usability of the supplier's product, as well as the extent to which the specific product caters for his/her particular needs.

Another expected benefit was that the consumer's opinion and needs, highlighted during the whole process, will be taken into account in the design of the next version of the product.

Additionally, the process helped the consumer identify his/her real needs, state them to the developers in an organised and understandable way, and produce a document for common reference.

Finally, since, on the one hand, the supplier will be able to take the consumer's needs into account using a well-documented and objective procedure, and, on the other hand, the consumer will have an objective instrument (the CIF report) for judging the extent to which this was accomplished, there is a unique opportunity to create a close and mutually beneficial relationship between them. The result of such a relationship can be a loyal customer for the supplier and software products of higher quality, productivity and usability for the consumer.

In conclusion, the study proved that the CIF format can provide real added value to a software project, because it is a structured and well-defined process. Both the consumer and the supplier considered it an efficient, effective and worthwhile activity with positive results for both. The extra resources (time and effort) required are well justified, and both organizations are willing to use the CIF format again in the future. It should be noted, of course, that for the whole process to be resource-effective as well as valid and productive, an organization with high expertise in usability engineering is required, since both the wording and the process of the CIF format presuppose a theoretical background as well as considerable concrete prior experience in the field.

Case study

In the context of the PRUE Project, SIEM, in cooperation with a software development company and a representative client (a travel agency), planned and carried out a usability test for a software application (Global Travel) that supports the back office management tasks of a travel agency. All the steps of the evaluation procedure, as well as the final results, were recorded and reported using the CIF format. The procedure that was followed is described below.

Step 1: Definition of the product to be tested

After interviews with potential suppliers (software development companies), the final supplier and product to be tested were selected. As mentioned above, the product was Global Travel, a computer application for carrying out management tasks of a travel agency. The version tested was 2.01.

Step 2: Definition of the context of use

The next step was to thoroughly study the product to be tested and, in cooperation with the supplier, define the context of use. This part required specifying which would be the intended user groups, the skills, mental or physical capabilities users should have and the user tasks and goals that could be achieved with the product.

The primary user group of Global Travel is employees of a travel agency. The application requires that the users have some familiarity with the use of personal computers and the Windows operating system and that they have a basic knowledge of a travel agency's tasks.

As already mentioned, the main tasks supported by the tested software application were the management tasks of a travel agency. More specifically, Global Travel is a commercial product, in the Greek language, which supports all different kinds of reservations, such as hotels, tickets, organized tours and transportation. Additionally, it includes several facilities, such as client and supplier profiles, as well as ledger, statistics and other accounting services.

The environment in which the users are expected to use the product is a "typical" office environment, e.g., a travel agency. The product's minimum requirements include a personal computer with a Pentium processor, a keyboard and a mouse, running an MS Windows operating system.

Step 3: Specification of the usability requirements

This part of our study included the definition of relevant usability requirements. These were the following:

Effectiveness

Whether users can complete their tasks correctly and completely.

Efficiency

Whether tasks are completed in an acceptable length of time.

Satisfaction

The user's subjective opinion when using the product.

Step 4: Specification of the context of the evaluation

Having already specified the context of use, it was not very difficult to specify the context of evaluation. The criteria for selecting participants were the following:

Familiarity with the use of personal computers and the MS Windows operating system.

Basic knowledge of a travel agency's tasks. In terms of frequency of use, this could be described as a minimum of one month's experience in performing such tasks.

Knowledge of the Greek language.

The travel agency that supplied the representative users was "ROBISSA Travel". This was because many employees of this agency had only recently started using the Global Travel product. Thus, they had the "ideal" exposure time to the product.

In this step, the task scenarios that participants would perform were also defined. The most important functions of the software application that should be tested were identified through analysis of the product and interviews with the supplier and the potential users. The task scenarios that were selected include:

  • Reserving an airplane ticket for an existing client.
  • Creating a new client profile and making a hotel reservation for this client.
  • Entering payment information, concerning the previously created client.

To ensure that the test conditions would be identical to the "real" context of use, the environment selected for carrying out the test was the travel agency's central department.

Step 5: Design of the evaluation

After interviewing several potential participants, eight of them were finally selected: seven employees and a manager. A room at the central department of the travel agency was selected for the evaluation. Furthermore, the computer selected was a Pentium (133MHz, 32 MB RAM), with a standard mouse and keyboard and a 15'' color monitor at 800x600 resolution.

The evaluation was designed as follows:

First, the participants would have to use the software to perform the task scenarios. Their actions would be recorded using the MS Office '97 Camcorder application. As an additional means of recording the test and collecting relevant data, a digital camera would be used. A number of metrics were selected for analyzing the data collected from this part of the evaluation:

Completion Rate

The percentage of participants who completely and correctly achieve each task goal.

Mean Goal Achievement

The extent to which each task is completely and correctly achieved.

Number of errors

The number of task components that required more than one attempt to complete.

Task time

The mean task time required to complete each task.

Completion Rate Efficiency

The quotient of completion rate and mean task time. It specifies the percentage of users who were successful for every unit of time.

Goal Achievement Efficiency

The quotient of mean goal achievement and task time. It specifies the percentage of each task completely and correctly achieved for every unit of time.

For the Mean Goal Achievement metric the best score was defined to be 100%. For each subtask that was not completely or correctly performed, 20% would be deducted if the subtask was crucial, and 10% otherwise. The maximum time for each task had already been defined when the task scenarios were designed.
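The metrics and scoring rule above can be sketched in a few lines of Python. Everything here is illustrative: the participant data is invented, the function names are ours, and the report does not describe the study's actual tooling.

```python
# Illustrative sketch of the performance metrics defined above.
# Only the formulas follow the text; the data below is invented.

def goal_achievement(crucial_failed: int, minor_failed: int) -> float:
    """Start at 100% and deduct 20% per crucial subtask not completely or
    correctly performed, and 10% per non-crucial one."""
    return max(0.0, 100.0 - 20.0 * crucial_failed - 10.0 * minor_failed)

def task_metrics(results):
    """results: one (completed, crucial_failed, minor_failed, minutes)
    tuple per participant, for a single task scenario."""
    n = len(results)
    completion_rate = 100.0 * sum(1 for r in results if r[0]) / n
    mean_goal = sum(goal_achievement(r[1], r[2]) for r in results) / n
    mean_time = sum(r[3] for r in results) / n
    # Here an "error" is taken to be a subtask needing more than one
    # attempt -- an assumption about how the study counted errors.
    mean_errors = sum(r[1] + r[2] for r in results) / n
    return {
        "completion_rate": completion_rate,            # % of participants
        "mean_goal_achievement": mean_goal,            # %
        "mean_task_time": mean_time,                   # minutes
        "mean_errors": mean_errors,
        # Efficiency metrics: success per unit of time (%/min)
        "completion_rate_efficiency": completion_rate / mean_time,
        "goal_achievement_efficiency": mean_goal / mean_time,
    }

# Two hypothetical participants: both completed the task, the second
# missed one non-crucial subtask and was slower.
metrics = task_metrics([(True, 0, 0, 4.0), (True, 0, 1, 5.0)])
```

Applied per task and averaged across the three scenarios, this yields aggregate figures of the kind reported in Step 7.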

In addition to the above metrics, the IBM Usability Satisfaction Questionnaires (translated into Greek) would be used to measure the participants' satisfaction with the system. More specifically, after the completion of each task scenario, the user would have to fill in an "After-Scenario Questionnaire (ASQ)", which included questions about the user's opinion concerning the ease of task completion, the time to complete the task and the adequacy of support. After completing all the scenarios, the user would have to fill in the "Computer System Usability Questionnaire (CSUQ)" to assess the user's opinion concerning characteristics of the overall system, such as ease of use, simplicity, effectiveness, information and user interface quality.
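As a minimal sketch, ASQ scoring can be expressed as averaging its three 7-point item ratings, where 1 is the best rating. The averaging convention and the helper below are assumptions for illustration; the exact wording and scoring of the Greek translation are not reproduced here.

```python
# Hypothetical scoring helper for the After-Scenario Questionnaire (ASQ):
# three 7-point items (ease of task completion, time to complete, and
# adequacy of support information), where 1 is the best rating.
# Averaging the items is an assumed convention, not taken from the report.

def asq_score(ease: int, time_taken: int, support: int) -> float:
    ratings = (ease, time_taken, support)
    if any(not 1 <= r <= 7 for r in ratings):
        raise ValueError("ASQ ratings must be on the 1-7 scale")
    return sum(ratings) / len(ratings)

# One participant's invented answers after a task scenario
score = asq_score(3, 4, 4)
```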

Step 6: Performance of the user tests and collection of data

First, the test procedure and the overall objectives of the evaluation were introduced and explained to the users. Furthermore, the users were informed that recordings would take place and were reassured that the test results would be kept confidential.

After the initial introduction, the task list was given to the user and the evaluation procedure started. After performing each task scenario, the user filled in the corresponding ASQ questionnaire and then moved on to the next task scenario. After completing all the task scenarios, the user filled in the CSUQ questionnaire. Finally, the participant was debriefed and offered an educational software product of SIEM as a reward for his/her participation.

During the test an observer was always present, so as to register all information needed for the measurements and help the participant feel more comfortable. The observer's role was not to help the participants, since they should only have access to the type of assistance that would be available in real use conditions.

Step 7: Report and analysis of the collected data

The data collected during the test were analyzed and the results were reported in appropriate format (e.g., tables and graphs). The final test results were the following:

  • the mean completion rate for all tasks was 100%;
  • the mean goal achievement for all participants was 94.5%;
  • the mean number of errors was 0.46;
  • the mean task time was 4.6 minutes;
  • the completion rate efficiency was 21.8%/min;
  • the goal achievement efficiency was 20.6%/min.

The overall score of the "Computer System Usability Questionnaire (CSUQ)" was 4.35, while the average score of the "After-Scenario Questionnaire (ASQ)" was 3.5, both on a scale from 1 to 7 where 1 is the best rating.
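As a rough cross-check, the two efficiency figures can be recomputed from the pooled means above. The quotients come out slightly below the reported 21.8 and 20.6 %/min, which presumably reflects averaging per-task efficiencies rather than dividing the overall means (an assumption on our part).

```python
# Cross-check of the efficiency quotients using the reported aggregates.
mean_completion_rate = 100.0    # %
mean_goal_achievement = 94.5    # %
mean_task_time = 4.6            # minutes

completion_rate_efficiency = mean_completion_rate / mean_task_time
goal_achievement_efficiency = mean_goal_achievement / mean_task_time
# Rounded to one decimal, these come out at 21.7 and 20.5 %/min,
# close to (but not exactly) the reported per-task averages.
```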

Benefits for the participants

The supplier, through a standardised and valid test procedure, obtained a concrete and objective measure of the usability of the tested product, with which to demonstrate its quality but also to spot potential problems. Additionally, the evaluation data (usability problems, new user requirements, ideas, etc.) that were collected will be useful as input for the design of a future version of the product.

The consumer was able to judge, through the evaluation process, the usability of the supplier's product, as well as the extent to which the specific product caters for his/her particular needs. Another expected benefit was that the consumer's opinion and needs, highlighted during the whole process, will be taken into account in the design of the next version of the product. Additionally, the process helped the consumer identify his/her real needs, state them to the developers in an organized and understandable way, and produce a document for common reference.

Finally, since, on the one hand, the supplier will be able to take the consumer's needs into account using a well-documented and objective procedure, and, on the other hand, the consumer will have an objective instrument (the CIF report) for judging the extent to which this was accomplished, there is a unique opportunity to create a close and mutually beneficial relationship between them. The result of such a relationship can be a loyal customer for the supplier and software products of higher quality, productivity and usability for the consumer.

Conclusions

This study aimed at exploring the usefulness and value of the CIF report format. Real practice showed that it can provide real added value, because it is a structured and well-defined process. Both the consumer and the supplier considered it an efficient, effective and worthwhile activity with positive results for both. The extra resources (time and effort) required are well justified, and both organizations are willing to use the CIF format again in the future. It should be noted, of course, that for the whole process to be resource-effective as well as valid and productive, an organization with high expertise in usability engineering is required, since both the wording and the process of the CIF format presuppose a theoretical background as well as considerable concrete prior experience in the field.

Last updated 12-Mar-02

 

Copyright © 2002 Serco Ltd. Reproduction permitted provided the source is acknowledged.