User-Centred Product Creation in Interactive Electronic Publishing

User Testing

Usability testing is a highly efficient method for measuring the quality of use and acceptance of a product, and for detecting usability issues.

Observing typical present or future users working through a set of tasks identifies weaknesses as well as positive aspects of any device or software application. Valuable insights are collected for a focused and cost effective optimisation of the product under scrutiny.

Several forms of user testing are possible. The most structured and standardized procedure is the Usability-lab, which is described in the following paragraphs. Alternatively, tests can be conducted in the field or as remote usability testing; these methods are outlined later in this section.

Set-up of a Usability-Lab

In a Usability-lab, subjects are observed while performing predetermined tasks on the interactive system to be tested. The test session is recorded on videotape, in order to facilitate detailed step-by-step analysis. The important components of the Usability-lab are:

  • Test chamber (soundproof) in which the subjects perform the tasks
  • Video cameras which record the subjects during the test session
  • Control room from where the observers keep an eye on the performance and reactions of the subject

Figure: Typical set-up of a high-end Usability-lab

The test chamber is equipped with two video cameras and a microphone. A one-way mirror window separates the control room from the test chamber. Thus the subjects can be observed without getting distracted by the observers.


Generally, a Usability-test is divided into three phases: planning, testing, and reporting.

Planning

  • Recruitment of subjects who fit the typical user profile(s), for the pre-test and the main test
  • Definition of realistic scenario (list of tasks) and preparation of the pre- and post-test questionnaires
  • Installation of the soft- or hardware to be examined
  • Pre-test


Testing

  • Reception of the subject and introduction to the procedures of the Usability-test. It is essential to convey to the subjects that it is the product being tested, not them!
  • Introduction to the product and the scenario
  • Run the test (incl. pre-test questionnaire, tasks according to the scenario, post-test questionnaire)
  • Interview with the subject immediately after the test (supervisor and subject), followed by a post-test discussion in the control room (all participants), optionally revisiting single steps with the subject while reviewing the video recording
  • Discussion of the preliminary results (experts only)


Reporting

  • Identification of inherent problems and inconsistencies based on the post-test questionnaires, interviews, and expert discussions
  • Pooling of the raw data gathered during the test, compilation of the preliminary results
  • Analysis of the questionnaires and observations; preparation of suggestions for optimisation and improvement
  • Conception, drafting and editing of the report
  • Discussion of the results with the client / within the expert group
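As an illustration of the questionnaire-analysis step, the Python sketch below scores a standard System Usability Scale (SUS) post-test questionnaire and pools the results across subjects. The SUS scoring rule is the published one; the subject ratings are invented for the example.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribution = rating - 1),
    even-numbered items negatively worded (contribution = 5 - rating);
    the summed contributions are scaled to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item ratings")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Pool per-subject scores into a preliminary result (example data)
subjects = {
    "S1": [4, 2, 5, 1, 4, 2, 5, 1, 4, 2],
    "S2": [3, 3, 4, 2, 3, 2, 4, 2, 3, 3],
}
scores = {s: sus_score(r) for s, r in subjects.items()}
mean = sum(scores.values()) / len(scores)
```

With only a handful of subjects per user group, the mean score is merely indicative; the individual comments and observations remain the primary source for improvement suggestions.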


Supervisors

At least two persons are typically needed during testing. One focuses on the technical side (positioning of cameras, recording, sound checks, etc.); the other is the main observer, takes notes, and interacts with the subject via microphone if need be (assistance, guidance).


Subjects

It is essential to select subjects who fit the user profile defined in the project. A minimum of five users per clearly distinguishable user group is highly recommended.
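The five-users recommendation can be motivated with a simple model. Assuming each usability problem is found by any single user with an average probability p (Nielsen and Landauer report p ≈ 0.31 across projects), the expected share of problems uncovered by n users is 1 − (1 − p)^n:

```python
def problems_found(n_users, p=0.31):
    """Expected share of usability problems uncovered by n test users,
    under the simple model that each user independently hits a given
    problem with average probability p."""
    return 1 - (1 - p) ** n_users

# With p = 0.31, five users already uncover roughly 84% of the problems.
coverage = {n: round(problems_found(n), 2) for n in (1, 3, 5, 10)}
```

The model is a rough average: products with very diverse user groups need the recommended five users per group, since p differs between groups.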

Additional observers

It is strongly suggested that representatives of all key areas of a project participate at least partly in the observation. Even though some observation training is needed to obtain meaningful results, it has proven helpful when many team members know the users' reactions to the system from their own experience.

Time exposure (lab)

In order to ensure smooth procedures during testing, sufficient time must be allocated per subject. Users should normally not be observed for longer than one hour, in order to keep fatigue to a minimum (exception: if fatigue itself is a key area of concern for the product). Depending on the complexity of the test, two to four sessions can be scheduled per day in the laboratory.

Alternative testing: Field

User tests can be run at the place of usage ("field"), if a system cannot easily be transferred into a usability lab. This may be because special environmental conditions of the place of usage cannot be reproduced in the lab, or for reasons of security, size, or implementation constraints. The methods used are much the same, with some differences which can influence the results:

  • Due to the lack of a separating wall, the division between facilitator and subject is missing; it is therefore more likely that subjects are influenced by the observers. Specific care has to be taken to avoid altering the users' behaviour.
  • Environmental influences cannot be ruled out (however, the reason for conducting a field test is to include this effect). This also means that the situation is actually more natural than in the lab.
  • Since sometimes the users, rather than the facilitator, define the tasks, more subjects are needed to obtain a complete set of results.
  • Great care has to be taken to avoid "leading questions" by the facilitator. The physical presence of the facilitator however allows prompting for "think-aloud" comments.

When conducting field tests, it is important to obtain permission to set up the equipment, and to obtain written consent to record video of the users.

Alternative testing: Remote

Testing remotely (e.g. over the internet) is a newer way to collect user feedback. It is best suited for projects that involve an international user population, or when feedback from many sources (e.g. a widespread project team) is required.

For remote testing, the application has to be made capable of running on the internet early in development, and placed on a server for access by the users. Then either the internet address is sent to selected users, who are specifically asked for feedback (preferred), or the application is opened to the public, with feedback gathered by monitoring activity and from users who answer the attached questionnaire of their own accord.

This method differs in many points from standard lab testing:

  • There is no control over the users: how, when, and with what support the tasks are solved is unclear, as is whether the target user is actually the one at the system.
  • Equally, the environment is not determined, so the test could be run at the intended site or in a completely different location.
  • There is no opportunity to observe what users are doing and how they are reacting.
  • There may be no opportunity to interview the users about their experience.
  • Significantly more users must be tested than in the lab, to compensate for the lack of control and defined task solution.
  • Overall, validity and reliability may be questionable because of the lack of control.
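Monitoring activity in a remote test typically means analysing server-side event logs. The sketch below assumes a hypothetical log format in which the remote application records timestamped task_start/task_end events per user; from such a log, task completion times can be derived even without direct observation.

```python
from datetime import datetime

# Hypothetical event log as a remote-test server might record it:
# (ISO timestamp, user id, event). Task boundaries are marked by
# "task_start:<task>" and "task_end:<task>" events.
log = [
    ("2003-05-01T10:00:00", "u1", "task_start:search"),
    ("2003-05-01T10:03:20", "u1", "task_end:search"),
    ("2003-05-01T10:04:00", "u1", "task_start:order"),
    ("2003-05-01T10:09:30", "u1", "task_end:order"),
]

def task_durations(events):
    """Compute per-(user, task) durations in seconds from start/end events,
    ignoring end events that have no matching start."""
    starts, durations = {}, {}
    for ts, user, event in events:
        kind, _, task = event.partition(":")
        t = datetime.fromisoformat(ts)
        if kind == "task_start":
            starts[(user, task)] = t
        elif kind == "task_end" and (user, task) in starts:
            durations[(user, task)] = (t - starts.pop((user, task))).total_seconds()
    return durations
```

Such derived measures (completion times, abandoned tasks) partly compensate for the missing observation, but say nothing about why a user struggled — hence the need for the attached questionnaire and the larger subject pool noted above.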