
Usability Inspection methods

Inspection methods are evaluation methods in which evaluators examine a software product on their own, without involving end users.

Usability inspection methods

Usability inspection methods focus on the usability-related aspects of the user interface of interactive products and services.

Usability inspection methods (Heuristic Evaluation being the most widely adopted) are very efficient, with a high benefit-cost ratio. However, most of them share a number of drawbacks:

  • They focus on "surface-oriented" features of the graphical interface. Only a few of them address the usability of the application "structure", i.e., the organisation of both information elements and functionality.
  • They depend on the individual know-how, skills and judgement of inspectors, making inspection a subjective process - a kind of "art". Domain- and application-specific experience may improve the evaluators' performance. Unfortunately, usability specialists often lack domain expertise, and domain specialists are rarely experienced in usability engineering.
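
As a rough illustration of what the output of a heuristic evaluation session might look like, the sketch below (in Python, with illustrative heuristic names and an assumed severity scale, none of which are prescribed by the methods discussed here) records a single inspector finding:

from dataclasses import dataclass

# Illustrative heuristics (assumed for this sketch; the page does not list any).
HEURISTICS = [
    "Visibility of system status",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
]

@dataclass
class Finding:
    """One usability problem reported by an inspector."""
    heuristic: str    # which heuristic appears to be violated
    location: str     # where in the product the problem was observed
    description: str  # what the inspector saw
    severity: int     # assumed scale: 0 (cosmetic) to 4 (catastrophic)

finding = Finding(
    heuristic=HEURISTICS[0],
    location="checkout page",
    description="No feedback is shown while the order is being submitted.",
    severity=3,
)

Even in this simple form, the record shows that deciding what counts as a violation, and how severe it is, remains a judgement of the individual inspector.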

Inspection method for hypermedia applications

To overcome these problems for hypermedia applications, the SUE inspection method introduces the use of evaluation patterns, called Abstract Tasks, for guiding the inspector. Abstract Tasks precisely describe which hypermedia "objects" (i.e., functionality, information structures, or interface elements) to focus upon and which actions to perform on them in order to analyse their usability.

SUE proposes a set of very detailed usability criteria and associates them with the various tasks. These criteria are obtained by refining general usability principles with respect to the specific context of hypermedia applications. Abstract Tasks provide precise guidance about which actions to undertake on which application constituents during evaluation, while usability attributes provide detailed reference criteria against which to judge the inspection findings. As a consequence, inexperienced evaluators, lacking expertise in usability and/or hypermedia, are able to produce good results.

SUE adopts a design model (HDM - Hypermedia Design Model) for describing the application and steering the evaluation process. The inspection process starts with the evaluator describing the application through the primitives and terminology of the design model. This terminology is the same as that used both for formulating the usability criteria on which the evaluation is based and for defining the activities in the Abstract Tasks. As a natural consequence, evaluators also use these terms for naming objects and describing critical situations when reporting problems, so attaining more precision and standardization in documenting the evaluation outcome.
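
To make the idea of an Abstract Task more concrete, the sketch below shows one possible way to represent such an evaluation pattern as a simple data structure; the field names and the example task are hypothetical illustrations, not the actual SUE or HDM notation.

from dataclasses import dataclass

@dataclass
class AbstractTask:
    """An evaluation pattern guiding the inspector (illustrative structure only)."""
    title: str
    focus_objects: list       # hypermedia objects to examine (functionality, information structures, interface elements)
    activities: list          # actions the inspector performs on those objects
    usability_criteria: list  # criteria against which the findings are judged

# Hypothetical Abstract Task, phrased in design-model-like terminology.
backtracking_task = AbstractTask(
    title="Backtracking from a deeply nested node",
    focus_objects=["node", "navigation link"],
    activities=[
        "Reach a node through a long navigation path",
        "Try to return to the starting point",
    ],
    usability_criteria=["predictability of links", "reachability of landmark nodes"],
)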

Cognitive Dimensions framework

The Cognitive Dimensions framework has been successfully applied to the evaluation of Visual Programming environments. Thirteen cognitive dimensions describe concepts that offer a means to focus on the information structure of an application, and not simply on screen design and aesthetic aspects. To analyse an application, it is essential to use a description language that makes the existence of complex structural problems obvious; the choice of language depends on the phenomena that must be made visible. The cognitive dimensions are an attempt to supply such a language.
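
The page does not enumerate the thirteen dimensions; the checklist below uses the names commonly cited in the Cognitive Dimensions literature, included here only as an illustrative structure an inspector might iterate over.

# Cognitive dimensions as commonly listed in the literature (an assumption here,
# since this page does not enumerate them). Each dimension is a question to ask
# about the information structure of the application, not about screen aesthetics.
COGNITIVE_DIMENSIONS = [
    "Abstraction gradient",
    "Closeness of mapping",
    "Consistency",
    "Diffuseness",
    "Error-proneness",
    "Hard mental operations",
    "Hidden dependencies",
    "Premature commitment",
    "Progressive evaluation",
    "Role-expressiveness",
    "Secondary notation",
    "Viscosity",
    "Visibility",
]

def empty_review(application_name: str) -> dict:
    """Prepare one note slot per dimension for the named application."""
    return {dimension: f"TODO: assess '{application_name}'" for dimension in COGNITIVE_DIMENSIONS}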

User Action Framework

The User Action Framework (UAF) has been proposed as a unifying and organising framework supporting usability inspection, design guidelines, and the classification and reporting of usability problems. It provides a knowledge base in which different usability problems are organised according to the "Interaction Cycle model". Following this model, problems are organised according to how user interaction is affected by the application design at the various points where users must accomplish cognitive or physical actions.

The knowledge base is a notable contribution of the UAF methodology, which tries to answer the need for more focused usability inspection methods and for more readable and comparable inspection reports. The classification of design problems and usability concepts allows evaluators to better understand the design problems they encounter during inspection and helps them identify precisely which physical or cognitive aspects cause the problems, enabling them to propose well-focused redesign solutions. Classifying problems and organising them in a knowledge base is also a way to keep track of the large number of problems found in different usability studies, so capitalising on past experience.
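
As a rough sketch of how a knowledge base organised around the Interaction Cycle might classify problems, the example below tags each reported problem with the stage of interaction it affects. The stage names follow Norman's stages-of-action model, on which the Interaction Cycle is based, and the problem records are hypothetical.

from dataclasses import dataclass

# Stages of the Interaction Cycle, named after Norman's stages-of-action model
# on which the UAF builds; the exact labels here are an assumption.
STAGES = ("planning", "translation", "physical action", "assessment")

@dataclass
class UsabilityProblem:
    description: str
    stage: str  # where in the Interaction Cycle the problem occurs

    def __post_init__(self):
        if self.stage not in STAGES:
            raise ValueError(f"unknown stage: {self.stage}")

# Hypothetical problem records, classified by stage.
knowledge_base = [
    UsabilityProblem("User cannot work out how to begin a search", "planning"),
    UsabilityProblem("Menu label does not match the user's intention", "translation"),
    UsabilityProblem("Target button is too small to hit reliably", "physical action"),
    UsabilityProblem("No feedback confirms that the action succeeded", "assessment"),
]

# Organise the knowledge base around the cycle: problems grouped per stage.
by_stage = {stage: [p for p in knowledge_base if p.stage == stage] for stage in STAGES}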

Content evaluation

For information-intensive interactive products, the approach to inspection can also adopt methods of content analysis and communicability evaluation. The objective of content analysis is twofold:

  • inspecting the quality of content makes it possible to detect quality breakdowns in the communication;
  • content evaluation methods suggest guidelines for designing usable content.

From a communication perspective, methods for content evaluation rest on the belief that the "happiness" of a communication act must be assessed from the receiver's point of view. Therefore, especially when dealing with content (i.e. coping with the notions of meaning, sense and relevance), the inspector has to treat the addressee as both the starting point and the target of the whole communication effort.

Content should not be understood primarily in its technical sense (e.g. image size, page length, colour of icons); rather, it should be addressed as a designed set of ideas and messages conveyed through structured interactive possibilities.

Since content is the core value of the application, inspecting it implies analysing the important conditions under which the communication between the application and its intended users (or between designers and users) will succeed or fail.

In the field of communication studies, at the intersection of semiotics, linguistics and new media design, frameworks for evaluating the quality of content have been defined. The conceptual tools for inspecting the content of an interactive application (especially a hypermedia application) are general criteria that guide the inspector through the analysis of the actual content of the application.