Evaluating User Satisfaction
Objectives
Evaluation of user satisfaction aims to discover what people think and feel about using a product, in order to assess the perceived quality of use.
It is based on asking people to share their experiences and opinions, usually in a structured way by responding to specific spoken or written questions.
It may also involve drawing out insights by facilitating commentary or discussion on the experience of use.
There are well-established techniques for eliciting user views, identifying issues, and measuring user satisfaction.
How and when to ask the users
Selecting a sample |
The single most important thing about evaluating user satisfaction is that if you ask the wrong people, you get invalid answers. It can be completely misleading to discover that the designer's friends, family and work colleagues are highly satisfied with a product, or that the 0.1% of users who filled in a feedback form disliked it.
You must find a representative sample of users, with sufficient sample size and enough diversity to cover significant minorities; a sketch of estimating the required size follows.
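As a rough illustration only (not part of the original guidance): a common way to estimate how many respondents are needed to measure a satisfaction proportion is Cochran's formula, n = z^2 * p(1-p) / e^2. The minimal sketch below assumes a 95% confidence level and the worst-case proportion p = 0.5; the function name is hypothetical.

```python
import math

def required_sample_size(margin_of_error: float,
                         confidence_z: float = 1.96,
                         proportion: float = 0.5) -> int:
    """Cochran's formula: n = z^2 * p * (1 - p) / e^2.

    margin_of_error: acceptable error, e.g. 0.05 for +/- 5%.
    confidence_z:    z-value for the confidence level (1.96 ~ 95%).
    proportion:      expected proportion; 0.5 is the worst case.
    """
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# To estimate the percentage of satisfied users to within +/- 5%
# at 95% confidence, roughly 385 respondents are needed.
print(required_sample_size(0.05))  # 385
```

Note that this only sizes the sample; it does nothing to guarantee representativeness, which remains the harder problem.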
Focus groups
Focus group techniques are powerful for developing concepts and assessing first impressions early in product development. Group discussion is facilitated around predefined topics.
Focus groups can be used to discover 'gut reactions' to concepts, elicit expected user requirements, uncover prejudices, and draw out insights into what people think of an existing product.
Their disadvantage in evaluating new designs is that they typically involve speculation about the use of future designs, rather than the real experience of trying out prototypes.
User interviews
User interviews can explore people's opinions of products, their preferences, experiences, areas of difficulty, patterns of use, reasons for not using, and suggestions for improvement. Hence interviewing is a key technique at all stages of development.
Interview data can be quantitative (counts of responses) or qualitative (insights into issues and motivations).
Interviews are highly effective in evaluating usability when used to debrief users after user testing, to explore the experiences that lay behind what was observed.
Interview protocols
It is advisable to work with other stakeholders when designing questions. Create a brief, well-structured list of questions. Use closed questions for quantifiable data, and open questions (to be asked flexibly) to elicit deeper views where required; a structural sketch is given below.
It is important to ask the right things at the right moment, and to avoid leading questions. It is essential to pilot the interview questions and revise them as required.
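Purely as an illustration of structure (the questions are invented, and nothing here prescribes a format): a protocol can be held as simple data, mixing closed questions that yield quantifiable answers with open prompts to be asked flexibly.

```python
# A hypothetical debrief-interview protocol mixing closed and open questions.
# 'closed' answers are quantifiable; 'open' prompts are asked flexibly.
protocol = [
    {"type": "closed", "question": "Did you complete the task?",
     "options": ["yes", "partially", "no"]},
    {"type": "closed", "question": "How easy was the task?",
     "options": ["very easy", "easy", "neutral", "hard", "very hard"]},
    {"type": "open", "question": "What, if anything, got in your way?"},
    {"type": "open", "question": "What would you change first, and why?"},
]

for item in protocol:
    print(f"[{item['type']}] {item['question']}")
    if item["type"] == "closed":
        print("   options:", " / ".join(item["options"]))
```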
Location and technique
Interviews should take place somewhere convenient for the interviewee, preferably where they have used the product or prototype, to remind them what they have experienced and allow them to demonstrate.
Good engagement and listening skills are required. People often take a while to say what they really think, so the interviewer must not try to fill silences, but should wait for full responses.
Questionnaires
Questionnaires can ask much the same things as interviews, but have to get good, valid answers without the benefit of an interviewer's skills. Hence the question order, wording and administration instructions are critically important. Many questionnaires fail to get good responses simply because they look too long and seem confusing. Keep them short and well structured, and give simple, clear instructions.
To make it possible to analyse responses from multiple users, questionnaires should include enough simple closed questions, where users can
- state if they agree / are undecided / disagree
- state a degree of agreement or preference
- choose one or more items from a list.
Subjective, free-text answers can give good insights, but are more difficult to analyse and tend to draw fewer responses. Again, it is essential to pilot the questionnaire and revise it as necessary. A minimal analysis sketch for closed responses follows.
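To illustrate why simple closed questions make multi-user analysis straightforward: the sketch below tallies five-point agreement ratings for a single question and reports the distribution and mean score. The question wording and the response data are invented for the example.

```python
from collections import Counter

# Five-point agreement scale: 1 = strongly disagree ... 5 = strongly agree.
LABELS = {1: "strongly disagree", 2: "disagree", 3: "undecided",
          4: "agree", 5: "strongly agree"}

# Hypothetical responses from ten users to one closed question.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

counts = Counter(responses)
print("Q1. The product was easy to use.")
for rating in sorted(LABELS):
    n = counts.get(rating, 0)
    print(f"  {LABELS[rating]:>17}: {n:2d} ({100 * n / len(responses):.0f}%)")
print(f"  mean rating: {sum(responses) / len(responses):.1f} / 5")
```

The same tally generalises across many questions and many users precisely because every answer is drawn from a fixed set of options.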
Feedback forms
Questionnaire principles apply to the design of feedback forms, and the forms must visibly be VERY short and simple. Product feedback forms typically draw very few responses, and these come from a self-selecting minority, so they require careful interpretation in light of these limitations.
Psychometric questionnaires
Psychometric questionnaires measure user satisfaction with demonstrable validity and reliability. They compare users' responses to a tried-and-tested set of questions against a database of responses to the same questions from many other users of similar products. They require rigorously methodical use and analysis; the sketch below illustrates the underlying idea of scoring against a normative database.
Commercial examples include the Software Usability Measurement Inventory (SUMI) and the Website Analysis and Measurement Inventory (WAMMI).
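The following illustrates only the general principle of norm-referenced scoring, not the actual SUMI or WAMMI procedures (which are proprietary); the normative data and the T-style scaling (norm mean mapped to 50, SD to 10) are assumptions made for the sketch.

```python
import statistics

# Hypothetical normative database: overall satisfaction scores from
# many prior users of comparable products (values invented).
norm_scores = [52, 61, 48, 55, 67, 43, 58, 50, 62, 47, 54, 59]

norm_mean = statistics.mean(norm_scores)
norm_sd = statistics.stdev(norm_scores)

def standardised_score(raw: float) -> float:
    """Express a raw score as a T-style score (norm mean -> 50, SD -> 10)."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z

# A product scoring 64 raw lies well above the norm group mean,
# so its standardised score comes out above 50.
print(f"{standardised_score(64):.1f}")
```

The value of this approach is that a score is interpretable relative to comparable products, rather than in isolation.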