In the run-up to Margaret Thatcher's election victory in 1979, a poll was taken to estimate who would vote for her. Only 1 in 100 said yes. However, as the final results revealed, 1 in 3 actually voted for her. The poll was inaccurate, and unsuited to the task.
Surveys are a common tool for evaluating a participant's opinion of the user experience and usability of a system. I've written before about how to make good questionnaires, and have often seen them used when analysing a large group of participants. However, as a method of understanding users they are imperfect, and not just because individual surveys are poorly designed: the problem is fundamental to the method. Let's look at why this is the case, and why people are tempted to use surveys despite it.
Where are surveys used?
When I've been involved with user tests for games, I've often seen surveys used as a way of recording the player's experience. For example, after completing a level or game mode, players would be asked to rate their experience on a Likert scale (1-10) across categories such as how difficult they found the level, how fun it was, and how it compared to other levels. This is often complemented by free-text notes, where the participant can write in things they particularly liked or disliked.
Outside of gaming, surveys can often be found on the internet, whether as website satisfaction surveys or on professional survey sites like Survey Monkey.
Why are surveys used?
It's easy to understand why surveys are often used when testing user experience. Most obviously, they are easy to quantify: the scores are given as numeric values, which can be averaged into an overall 'score'. This can then be put on a graph to impress people too busy and important to be involved with the testing itself. Compared to moderated testing, analysis is simple, and 'results' can be obtained with little effort, particularly if an online survey tool is used.
Similarly, surveys make it easy to gather a large number of opinions quickly, in a largely unmoderated setting: 10 (or 10,000) people can test a game at the same time, with only light moderation, and fill out a survey afterwards to record their views. Surveys also require little specialist equipment, just a printer and a pen (or they can be done online). This makes them cheaper than many moderated settings, which require a lab decked out with recording equipment.
The problem with surveys
Surveys sound great, don't they? Cheap, easy, and they give some hard numbers. However, there are a number of problems with surveys, and one key issue that prevents them from being suitable for user experience analysis.
First of all, it's easy for survey data to be misrepresented (either unintentionally or to further a top-secret agenda!). Without hard evidence, such as watching (and recording) an individual playing the game, the analysis is reduced to which level 'scores better', regardless of the intricacies of the playtest. Minor issues become lost within the overarching 'score'.
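A quick sketch makes this concrete. The numbers below are entirely invented for illustration, but they show how two levels can earn the same headline 'score' while representing very different player experiences:

```python
# Hypothetical data: 1-10 "fun" ratings from ten playtesters per level.
# All figures are invented purely to illustrate the point.
from statistics import mean, stdev

level_a = [7, 7, 7, 7, 7, 7, 7, 7, 7, 7]        # uniformly "fine"
level_b = [10, 10, 10, 10, 10, 4, 4, 4, 4, 4]   # half loved it, half got stuck

# Both levels report the identical headline score...
print(mean(level_a), mean(level_b))    # 7 and 7

# ...but the spread tells a very different story, and even the spread
# says nothing about *where* in the level players struggled.
print(stdev(level_a), stdev(level_b))  # 0.0 vs roughly 3.16
```

The average alone would suggest the two levels played identically; only by looking past the aggregate (ideally at the sessions themselves) does the split in level B become visible.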
Much more importantly, the fundamental problem with attempting to understand user experience through a survey is that it logs opinions, not behaviour. People are (sometimes?) stupid, and don't know what they think. So a player who had a positive experience throughout a level but got stuck near the end will often be left thinking poorly of the entire level. Without an independent observer to monitor the session, their in-game opinions are lost or forgotten. Just as I cannot tell how bad my own singing is, a player is too close to the subject matter to gain a full understanding of it.
Essentially, surveys introduce a layer of abstraction from the game that is difficult for a player to see through. It is hard for them to recognise which parts of a game made it fun and which parts frustrated them; it often takes someone else to spot these patterns.
Pride and psychology can also be contributing factors: players who needed 10 attempts to complete a section will still say it was "easy" after finally completing it. Psychologically they will often believe it too, since they have felt the satisfaction of completing the task. Other times they will be too proud to admit the section was too difficult, and lie. Again, this rich data is lost in a survey.
What should be used instead?
To gain a truer understanding of the user (or player) experience when testing a system or game, surveys are therefore inadequate. Instead, a moderated, task-based analysis session, recorded for later review, will give a clearer picture of how the participant found the system and of their actual experience, unfiltered by their own perceptions. I've written about recording these sessions before, and will discuss them further in the future.
As we have seen, surveys are cheap and easy, and so should not be disregarded entirely. However, they should not be used exclusively, as they can miss key user experience findings and require users to know themselves, and their feelings, extremely well.