Some things I’ve learned about observing playtesting sessions

Being the observer is often seen as the less glamorous side of usability testing. While the moderator gets to meet and interact directly with the player, the observer is trapped behind the one-way mirror. However, it is actually the more important of the two roles: the observer is the one identifying and recording the usability issues that occur during tests.

To be a good observer, you need to know the game inside out and be able to identify issues yourself (the moderator may not be in contact to highlight them for you). You also need to do this while looking after the development team, who are likely to be watching the test next to you.

These are some techniques I’ve picked up from other researchers at PlayStation which I feel have improved my own skills as an observer of user tests. This post follows on from my previous post about moderating one-to-one user test sessions, and collects tips that may not seem obvious at first, but greatly improved my own note-taking once I started applying them.

Timestamping Notes

Assuming that you don’t have a fancy video system that automatically associates your notes with the video, it comes down to the observer to keep track of when issues occurred. Mirweis Sangin has been a great proponent of doing this on our team, and the benefits have been huge.

Time stamping notes is simple:

Start a timer as soon as recording starts at the beginning of the session. Then, when an issue occurs, add the time from the timer to your note. It doesn’t matter if your time isn’t exact, or you don’t think of it until 20 seconds later; just write down what you have.

It’s an easy thing to do, but it will save you a huge amount of time later during the analysis phase when you need to review occurrences of the usability issues. The time stamps make finding an issue again on the video far easier than scrubbing through hours of footage, so it becomes simple to find screenshots or clips for the report or a highlight reel.
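As a concrete illustration, the timer-plus-notes workflow above could be sketched as a tiny script. This is my own minimal sketch, not a tool from the post; all class and method names are assumptions.

```python
import time


class SessionNotes:
    """Minimal note-taker that stamps each note with elapsed session time.

    Illustrative only: the post just describes running a timer alongside
    any note-taking tool, so the names here are invented for the sketch.
    """

    def __init__(self):
        self.start = None
        self.notes = []

    def start_session(self):
        # Start the timer the moment recording starts
        self.start = time.monotonic()

    def add(self, text):
        # A rough timestamp is fine; it only needs to land you near
        # the right spot in the video during analysis
        elapsed = int(time.monotonic() - self.start)
        stamp = f"{elapsed // 60:02d}:{elapsed % 60:02d}"
        self.notes.append((stamp, text))
        return stamp


notes = SessionNotes()
notes.start_session()
notes.add("P1 missed the objective marker on the HUD")
```

Each note then carries an offset like `03:42` that maps straight onto the session recording.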

It can also help recap issues when the notes are unclear, or when you need more information to explain what happened (for example, if you missed what the outcome of the issue was). And on that subject…


Always write what happened at the end

Our team uses an adaptation of Userfocus’s method for prioritising usability issues (http://www.userfocus.co.uk/articles/prioritise.html). To prioritise reliably and consistently against this model, we have to capture the outcome of every issue.

If you had just written an issue like “user had difficulty finding the next objective”, it would be useless when trying to determine the impact a few days later. Did they find it? How long did it take? Did they backtrack? Did they need help? Where did they look for the next objective? Why?

With all issues, remember to capture the outcome: how long did it take to resolve? Did the moderator have to help them? What prompted them towards overcoming the issue? This sort of detail is essential for the team to make the correct fix, and recording it now will save you a lot of time, help you prioritise the issues later, and ensure that you don’t have to spend days re-watching the videos!
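One way to make sure the outcome always gets captured is to give each issue note a fixed set of outcome fields. This is a hedged sketch of my own; the field names are assumptions, not part of the Userfocus model or the original post.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class IssueNote:
    """One usability issue plus the outcome details needed for
    later prioritisation. Field names are illustrative assumptions."""

    description: str
    time_to_resolve_s: Optional[int] = None  # how long until the user recovered
    moderator_helped: bool = False           # did the moderator intervene?
    resolution: str = ""                     # what prompted them past the issue


issue = IssueNote(
    description="User had difficulty finding the next objective",
    time_to_resolve_s=90,
    moderator_helped=False,
    resolution="Noticed the waypoint after opening the map screen",
)
```

Structuring notes this way means a half-filled outcome is immediately visible during the session, rather than discovered days later during analysis.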


Ignore (some) opinions

Most qualitative studies do not have enough users to produce reliable opinion data. Instead, questions about what people thought of the game should be saved for a quantitative study.

When a user does express an opinion during a one-to-one study, it should be captured if it’s the result of a usability issue (e.g. “I’m annoyed because I can’t find where to go”). However, if it’s an opinion about something purely subjective (“I don’t like how it looks”), the observer can ignore it.

When opinions do arise, a good moderator should reply with a probing question like “why do you feel that?” to identify whether a usability issue is causing the opinion, which will help increase the quality of the notes taken.

Opinions can be a great route into usability issues, or be used to illustrate the effects of usability issues in the final report. However, at small sample sizes they are unreliable when reported directly, as they may create a misleading impression of the game’s state.


Aggregate while the test is running

We use mind-mapping software to capture our notes during the test. Not only does this allow issues to be arranged spatially and navigated quickly, it also allows quick and simple aggregation of issues.

Before the test begins, create logical areas in the mindmap into which the issues will fall. This can be level by level, or a list of the core mechanics; it is usually obvious given the type of game and the objectives of the test.

The first time an issue occurs during a session, navigate to the relevant section in the mindmap, write the issue, and tag it with the user’s number. When the issue occurs again, instead of writing it out a second time, just tag the original occurrence with the new user’s number and add any extra details as a sub-point.

By aggregating and clustering the notes during the test, the analysis time is greatly reduced. The report will be ready sooner and the results can be shared with the team earlier, which is crucial in the fast-moving environment of games development.


The observer’s role of identifying and capturing usability issues during the test is often more difficult than the moderator’s, as everything has to be spotted and recorded on the fly. These tips have helped me improve my own note-taking during user tests, and I hope they will help you too!

One Comment On “Some things I’ve learned about observing playtesting sessions”

  1. Great article, as usual. I especially like the advice around opinions, which doesn’t get heard enough. One comment/suggestion: your example issue, above, I would say is an observation. “user had difficulty finding the next objective” is the raw data, if you like, from the session. The usability issue is the underlying problem with the interface that caused the participant to have difficulty finding the objective. Sorry to be picky, but I’ve found subtly reinforcing this way of thinking helps our consultants provide more helpful outputs for clients, by focussing on what needs to be addressed in the design, rather than handing over a sheaf of observations that then need further interpretation by the client. Of course, it’s also useful to report your observations as evidence for the issue.
