11 Nov 2009

7 aspects of successful usability questionnaires

This week in HCI we’ve been thinking about questionnaires. They can be an important usability tool, although they also have many limitations. Questionnaires are primarily a quantitative data collection method (they return a large number of responses), so compared with a qualitative methodology they are useful for pinpointing where problems exist, but less helpful for understanding why. As such, it is best to combine both forms of research, perhaps by starting off with questionnaires to identify frequent problem areas and generalized opinions of a system, before moving into a qualitative method to understand why these areas are problems. Advantages of questionnaires include being cheaper and quicker to get results from than many other methods, but this is balanced by some drawbacks – the data you record is more subjectively influenced by the researcher’s and participants’ opinions than in other methods, such as direct observation.

Nonetheless the questionnaire is an important usability tool, and it is important that the responses you receive from one are high quality and useful. So, I’m going to share some of the areas that I, and other HCCS students, have identified as potential problems when dealing with questionnaires, in order to help you make better ones. And since this is the internet, we’ll be presenting them in the form of a list, as everyone on the internet loves lists!

everyone on the internet also loves pictures of cats

So, here are seven important aspects to consider when creating questionnaires.  

1. Answers can only be as good as your questions

When preparing a questionnaire, you need to think at length about which aspect of the subject you want to investigate, and go in knowing what you need to find out. Generalized questions, or being vague on the topic, won’t give useful data, so it’s important to make sure the questions are actually asking relevant things. For example, if you wanted to find out about… the most popular aisles in Sainsbury’s, asking questions about whether people prefer the supermarket to its rivals wouldn’t get you closer to this goal. (Also, we all know it’s the cereal aisle.) So: know what you want to find out from the questionnaire.

2. The questions need to cover the areas in depth.

When getting opinions, it helps to be specific. Don’t just ask ‘did you like this?’; follow it up either with a question asking for reasons why, or (if you’re after a data set that can be analyzed more uniformly) ask participants to rate, on a number of scales, why they did or didn’t like it (e.g. “to what extent did the look of the webpage affect your opinion of it?”). Not doing this will lead to closed answers (Did you like this? “No”), when it would be possible to get a much richer set of data from the participant. Whether you select an open ‘why’ question or a closed, scale-based question depends on whether you are after purely quantitative data, or want to include qualitative data as well.
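
One reason to prefer the closed, scale-based version when you want quantitative data: the answers aggregate uniformly. A minimal Python sketch, using made-up ratings for the hypothetical question above:

```python
# Hypothetical 1-5 ratings (1 = not at all, 5 = a great deal) for the closed
# question "to what extent did the look of the webpage affect your opinion?"
ratings = [4, 5, 3, 4, 2, 5, 4]

# A closed scale gives numbers you can summarise the same way for everyone.
mean_rating = sum(ratings) / len(ratings)
print(f"Mean rating: {mean_rating:.2f}")  # prints: Mean rating: 3.86
```

An open ‘why’ answer, by contrast, has to be read, coded and themed by hand before any summary like this is possible.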

3. Changing the questions mid-implementation taints your qualitative data

Halfway through a study, the results may start to show interesting trends that you’d want to find out more about. Take caution when altering the questionnaire to investigate these trends. Adding more questions should be fine (except for the tired participants!), but when editing a question that already exists (e.g. from ‘did you like the look and feel of the website?’ to ‘did you like the look and feel of the first page of the website?’), keep in mind that this will invalidate any quantitative result (e.g. ‘85% of people liked the look and feel of the first page of the website’) computed from the entire dataset for that question, as the participants have been answering different questions.
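
One way to keep the data honest if you do reword a question is to record which version each participant actually saw, and only aggregate within a version. A minimal sketch – the response log and field names here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical response log: each entry records which wording ("version")
# of the question the participant actually answered.
responses = [
    {"version": "v1", "liked": True},
    {"version": "v1", "liked": False},
    {"version": "v2", "liked": True},
    {"version": "v2", "liked": True},
]

# Tally per version; mixing v1 and v2 would average answers to two
# different questions.
tallies = defaultdict(lambda: {"liked": 0, "total": 0})
for r in responses:
    tallies[r["version"]]["total"] += 1
    tallies[r["version"]]["liked"] += r["liked"]

for version in sorted(tallies):
    t = tallies[version]
    print(f"{version}: {t['liked']}/{t['total']} liked it")
```

Quoting a single percentage for ‘the’ question is only valid within one version’s subset.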

4. Subjective answers need to be standardized

Remember, when asking whether something was ‘easy’ or ‘hard’, that answers to these questions are going to be subjective. People are likely to have a wide range of expectations about how a system should behave, and a wide range of experience, and so will be judging on different scales.

Dr Graham McAllister tells a story related to this. When doing usability testing, he asked ‘did anyone have any problems with the program?’… no reply. So he asked instead ‘did anyone think that someone else might have problems with this program?’, and a whole host of replies came from the same people.

Don’t forget that pride can be a factor preventing people from saying they found tasks hard. Shifting the focus of the questions from the participant to the medium can help prevent this.

Also, terms such as ‘often’ or ‘rarely’ mean different things to different people. Try to replace them with specific terms: ‘every day’, ‘every week’, and so on.

5. The questions reflect your opinion

Because of the closed, controlled environment that a questionnaire creates (participants can only answer the questions they have been asked), it is important to make sure that the researcher’s opinions do not show through in the questions. Watch out in particular for leading questions, which make it easier to answer one way than the other. I saw an advert recently, for some sort of Christian business, that asked ‘Does God exist?’ with tick boxes for ‘Yes’, ‘Probably’ and ‘No’. This is a leading question – the only indefinite reply implies agreement. Where is ‘probably not’, ‘neither agree nor disagree’ or ‘don’t know’? (Answer: not on an advert paid for by the church.)

6. You need to give people a reason to participate

now that’s an incentive

Before I go on with this list, I was wondering if you’d be happy to answer 25 questions on your opinions of southern English fauna and shrubbery. Please click here to fill it out.

Did I mention that filling out the survey gets you a £25 Amazon voucher? Do you want that link again?

The point, as I’m sure you guessed, was that you need to offer an incentive for people to participate in your questionnaire, otherwise only people really interested in the subject will reply. Suitable incentives would be discounts, free products, a prize draw, or something related to the field you are investigating.

7. The data can be skewed towards extreme opinions

Failing to give a good enough incentive, or any incentive at all, will leave you with unrepresentative data – only people who feel strongly enough about the subject matter will bother to reply. In practice this will be either people who are really angry about it, or people who love it, and this will skew your data towards the extremes. To ensure you get a representative selection of participants, steps need to be taken, such as pre-selecting participants, or offering incentives as covered above.
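
To illustrate the effect, here is a small simulation – the population and the response model are entirely made up. The assumption is simply that the further someone’s opinion is from neutral, the more likely they are to bother replying:

```python
import random

random.seed(42)

# Hypothetical population: opinions on a 1-5 scale, mostly moderate.
population = [random.choice([1, 2, 3, 3, 3, 4, 5]) for _ in range(10_000)]

# Assumed response model: probability of replying grows with distance
# from neutral (3); perfectly neutral people never reply.
def responds(opinion: int) -> bool:
    return random.random() < abs(opinion - 3) / 2

sample = [x for x in population if responds(x)]

pop_extreme = sum(x in (1, 5) for x in population) / len(population)
sample_extreme = sum(x in (1, 5) for x in sample) / len(sample)
print(f"Extreme opinions: {pop_extreme:.0%} of population, "
      f"{sample_extreme:.0%} of respondents")
```

In this toy setup roughly two sevenths of the population hold an extreme view, but around two thirds of the self-selected respondents do – which is exactly the skew described above.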

So there we have it. Seven tips to help you make effective questionnaires. Enjoy asking people things!

4 Comments:
  1. Martens19 25 Dec, 2009

    I want to quote your post in my blog. Can I?
    And do you have an account on Twitter?

  2. Steve 31 Dec, 2009

    I think you’ve asked before… yes, but reference me. Also yes to twitter, see link on the side bar! :-)

  3. Thanks for all 7 useful points..need to implement this soon.

  4. Anna Kitowska 25 Mar, 2013

    This is a very good list. First of all, I am a huge advocate of mixing the usability tools – especially survey questions with others (automated testing could be a good example) as it gives so much more insight. I liked your point about incentives and how the survey may quickly shift towards extreme opinions – a very interesting problem that should be addressed more frequently. Cheers
