As user experience designers, a key component of nearly all the techniques we use in our practice is the one-on-one interview. It’s the basis of requirements gathering, usability testing, and task analysis. To remove our personal biases, expectations, and opinions from the questions asked, I practice a kind of questioning technique called the nondirected interview.
The questions asked are at the heart of any interview. Following is a loose set of guidelines to help you frame questions in a way that elicits honest and accurate responses.
Focus on experience, not extrapolation.
Avoid questions that force the interviewee to extrapolate an answer beyond their limited understanding of the problem at hand. Here’s an example:
Bad: “Is this a useful feature?”
During an interview, this question gets interpreted as “In the universe of all things, do you think that someone somewhere could find some use for this feature?” People are generous about how they interpret “someone,” “somewhere,” and “some use,” so they will answer this question in the broadest way possible, which is not useful for evaluating the feature.
Better: “Would this feature, as it’s currently presented, be valuable to the way you do your work right now?”
This is what you generally want to ask, but its wording is kind of awkward, so you probably need to get that kind of information by asking several questions.
Best: First ask, “Which aspects of this product would you find valuable in terms of how you work right now?” Then follow up by focusing specifically on the feature you’re interested in: “How would you use [the feature in question]?”
By grounding these questions in terms of the interviewee’s actual experience, the answers will tell you much more clearly whether the person understood what the feature does and whether they actually have a reasonable use for it.
Concentrate on immediate experience.
People’s current behavior predicts their future behavior better than their predictions do.
Bad: “Is this interesting to you?”
As people answer this question, they may imagine that at some point in their lives they could find it (whatever it is) interesting and say yes, but that’s almost never useful when you’re trying to figure out how to prioritize features in an interactive experience. You want to know whether it’s useful now.
Good: “Is this something that you would use today?”
Keep questions neutral.
The question should not imply that you’re expecting a specific answer or that any answer is wrong.
Bad: “Don’t you think that this would be better if it was also available on PDAs and cell phones?”
This implies that you think PDA and cell phone access are a good idea and that you would disapprove if you heard otherwise.
Good: “Is there any other way you’d like to use a feature like this?”
Asking this question and then prompting people to discuss PDAs and cell phones gives them the opportunity to think in an unconstrained way before focusing on a specific topic, and it doesn’t imply a value judgment about their perceptions.
Make questions open-ended.
Given a limited set of options, people will choose one of them, even if their view lies outside the choices presented, or if more than one is acceptable. They’ll pick the one that’s closest to how they feel, but often that’s not how they really feel. Questions should give people the opportunity to express their actual opinions.
Bad: “Which feature from the following list is most important to you?”
This assumes that there are features which are important to them and it assumes that one of them is more important than any other. That’s a lot of assumptions.
Better: “Rate from 1 to 5 how important each of the following features is to you, where 1 is least important and 5 is most important. Put 0 if a feature is completely unimportant. Write down any features we may have missed.”
Still better is asking a question that gives people complete freedom to discuss a topic (within the constraints of a certain subject, of course).
Best: “Does the product do anything that’s particularly useful to you? If so, what is it? What makes it useful?”
The downside is, of course, that the more open the question, the greater the burden on the person analyzing the responses. But it’s better to get a complete, honest response and do some extra work than to get an inaccurate answer to the question.
Avoid binary questions.
Binary questions have the form “yes/no,” “true/false,” or “this/that,” and they force people to make a black-and-white choice when their attitude may not lie near either extreme.
Bad: “Is this a good idea?”
This misses the subtlety in people’s attitudes. Although it may be nice to get a quick sample of people’s off-the-cuff opinions, it’s much more valuable to know what they find good and bad about the idea, rather than just whether they think the whole thing is good or bad.
Good: “Is there anything you like about this product? What?”
Send Me Your Questions
Nothing guarantees that people are going to give you perfect responses if you follow these guidelines. They’re simply a useful list of things to keep in mind when you’re sitting down to write questions for usability testing, contextual inquiry, or even surveys. I should really make a clever mnemonic for remembering them (if you can think of one, please email it to me).
I would also appreciate good and bad examples of questions. To start, here’s my favorite leading question, taken from a 19th century survey of factory workers (quoted in Earl Babbie’s Survey Research Methods):
Does your employer or his representative resort to trickery in order to defraud you of a part of your earnings?