Steve Portigal is a true user research pioneer. Over the past 15 years, he’s helped all kinds of clients discover new insights about their customers, and his work has informed the product development of music gear, wine packaging, video conferencing systems, corporate intranets, and much more.
He’s written two books, Interviewing Users and Doorbells, Danger, and Dead Batteries, a collection of user research ‘war stories,’ which are personal accounts of the challenges researchers encounter in the field. He also hosts the podcast Dollars to Donuts, in which he talks to people who lead user research in their organizations.
We caught up with Steve to discuss why, as user researchers, we should embrace the lack of control, how to choose the right participants for user research interviews, and how to critique interview sessions.
Should we focus on sharing successes over career failures and mishaps?
We need to share both! Thinking about the field of user research, it’s important for us as practitioners to continue our development. Examining what went wrong (or what was different from what we expected) can highlight practices that might have avoided any particular mishap. But user research is so much about people and all their quirks, personalities, strengths, flaws, emotions, and so on — it’s what the work is about! There are inevitably surprises and failures, so another way to think about improving our skills is to accept the lack of control, and even embrace it.
Researchers are often ‘selling’ the benefits of the practice to colleagues and stakeholders, and while I’m probably not going to lead with failure stories, it’s helpful to have a framework for considering them. ‘Failures’ are inevitable, and while we work hard to prevent them, they are still coming for us; reframing them as part of the messy experience of working with people — which is what we’re out there to embrace — can help us discuss them more realistically with our collaborators. There’s no reason any of us should feel alone with these experiences; as Doorbells, Danger, and Dead Batteries illustrates, they are part of this work.
What common mistakes do people make when interviewing users, and how can you avoid them?
I see people who are less experienced trying too hard to direct their interviewees:
“So, would you say that you live a hectic lifestyle?” “Is your network always on or do you just turn it on when you come in?”
I believe a lot of interviewing is done without any expectation that it’s an opportunity to learn about the process as well as the content. Teams are under pressure to come to conclusions and make recommendations, and I get that, but taking some time to reflect is how you get better.
I’m referring to a specific type of reflection: not gazing into the middle distance thoughtfully, but doing some purposeful work. Take a recording and a transcript of an interview and go back through it, listening and watching for what worked and what went awry, and articulating what you might do differently the next time. This isn’t about shaming; it’s like an athlete watching game tapes with their coaches. And this can be collaborative; researchers can critique each other’s sessions.
This works better when there are some ideas about best practices in play (my point above about trying too hard to direct is one, but of course there are many more in Interviewing Users), whether they come from reading or from someone in the critique process who has more experience.
How do you choose the right people for interviews?
In planning research, it’s important to understand the business questions and the research questions. The naive approach is to assume that the people we interview are the people we are designing for, and that’s it. But who are the people who can give you insight into the research questions? Depending on your question, we might consider people who are no longer using your product, people who are lead users of your product, people who should be using your product but haven’t adopted it yet, or people who are enthusiastic users of an alternative or competitive product. For ‘using your product’ you can also substitute ‘engaged in the behavior we are interested in.’ You can also look for analogous users (say, looking at a team of window cleaners to gain some perspective on collaboration when your users are workers in high-end commercial kitchens).
In the planning process, you have to identify what criteria you care about, and then figure out how you are going to find people who meet them. Some organizations have an internal user research recruiting function: in a large company that’s a request through an intranet; in a smaller company it’s the person on the user research team who handles recruiting. It takes time to put this infrastructure in place, whether it’s building a database of customers or users who have opted in, or coordinating with the sales team to have this integrated into the CRM software. Another option is to go to a recruiting agency that can find the appropriate people. Sometimes it makes sense to go into a community and post a request (say, an online discussion group where people who are involved in your area of interest hang out). Sometimes we use our own social networking connections. It’s great to have a range of approaches, as a new project will often challenge our existing participant recruiting and we’ll need to extend it.
How many participants should you select for your interviews?
I like to think about the sample size in selfish terms — how many people can I keep my arms around? Over the course of a study, I like to have a sense of who the people are — their names, some of their stories, and so on. I can’t recall everything or keep it all straight, of course, but there’s a point at which you can get overwhelmed by the data, anecdotes, stories, names… you find yourself saying “wait, who was the one who had the tortoise in her backyard?” Again, this isn’t binary; I am always forgetting some of the details (that’s why we document the interviews!), but I don’t like to be overwhelmed by it all. If there’s a desire for a larger sample, I push for several phases of work, so we do some interviews, make sense of what we’ve learned, and go back out to a different audience with some hypotheses in place.
Sometimes there’s a desire to plan the sample in terms of coverage, but that can quickly get out of control — “Okay, we’ll need two households with young children who shop regularly online using our site, and then we’ll need two households with no children who don’t shop online at all but do shop retail regularly, and….” I like to identify the crucial criteria (e.g., active online shopper, late-model mobile device) and treat the other factors as ones to seek a mix of (e.g., age, income, household composition). You may seek coverage in quantitative research, where you are chasing statistical significance, but it’s not feasible here. In a lot of cases, the people in these different groups are more alike than not. The exception might be when you are looking at different roles (say, buyer and seller, producer and consumer, etc.).
Finally, I find there’s a point in the study (before I’ve done any kind of analysis or synthesis) where I start to see really interesting patterns. None of this is conclusive, but it’s like a level of activation energy that I’m trying to create for myself. I can’t get there after just a few interviews, and I can’t sustain it through a long slog with too many interviews. For me, it happens somewhere around 8 to 10 interviews.
Some well-known and very successful designers don’t do any user research. What do you think about that approach?
Let’s assume that’s true. It’s foolish to declare that the only way to innovate is through research. Even when we do research, there are many other factors at play that determine success. What does concern me is that approaches championed by ‘well-known and very successful’ individuals won’t necessarily succeed for everyone else. It’s real swell that Steve Jobs (or substitute your favorite successful innovator) did it this way. But you’re not Steve Jobs!