In part two of our Comprehensive Guide to User Experience (UX) Research, we’ll continue our discussion of UX research by diving into specific research methods.

Research is a vast topic, so consider this a short primer — but, as in my previous article, I’ll provide some tips and techniques on both qualitative and quantitative research (as well as defining what these methods are).

Qualitative research

Qualitative research is primarily exploratory research, undertaken to establish users’ underlying motivations and needs. It’s useful to help you shape your thinking and to establish ideas, which can then be built and tested using quantitative methods.

Broadly speaking, qualitative methods are largely unstructured, tend to be subjective, are at the softer end of science, and are about establishing insights and theories (which we then test, often using quantitative approaches). They tend to involve smaller sample sizes and require a degree of hands-on facilitation. With qualitative research, user behaviors and attitudes are gathered directly.

I’ll be exploring three qualitative research methods in this article — interviews, contextual inquiries, and card sorting. User testing is also particularly important. However, I’ll be exploring it in depth in a future article in this series. Until then — if you’re eager to learn a little more — I’d suggest reading Nick Babich’s 10 Simple Tips To Improve User Testing.

Interviews

Interviews are a great way to really get to the heart of your users’ needs. To facilitate interviews effectively requires a degree of empathy, some social skills, and a sense of self-awareness. If you’re not this person, find someone on your team who is.

It’s important to put your interviewees at ease. They need to feel comfortable sharing their thinking with you. If you can build a rapport with them, they’ll often open up more honestly with you. One of the biggest benefits of interviews — over surveys, for example — is that you learn not only from your interviewees’ answers but from their body language, too.

Broadly speaking interviews fall into one of two categories:

  • Structured interviews. The interviewer focuses on a series of structured questions, comparing the responses of the interviewee to other interviewees’ responses.
  • Semi-structured interviews. The interviewer adopts a looser, more discussion-driven approach, letting the interview evolve a little more naturally.

In reality, anyone who has conducted interviews will know that they are, by nature, organic. Even if you have a predetermined structure in place, it’s important to allow some breathing room for the interview to evolve.

Put some thought into your research questions in advance, but allow the interviewee the latitude to move into areas that you may not have considered upfront. Interviews are a useful way to challenge your assumptions, and interviewees can often lead you to unexpected discoveries and things that you perhaps weren’t aware of.

Facilitating an interview effectively while also taking notes isn’t easy, so it’s helpful to use another member of the team as a notetaker; this allows you to focus 100 percent on your interviewee.

Contextual inquiry

Returning to Yogi Berra: “You can observe a lot by just watching.” A contextual inquiry is a form of ethnographic interview, where users are observed and questioned in their own environment, to try to determine their approach towards specific tasks.

A contextual inquiry is focused around four key principles:

  • Context. Interviews are conducted in the user’s workplace, which affords an opportunity to experience typical working conditions, existing solutions and, equally, a user’s frustrations.
  • Partnership. The researcher and user work together to understand the user’s workflow and the tools they use.
  • Interpretation. By sharing the researcher’s observations and insights with the user, there is an opportunity for the user to clarify or extend the researcher’s findings.
  • Focus. The researcher is able to guide the user’s interactions towards aspects that are relevant to the specific project’s scope.

There are many benefits to adopting this approach because users are observed and questioned in their own workplace. A contextual inquiry affords an opportunity to get a realistic view of users’ needs and — equally — frustrations, in a day-to-day context.

Card sorting

Card sorting is a useful research method to establish information architecture (IA). In short, it’s about deciding what goes where and ensuring your information groupings make sense to the widest possible audience. Card sorting is particularly useful if you’re working with a group of stakeholders collectively.

Card sorting involves writing words or phrases onto separate cards — hence the name — then asking your research participants to organize them. Take care to ensure your cards are shuffled so that you don’t bias your users. Ask your users, individually or collectively, to organize them into logical groupings.

Card sorting is relatively cheap. It can also be used as a helpful way to build consensus amongst stakeholders, asking them to — as a team — define groupings. By asking your users to name their groupings, you can discover words or synonyms that can be used for navigation labels.

Although there are tools that enable you to run card sorting exercises online, observing and listening to your users debate groupings in person can give you valuable insight into how they see logical groupings.
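One simple way to analyze the results of an open card sort is to count how often each pair of cards ends up in the same group across participants. The sketch below (using hypothetical card names and groupings, not data from any real study) builds that agreement score with nothing but the Python standard library:

```python
from collections import Counter
from itertools import combinations

# Hypothetical results: each participant's groupings from an open card sort
sorts = [
    [{"Pricing", "Plans"}, {"Blog", "Tutorials", "Docs"}],
    [{"Pricing", "Plans", "Docs"}, {"Blog", "Tutorials"}],
    [{"Pricing", "Plans"}, {"Blog", "Tutorials"}, {"Docs"}],
]

# Count how often each pair of cards lands in the same group
pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Agreement score: fraction of participants who grouped the pair together
agreement = {pair: n / len(sorts) for pair, n in pair_counts.items()}
```

Pairs with high agreement (here, “Pricing” and “Plans”) are strong candidates to live together in your navigation; low-agreement cards like “Docs” signal areas where your IA needs more thought.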

Quantitative research

Quantitative research is primarily undertaken to test your assumptions. It’s useful for validating, at scale, the thinking and ideas you established during your qualitative research.

Broadly speaking, quantitative methods are largely structured, tend to be objective, are at the harder — more measurable — end of science, and are about testing theories. They tend to involve larger sample sizes and can be run in a more hands-off manner. With quantitative research, user behaviors and attitudes are gathered indirectly.

I’ll be exploring three quantitative research methods in this article — surveys and questionnaires, analytics, and A/B testing.

Surveys and questionnaires

Surveys and questionnaires are a powerful tool for gathering a higher volume of opinions. However, they’re generally run in a more hands-off manner. That doesn’t mean they’re not useful, but, if possible, focus on interviews first.

Surveys and questionnaires generally lack interaction between the interviewer and the interviewees, often being undertaken remotely. As such, it can be difficult to gain the insights possible when working directly with users and observing them. Often it’s what users do that’s the most interesting discovery, not what they say.

If you’re undertaking a survey, it helps to incentivize it in some way. You need to try to motivate as many users as possible to participate. Also, if it’s possible and you’re with the people you’re surveying, paper-based surveys beat digital surveys every time. People have a tendency to forget to return to digital surveys, bookmarking them in their minds for later. Paper is more immediate and, as such, leads to a greater return rate.

Spend time on your survey questions and distill them down. It’s better to ask fewer questions and increase the chance of returns than to ask endless (often irrelevant) questions and lose participants through boredom.

Lastly, the design of your survey is important and can improve completion rates. Typeform is a lovely tool that uses beautiful design to make even surveys pleasurable.

Analytics

We’re fortunate now to have considerable quantities of data at our fingertips. Tools like Google Analytics — the most widely used web analytics service — let you measure website traffic and generate reports quickly and easily. Analytics, although occasionally a little overwhelming, are great for testing your assumptions.

Drawn from data, analytics can be a persuasive tool when working with business executives who, more often than not, “Want to see things in black and white.” Having easy access to numbers of unique visitors, page views, pages per visit, and other metrics allows you to test your thinking once you’ve implemented your design after your qualitative research phase.

Analytics is a huge topic and one which can be hard to grasp if data and statistics aren’t your strong point. If you’re looking for a concise overview, Neil O’Donoghue’s Advanced User Research Techniques on Medium is a great place to start.

A/B testing

A/B testing is another useful tool to test out multiple ideas to see which ones work best. Essentially a controlled experiment with two variants, A and B, this form of research allows you to test different designs’ effectiveness against each other.

As the name implies, two versions are compared, which are — more often than not — identical apart from a single variation that might (or might not) affect a user’s behavior. A/B tests can be useful when testing assumptions informed by your qualitative findings.

A/B testing doesn’t just have to focus on visual design — it can focus on language, too. For example, you might want to test a call to action (CTA) button with two variants of copy:

  • “Start your free 30-day trial,” or
  • “Start my free 30-day trial.”

In the above example, in a test run by Unbounce, “Changing the CTA button copy from the second person [your] to the first person [my] resulted in a 90-percent lift in click-through rate.”

A/B testing works well when you have large sample sizes. With a lot of traffic to a website, or a large mailing list, you can be more confident that your findings are backed by a substantial quantity of data.
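To gauge whether a difference between variants reflects more than chance, a common approach is a two-proportion z-test on the conversion counts. The sketch below uses entirely hypothetical visitor and conversion numbers (not the Unbounce figures quoted above) and only the Python standard library:

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test for an A/B test; returns rates, z, and p-value."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical CTA test: variant B ("Start my free 30-day trial")
# vs. variant A ("Start your free 30-day trial")
p_a, p_b, z, p = ab_test_z(conversions_a=120, visitors_a=4000,
                           conversions_b=165, visitors_b=4000)
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be noise, which is exactly why large sample sizes matter: with too little traffic, even a real improvement won’t reach significance.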

Tips and techniques

  • As important as the research method you choose is who you test your thinking on. It’s helpful to use a screener to screen potential users before undertaking user research. Usability.gov has some excellent resources for this.
  • When developing interview questions, surveys, and questionnaires, it’s important to consider both qualitative and quantitative questions. Both are important. Qualitative questions are open-ended (“How might you improve this customer journey?”). Quantitative questions, on the other hand, tend to be yes/no (“Do you use this feature?”).
  • When working with groups of research participants, it’s important to be wary of the herd mentality. An opinionated focus group participant can influence a focus group if you’re not careful. Build in systems that mitigate against this to ensure everyone has a voice. One tool that’s useful to level the playing field is the K-J Technique, which helps participants reach an objective group consensus.

When choosing your research methods, it’s important to use qualitative and quantitative methods hand-in-hand. Both have their place. Qualitative methods lead to insights; quantitative methods allow you to test those insights.

The tool you choose will be informed by what you are trying to achieve, but — above all — ensure you’re undertaking some user research to kick off the design process so that you’re starting from an informed position.

Analyzing research findings

Finally, it’s important to put all this research to good use! There’s not much point in doing user research if we don’t undertake some good, old-fashioned analysis.

With a number of research methods used, it’s important to triangulate your findings, looking for correlations and patterns. Your aim is to see if any findings arise that are confirmed by your different research methods so that you can implement these findings.

Triangulation is the process of using multiple research points from multiple methods to increase your confidence in your research and assumptions. The more data points we use, the more confident we can be in our assumptions.

[Graphic: Venn diagram showing the overlap of usability testing, surveys, and analytics]
By looking for the points of overlap in our different user research methods, we can be more assured that our findings are accurate.
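Triangulation can be as simple as tallying how many methods independently surface each finding. The sketch below uses hypothetical finding labels (not results from any real study) to show the idea:

```python
from collections import Counter

# Hypothetical findings surfaced by each research method
interviews = {"checkout is confusing", "search is hard to find",
              "likes saved carts"}
surveys = {"checkout is confusing", "wants guest checkout",
           "search is hard to find"}
analytics = {"checkout is confusing", "high drop-off on shipping page"}

methods = [interviews, surveys, analytics]

# Count how many methods support each finding; more sources = more confidence
support = Counter(finding for method in methods for finding in method)

# Findings confirmed by at least two independent methods
confirmed = [f for f, n in support.items() if n >= 2]
```

Findings confirmed by two or three methods (here, the confusing checkout) are the ones to act on first; single-source findings are worth noting but deserve further testing before you commit design effort to them.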

The more data points we use, the more confident we can be in our assumptions. This is why it’s essential to:

  1. Run different kinds of user tests and test different assumptions.
  2. Run these with multiple users.

Different research methods have different strengths, lending themselves to different scenarios. Different users respond in different ways, offering different opinions. Ideally, you need a healthy mix of different research methods and different test subjects to cover all the bases.

In short, your research findings are just the beginning of the story. With these findings in hand, it’s important to triangulate the data and see what patterns emerge. With these patterns defined, you can embark upon your design and prototyping with better-informed assumptions. Win!

Conclusion

Design research is by no means a new phenomenon. As our discipline has matured, however, we’ve seen design research and, particularly, user-centered research grow in importance. The history of design research stretches back to the late twentieth century, when it was formalized.

Bruce Archer (1922-2005), a professor of design research at the Royal College of Art in London, was a pioneer who championed research in design and helped to establish design as an academic discipline. As he summarized it succinctly:

Design research is systematic enquiry.

Bruce Archer, Royal College of Art, London

Archer trained a generation of design researchers at the Royal College. By stressing the need for well-founded evidence and systematic analysis, he helped to map over the principles of scholarly research (largely drawn from the dusty world of academia), applying them to the field of design.

Archer stressed the importance of method and rigor, for findings to be documented so that they could, if necessary, be defended. This approach might sound commonplace to us today, but Archer’s ideas were, in their time, radical and controversial, not least within an art school.

Archer’s work was essential. It established the need to approach design in a systematic manner, informed by the needs of users, identifying these needs through systematic enquiry.

This understanding — that design should be informed by user needs and that user research is a route towards understanding — has changed user experience thinking for the better. Our goal, above all, is to inform the design process from the perspective of our users, not from the perspective of our assumptions.

Seeing things through our users’ eyes is the surest path to delivering a better and more memorable experience, and user research is how we find that path.