Illustration by Prabhat Mahapatra

Co-written by Rebecca Gordon.

On the day that Illustrator on the iPad launched, it immediately hit #1 on the U.S. and Japanese App Stores, earned a 4.6-star rating, and was well reviewed by many mainstream publications. The initial success of this app was by design, not by chance. A useful Beta program enabled the team to design and implement the product successfully, and perhaps most importantly, our users felt invested, included, and involved in creating the future of the Illustrator ecosystem. In the words of one Beta participant, “I’ve participated in quite a few beta programs and the Slack channel community here is awesome, I’m so glad to be a part of the community…Love pushing Illustrator for iPad to its limits, making the experience intuitive. Thanks so much!”

Beta programs are used to gather feedback on products or services during the development cycle in order to make improvements before launching. They come in once the product is developed enough for target users to complete one or more key workflows from end to end, giving the team a sense of what the real-world experience will be. In the software development world, Beta testing is common, but it varies greatly in focus and form. Over the years at Adobe, we’ve worked through trial and error to develop a set of beta best practices. We believe we’ve hit upon 10 key themes that can be a powerful tool for helping any team launch a successful product. Below, we’ll dive into those topics and provide more information about how to implement them in your own beta program.

Dave, a member of the Illustrator on the iPad Beta program, shares his design process.

10 steps for a useful beta

1. Pick a goal.

This may seem obvious, but as a team, you need to be aligned on a goal for your beta program. This goal will provide a clearly-defined priority when making smaller-scale decisions, such as who to invite to the program or what type of virtual events to host.

If the primary goal that your team aligns upon is marketing and engagement, it will be harder to glean feedback to refine the experience design and product development strategy. Beta programs that prioritize marketing, for example, may over-index on participants that are social media influencers, hoping that those influencers will promote the product to their followers. In that case, the relationship between the product team and the beta cohort will likely be more biased towards pleasing the users than getting actionable product feedback. With influencers in the program, the team will be less likely to investigate challenge areas, share the areas they’re looking for feedback on, or welcome negative feedback. Similarly, if the beta is focused on tutorials that highlight how amazing the technology is, the users will be deterred from providing raw, unbiased feedback – which would be more representative of how real-world users will respond when the app is launched.

If your team’s goal is to glean actionable feedback to help refine the users’ experience, then this article is for you! Harvesting useful product feedback will benefit the whole team; designers will get helpful feedback on the app’s UX, engineers will find bugs that help them strengthen their code, marketing will develop an invested community, and product managers can adjust the feature set and future roadmap according to users’ needs.

Edinah Chewe, a member of the Illustrator on the iPad Beta program, shares her creative process.

2. Create a community.

Users want to feel listened to. Foster a community and keep it engaged, encouraged, and welcomed. Ensure that each person in the community feels heard by routing issues, questions, and logistical requests to the right cross-functional team member, so users can hear from the correct “expert” on any given topic. If users stop getting responses to their thoughtful questions or helpful feedback, they’ll stop communicating. Encourage engagement by setting up fun Easter eggs like surprise giveaways, engaging design challenges, and helpful workshops. From a logistics perspective, make sure the entire cross-functional team has some responsibility for answering questions and following up on user feedback. That way, the burden of engagement will not fall on a single team member, and the team’s ability to engage can be maximized.

As for the community platform, we’ve been using Slack. Online forums specific to your company are less likely to garner constant interaction and engagement – simply because users won’t be used to communicating in that context. What’s most important is meeting users where they are: we found that many of our users were already using Slack to communicate with clients and collaborators on a daily basis, so that integration worked for us. By all means, find the places where your users already congregate and investigate those platforms as potential community hosts. It is also important to be confident that the platform offers security and privacy for your users.

No matter your platform, it’s important to host separate communities if you will have two or more different user types in your Beta. This will make it easier to contextualize the findings, and it will be a better experience for your Beta participants. We learned this the hard way: in one Beta, we had included both students and professionals. We quickly realized that the students were intimidated by the professionals, and they felt excluded from the conversation. In Slack, this meant that we needed to develop separate identical workspaces (not just channels).

Illustrations from Illustrator on the iPad Beta program participants, featured on Behance.

3. Be user-centered.

Ideally, the whole cross-functional team will be talking to users. Make sure all are prepared! There are certain best practices to be aware of when speaking directly with users (we won’t cover all of them here). For one, users will have a bias towards pleasing you as the moderator. Following the guidelines in this article will help reduce that bias and give space for the participant to express themselves and provide crucial information.

Remember that it is alright for participants to struggle a bit (which gives us insight into what we need to fix), and for us to not provide the answer – instead, all team members should be listening for the “why.” For example, if a user says, “Hey, I want x feature,” don’t respond by saying, “That’s not on our roadmap.” Instead, ask why they want that feature – what would they use it to do? When would they use it? Where would they expect to find it? This can be much more insightful – it will give the design team a better sense of expected entry points; it will provide clarity to Product Management about what features certain workflows require; and it might even show that the user’s problem could be addressed by a different feature that accomplishes the same goal.

Illustrations made in Illustrator on the iPad by Edinburgh-based participant Monika Jurczyk.

4. Carefully screen the participants.

The big question that arises when a team sets out to start a beta is always: who will be invited to participate? It’s important to clearly define the criteria for whom you would like to include in the beta. These users will shape the future of the application, so it’s important to ensure that they reflect the attributes of the target user. This can include variables like profession, tool use, workflow, needs, and goals. Work with a researcher to devise questions for a screener survey that cover all of the criteria you decide on, and only invite users who match those criteria according to their completed screener survey.

While you’re at it, the screener survey is a terrific opportunity to collect other helpful data, so don’t limit yourself to asking only about the aforementioned user criteria. While the main priority of the screener is to select users to participate (or not) in the beta program, it can also be useful for other reasons. For instance, a screener can be used as a survey of the population that’s interested in your product, providing insights into which user populations might be most intrigued by the product as it’s been marketed. Additionally, the screener is a great place to harvest additional insights on those who do end up in the beta program. This user information can later be triangulated with the feedback that the user provides; for example, information about what device a user is running, how experienced they are with similar products, and what other technology they use can inform the interpretation of their qualitative feedback.
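To make that selection step concrete, here’s a minimal sketch of how screener responses might be filtered against agreed-upon criteria. Everything in it – the file names, column names, and criteria – is a hypothetical stand-in for whatever your own screener collects.

```python
import pandas as pd

# Hypothetical screener export from a survey tool; column names are placeholders
screener = pd.read_csv("screener_responses.csv")

# Criteria the team agreed on: target profession, the right hardware, and
# regular use of a vector-drawing tool
matches_criteria = (
    screener["profession"].isin(["Illustrator", "Graphic designer"])
    & screener["owns_ipad_and_pencil"].eq(True)
    & screener["vector_tool_use"].isin(["Daily", "Weekly"])
)

invitees = screener[matches_criteria]
print(f"{len(invitees)} of {len(screener)} respondents match the screener criteria")

# Keep profile fields (device, experience, other tools) so later feedback can be
# triangulated against who provided it
invitees[["email", "device_model", "experience_level", "other_tools"]].to_csv(
    "beta_invitees.csv", index=False
)
```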

As you’re building your beta audience, be sure to give thought to equity and inclusion. After all, we live in a pluralistic – rather than a monolithic – society, and your product’s ultimate users will be reflective of that. Be sure to create an inclusive space for people of varying backgrounds, sexual orientations, genders, body types, and physical, mental, and sensory abilities. Gathering diverse perspectives can help the team better understand and empathize with issues faced by a range of users, enabling them to solve such issues ahead of launching the product. If you’d like more tips, you can check out the Adobe Design Inclusive Workshop, or other Adobe Design inclusive resources. For more details on best practices for recruiting, communicating with, accommodating, and engaging users with a range of abilities, our team will soon be sharing an Inclusive Research Best Practices guide.

Pieces created by Sophia Yeshi, a member of the Illustrator on the iPad Beta program, in the app.

5. Rigorously track and triage feedback.

All of the useful product feedback we set out to harvest will be lost should the team neglect to track and triage it. For these purposes, we chose to use Reacji Channeler, a Slack plug-in that enables efficient tagging, sorting, and routing of individual messages by simply “reacting” to them with a predefined emoji. In this way, we could “react” to bug-related feedback with a ladybug emoji and know it would be sent to engineering, while we could “react” to usability-related feedback with a clipboard and know it would be sent to design. The whole team was involved in “reacting” – it was a low-lift way to make sure the Beta feedback was being sent to the right place, without having to dedicate an entire afternoon to answering users’ questions. After all, the team is busy building a new product!
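Reacji Channeler itself needs no code, but if your team wants the same emoji-to-channel routing under its own control, a small bot built on Slack’s Bolt SDK can approximate it. The sketch below is only illustrative: the token, emoji names, and destination channel IDs are placeholders, and you would match them to whatever your workspace actually uses.

```python
from slack_bolt import App

# Placeholder credentials; real values come from your Slack app configuration
app = App(token="xoxb-placeholder", signing_secret="placeholder")

# Hypothetical mapping of reaction emoji names to destination channel IDs
ROUTES = {
    "bug": "C000000B1",        # bug-related feedback goes to engineering triage
    "clipboard": "C000000D1",  # usability feedback goes to design
}

@app.event("reaction_added")
def route_feedback(event, client):
    destination = ROUTES.get(event["reaction"])
    if destination is None or event["item"].get("type") != "message":
        return  # not one of our triage emoji, or not a reaction to a message

    # Look up the message that was reacted to
    item = event["item"]
    history = client.conversations_history(
        channel=item["channel"], latest=item["ts"], inclusive=True, limit=1
    )
    text = history["messages"][0]["text"]

    # Forward it to the destination channel for triage
    client.chat_postMessage(channel=destination, text=f"Beta feedback: {text}")

if __name__ == "__main__":
    app.start(port=3000)
```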

We also used Instabug, an in-app service that allows users to quickly and easily share bugs, logs, and crashes. Users have the option to report areas of improvement directly. In general, Instabug feedback would be sent to engineers or QE folks, while usability requests or feature requests would be sent to Research to analyze.

6. Actionably analyze the feedback.

Following the sorting of inputs using the Reacji Channeler, the feedback can be shuttled into database software (we used Airtable), where it can be sorted, tracked, organized, and analyzed by a researcher. The entire team should have access to this data so that they can reference it if they’d like to dig into the details. After analyzing the feedback in Airtable, the researcher shares the findings with all cross-functional teams, who can then implement the feedback. For us, that took the form of a weekly segment at our All Hands meeting, where the researcher shared the top few insights coming out of the beta: feature requests that were coming up time and again, or repeated usability issues. (It’s important to note that this was done for multiple countries, per the internationalization point below.)

For instance, the top U.S.-based feedback would be shared at this weekly meeting, but so would Japanese feedback. This was very helpful for getting the whole team on board with culturalization. Each finding was paired with actionable recommendations and next steps. Depending on those steps, some of these recommendations were entered into Jira, which ensured accountability.
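As a rough illustration of the hand-off from Slack to the tracking database, feedback records can be appended to a base through Airtable’s REST API. The base ID, table name, token, and field names below are placeholders rather than the schema we actually used.

```python
import requests

# Placeholder base ID, table name, and token
AIRTABLE_URL = "https://api.airtable.com/v0/appPLACEHOLDER/Feedback"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN", "Content-Type": "application/json"}

def log_feedback(source, category, text, country):
    """Append one piece of beta feedback as a row in the tracking table."""
    record = {"fields": {"Source": source, "Category": category,
                         "Feedback": text, "Country": country}}
    response = requests.post(AIRTABLE_URL, headers=HEADERS, json={"records": [record]})
    response.raise_for_status()
    return response.json()

# Example: a feature request surfaced in the Japanese Slack workspace
log_feedback("Slack", "Feature request", "Add a vector eraser tool", "JP")
```

Tagging each record with its country of origin is what later lets the researcher slice the weekly insights by locale.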

7. Internationalize all processes.

On that note of international feedback, it is important to consider the full range of your potential audience. For that reason, all practices should be operationalized internationally as well. Internationalization is not just about localization. As our colleagues Wilson Chan and Mika Nakamura point out, international research extends beyond language to include cultural colors, symbols, aesthetics, device usage, connectivity levels, technical standards, workplace processes, purchasing styles, learning styles, and even legal considerations. For all of the best practices we outline in this article, we recommend operationalizing them in other countries (and languages/cultures) as well.

Cute, color-blocked illustrations of a bear wearing a crown and holding a flower, and a boy wearing a hat, created in Illustrator on the iPad by Japanese Beta participant Shunsuke Satake.

8. Find your village.

It takes a village to raise a beta! A successful beta requires dedicated cross-functional resources. Depending on your team structure, that may include engineering, product management, design, research, QE/QA, content strategy, and community management. We’ve found that a beta works best if someone is dedicated to it as their main focus – that’s right, running a Beta can be a full-time job. We strongly recommend having a UX researcher involved in the process from start to finish. While they likely won’t run the day-to-day logistics of the beta, the researcher can set up the program to get the best data possible. This means gathering data that is actionable and predictive. The UX researcher can help with activities such as user-facing feedback sessions, surveys, and data analysis. It’s also important to point out that beta users are often fans of your product (more on that later). A researcher will be aware of this positive bias and account for it in their analysis.

9. Evaluate longitudinally.

One of the strongest tools in a researcher’s toolkit, a longitudinal study allows the team to focus on full workflows in the app over a significant period of time (e.g., a month) in order to better understand how the user experience evolves. For our longitudinal studies, we start out with a 30-minute pre-study ethnographic interview with each participant. Then, there are weekly tasks requiring participants to go through key workflows and identified risk areas. Participants are required to provide structured written feedback at the end of each week. We close with a 30-minute post-study debrief and feedback interview with each participant. Throughout the entire process, participants have an open channel with the moderator and are encouraged to contact the moderator at any time with frustrations, bugs, problems, or questions. Again, it’s important to include people with disabilities and people with international perspectives.

The results of a longitudinal study can surface what helps or impedes user success, and they can also provide a way to measure proficiency improvements over time. For example, one of the key metrics we look for is a user’s perceived competency increasing over time; at the same time, their time-to-complete (or the length of time it takes them to complete a task) should be decreasing with increased proficiency. Basically, you’re aiming for something that looks like this:

In a longitudinal study for a beta program, the desired outcome is for understanding of the product to increase over time, while the time required to complete a task with the product decreases.
Graphic by Damon Nelson.
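A lightweight way to check for that pattern is to aggregate the weekly study data and confirm that the two trends move in opposite directions. The numbers below are made-up illustrative data, not results from our study.

```python
import pandas as pd

# Hypothetical weekly longitudinal results: one row per participant per week
data = pd.DataFrame({
    "week":                 [1, 1, 2, 2, 3, 3, 4, 4],
    "perceived_competency": [2, 3, 3, 3, 4, 4, 4, 5],    # 1-5 self-rating
    "time_to_complete_min": [42, 38, 31, 33, 25, 27, 20, 22],
})

weekly = data.groupby("week").mean()
print(weekly)

# The pattern we hope to see: competency trending up, time-to-complete trending down
print("Competency rising:", weekly["perceived_competency"].is_monotonic_increasing)
print("Time-to-complete falling:", weekly["time_to_complete_min"].is_monotonic_decreasing)
```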

10. Benchmark the results.

Benchmark testing offers consistent metrics upon which to measure the users’ perception of the application over time. In particular, benchmarks provide baseline metrics to monitor and track over time as new releases go out, thereby measuring the impact of bug fixes, performance updates, and new features that accompany each release. They inform the team as the app continues to evolve by identifying any barriers to success and satisfaction that might be addressed (mental model mismatches, unclear copy, quality of onboarding, etc.). They also allow a researcher to gather deeper insights into the user’s experience. For this purpose, we use in-app surveys that are pushed at every substantial update.

Choose the metrics that you are most interested in measuring over time – some ideas: ease of use, performance, satisfaction – and ask about them as survey questions using Likert scale response options. A great metric for betas is the users’ “intent to integrate” the product. That is, would they actually use this app if it were launched tomorrow? We also recommend pairing each quantitative question with a qualitative open-ended question; that is, ask “Why?” for each Likert scale question. Numbers alone don’t tell the full story!
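Once the in-app survey responses come back, the benchmark itself is just a consistent aggregation repeated at every release. The sketch below assumes hypothetical 1–5 Likert responses and computes a mean score plus a “top-2-box” share (the proportion answering 4 or 5) per release; your metric names and scales may differ.

```python
import pandas as pd

# Hypothetical in-app survey responses (1-5 Likert), collected at each release
responses = pd.DataFrame({
    "release":             ["1.0", "1.0", "1.1", "1.1", "1.2", "1.2"],
    "ease_of_use":         [3, 4, 4, 4, 5, 4],
    "performance":         [2, 3, 3, 4, 4, 4],
    "intent_to_integrate": [3, 3, 4, 4, 5, 5],
})

# Benchmark: mean score per metric for each release, tracked over time
benchmarks = responses.groupby("release").mean().round(2)
print(benchmarks)

# Top-2-box: share of respondents answering 4 or 5, a common way to report Likert data
top_2_box = responses.set_index("release").ge(4).groupby("release").mean().round(2)
print(top_2_box)
```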

This video animates the design processes of many of our beta participants.

Though a Beta program requires an up-front investment of time and resources, it is substantially easier and less expensive than trying to retrofit a mis-designed app after it has been launched. The rich information gathered through our beta program enabled the Illustrator on the iPad team to make critical pivots during development, gauge product readiness, refine marketing materials, and release a product that was met with success and user appreciation.