Illustration by Tridib Das

What people do and what people say they do are two entirely different things. And your consumers are people. To learn what your consumers are really doing, or what they really want to do but aren’t, usability testing comes into the picture. Usability testing is the practice of evaluating how easy a design is to use with a group of representative users. The gap between what you meant for your users to do and what they are actually doing is the gap between your outlook and your actual business.

If you belong to the business side of the organization, you know the damage that a 1% dip in conversions can do to your revenue. Competitors gain the business you just lost, and you lose market share as well as repeat sales. But did you know that a user experience (UX) study can help you increase this conversion rate, especially during new customer acquisition efforts?

UX research is a rising topic across industries: 54% of consumers say the customer experience at most companies needs improvement. A good start to the product lifecycle can mean higher customer retention, higher order value, and loyalty, and UX plays an important role in all of these.

As we progress from a digital-immigrant economy to a digital-native ecosystem, the space is characterized by shorter attention spans, browsing on multiple screens at once, and lower patience thresholds. For a business, this translates to solving customers’ needs really quickly, or risking losing them. UX research helps you gain this advantage, and it should become part of your business by design.

So, why should you deploy UX research?

A.     Markets disproportionately reward the best designs. McKinsey’s study reported that top-quartile companies, those with the best designs and user experiences, recorded 10% annual growth, compared with 3–6% for their industries, and this held across all the industries considered in the study. Top-quartile companies also had an unfair edge: the differences between the 4th, 3rd, and 2nd quartiles were marginal.

B.     Your internal employees are too familiar with, and too close to, the product. They lack a fresh customer’s eye, an outside-in view, and a user’s perspective, and hence are not qualified to test what they’ve built. The objectives of a customer using the product and of an employee working on the product are different. Customers are looking to research and buy. Internal employees are eyeing better features, competing with industry players, and imitating best-in-class experiences. In most cases, these objectives are not the same.

C.     Your intuition-based UI may conflict with customers’ conditioned expectations. The truth is, the web experiences provided by the top players in the world (the likes of Amazon) have conditioned consumers to use certain UI elements in certain ways. Even if an experience could arguably have been better designed, consumers look for certain information only in that particular way. The way iPhone users navigate back a page differs from the way Android users do. A ‘Save’ button still carries a floppy-disk icon, even though that design is two decades old. Logically, designing top-notch experiences should mean giving your consumers the latest features; intuitively, consumers expect something they’ve already seen and used, and this is precisely where friction arises.

D.    There’s no substitute for good usability research. It is understandable that there can be trust issues about how a test is run; there are so many nuances in running a test that it can never be perfect. For example, in a benchmarking exercise, whether to run the test within subjects or between subjects is one question, and both options have advantages and limitations. To overcome this lack of trust in any single methodology, triangulation is advised: using multiple data sources to validate and compile the findings yields a holistic, trustworthy picture. If quantitative analysis indicates low subscription rates, you might use a qualitative study to find the friction points, and top that up with an expert analysis benchmarking against competitors for recommendations. But not understanding your UI should not be an option.

How to start with UX research

You could start up your UX team with one researcher and pilot the effectiveness of the method. This person will be responsible not only for running tests, but for developing relevant testing methodologies, templatizing test formats for efficient turnaround times, and then running tests on small impact areas initially, say a part of your webpage or web journey with high exit rates. A fairly small budget can kick-start your UX research journey here. Tools available online, such as remote-testing platforms, help you drive this agenda without it being heavy on your pocket. Once you progress and get proof of concept, you can expand the team and its objectives, and then tie them to your business KPIs.

The team can then expand with an additional member: a UX designer aligned to the UX team. You will also want to collaborate closely with the analytics and data teams to get a head start on which problems to prioritize and solve. Analytics identifies ‘what’ the problem is; UX’s job is to find out ‘why’ and recommend improvements. As your UX team grows, find UX allies throughout the business units: people who are sympathetic to customers and their problems. They will be key to driving a UX culture in the organization.

If you are already convinced that you cannot live without UX, and you belong to a fast-growing digital business, you can also expand the scope by adding more members to the team and/or hiring an agency to conduct your studies. This ensures a holistic, dedicated approach and evangelizes UX as part of your business objectives by design.

The idea is to start right now, no matter how small.

How do you measure the value from UX research?

Setting up a good benchmarking practice is key. This is mostly going to be an iterative strategy. The first step is to define what to measure, and how: which metrics are going to be evergreen and representative? Then record these metrics over time, and after every enhancement in your product. Based on these measured changes, you will be able to make more informed decisions.
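To make the iteration concrete, here is a minimal sketch in Python, with invented metric names and numbers, of recording the same benchmark metrics per release and comparing them across releases:

```python
# Hypothetical UX benchmark log: one entry per release, always recording
# the same "evergreen" metrics so releases stay comparable over time.
benchmarks = [
    {"release": "v1.0", "task_success_rate": 0.62, "avg_task_time_s": 74},
    {"release": "v1.1", "task_success_rate": 0.71, "avg_task_time_s": 58},
]

def metric_delta(before: dict, after: dict, metric: str) -> float:
    """Percentage change of a metric between two releases."""
    return (after[metric] - before[metric]) / before[metric] * 100

change = metric_delta(benchmarks[0], benchmarks[1], "task_success_rate")
print(f"Task success rate changed by {change:+.1f}% from v1.0 to v1.1")
```

The point is not the code itself but the discipline: the same metrics, recorded the same way, after every enhancement.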

Setting realistic KPIs for every activity is crucial. KPIs can range from ‘increase in users completing a certain step in the funnel’ to ‘decrease in call centre load’.

Non-revenue impacting example 

Let’s assume that you ran a usability test and the results showed that the ‘help’ section is sub-optimal. Through web analytics tools, you’ve also found certain ‘help’ keywords that return no relevant results on your website. This experience frustrates customers, so they call the phone numbers mentioned on the website, which raises call centre volumes by x%. When you solve this problem and deploy a fix that lets customers answer some of these queries themselves on the website, your benefit is going to be the call centre cost saving.
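The saving above can be estimated with simple arithmetic. All figures in this sketch are invented for illustration; plug in your own call volume, uplift, and cost per call:

```python
# Illustrative (made-up) figures for estimating the call centre saving.
monthly_calls = 10_000     # baseline call volume
uplift_pct = 0.15          # the "x%" spike caused by the broken help section
cost_per_call = 4.0        # fully loaded cost of handling one call
deflection_rate = 0.60     # share of the extra calls the fix moves to self-service

extra_calls = monthly_calls * uplift_pct
monthly_saving = extra_calls * deflection_rate * cost_per_call
print(f"Estimated monthly saving: ${monthly_saving:,.0f}")
```

Even a partial deflection rate turns a UX fix into a measurable, recurring cost saving.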

Revenue impacting example

Assume you run a health insurance business. Web analytics surfaced friction points on the website (low registration completion rates), and usability testing revealed the reasons for the drop-offs (along with a low ease-of-use rating). If you then mend this journey and the registration completion rate rises, your overall conversion rate improves, and with it a revenue stream.
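A back-of-the-envelope model makes the revenue impact tangible. The visitor counts, rates, and premium below are hypothetical, chosen only to show the shape of the calculation:

```python
# Made-up funnel figures: registration completion rate before vs after a UX fix.
monthly_visitors = 50_000
registration_rate_before = 0.20   # before mending the journey
registration_rate_after = 0.26    # after mending the journey
purchase_rate = 0.10              # registrants who go on to buy a policy
avg_annual_premium = 600.0

def annual_revenue(registration_rate: float) -> float:
    """Annual revenue implied by a given registration completion rate."""
    return monthly_visitors * registration_rate * purchase_rate * avg_annual_premium * 12

uplift = annual_revenue(registration_rate_after) - annual_revenue(registration_rate_before)
print(f"Estimated annual revenue uplift: ${uplift:,.0f}")
```

A few percentage points of registration completion, multiplied through the funnel, compound into a sizeable revenue difference.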

Remember…

Exceptional customer experiences are the only sustainable platform for competitive differentiation. Once you realize the initial benefits of usability testing and research, there will be no looking back. The method, however, will have to keep evolving to suit your business and draw maximum rewards. Remember: if you don’t take care of your customers, your competitors will.

“Don’t listen to what people say; watch what they do.” – Steven D. Levitt, Think Like a Freak