User interfaces (UIs) have changed dramatically over the last few years. As we access the web on a myriad of devices, UIs need to adapt to our needs. They need to take into account slow connections, older browsers, and the wildly varying specs of the devices we use. Some of those devices don’t even have a screen.
Screenless — or headless — interfaces are becoming increasingly popular. The most common examples are Apple’s Siri, Microsoft’s Cortana, Google Assistant, and, of course, Amazon’s Alexa and its Echo devices. We interact with them using our voice, but many of the principles we’ve become accustomed to from working on graphical user interfaces don’t apply. So how do we design for them?
We asked four experts what we need to consider when designing headless user experiences and how to prepare for a future without pixels.
Transcend device boundaries
Headless user experiences are certainly a key step in the evolution of more humanistic design, but humans are inherently adaptive, and we expect our devices to be adaptive, too. Today’s headless user experiences fall short on adaptivity, as they often operate in a bubble. What if I want to switch to a computer or phone? Can I start an experience on one device and continue it on another?
To design the next generation of headless experiences, we need to think beyond a single device. How can we enable an interaction to follow a customer across a variety of devices and contexts? The first step is understanding what scenarios and customer states should transcend devices. What “slots” had the customer filled via a spoken exchange before switching devices, and can we share that state via the Cloud? What can we infer by a customer’s change in context? Once you’ve identified these key elements, don’t shy away from getting involved early in product design and architectural discussions to ensure you can share that customer state across devices.
Once we’ve begun tackling those transcendent scenarios, we turn our eyes towards the design process. Successful design for headless or multimodal user experiences is heavily dependent upon context. How are you capturing customer context in your research and deliverables? It’s often not enough to just share scripts or flows. Storyboarding can be critical in communicating the richness of your customer’s lived experience at the moment they speak to your product. High-fidelity prototyping may require video or audio components.
– Cheryl Platz, principal designer at Microsoft and owner of Ideaplatz
Find the thin line between science and magic
We’ve started to treat designing for headless UIs like being an illusionist, in the vein of Houdini or Kellar. I know that sounds odd, but it’s a great mindset. The goal is to maintain the magic and not show our hand. We aim to be clear about the bifurcation between automation and interaction: when a response is automated or templated, we want that to be obvious. But when programmed logic or deep AI is behind the communication, it has to feel natural and conversational. We work to find that thin line between science and magic, where the user feels like they still have control. The biggest hurdle facing headless UIs is the literal act of letting go of controlling the interaction.
So, our headless UI design always starts with content — which is how all design should start, to be honest. Our focus is on getting the tone, consistency, and the big little details right. That means a lot of upfront planning, documents, and building blocks. We take the same approach that we do to build design systems. Instead of setting up a grid, we set up a scaffold of content to make sure that the bones are in place for consistency. What has to remain rigid is sorted out first. Then we move on to what can flex, somewhat, around the rigid parts. Then we build for the exceptions and things that can give and bend. Going back through it all in testing, we act like detectives, trying to find the inconsistencies and the gaps. We’re working hard to stay out of the uncanny valley and, again, either give a clear appearance of automation or a magic experience that feels human.
Design systems are far more important for headless UIs, since breaks in voice and tone are more noticeable than visual breaks. That may seem odd, but it’s true. Most of us are professional enough to keep things like buttons, borders, and fonts consistent, so obvious gaps in visual tone aren’t apparent. But the very minor things stand out much more when it comes to headless: anything from the level of familiarity or colloquialism in the prompts to how verbose or succinct the UI is can be enough to give away the trick. And that’s when it frustrates the user. Then, they want to skip the headless UI and take control through a graphical user interface (GUI) or by talking directly to a human. Our goal is to build workers that can handle it all, with pleasure, and let the user feel just as in control as when they have access to a GUI or human intervention.
– Brad Weaver, partner at The Banner Years
Use markup to make sense of the content
Our first priority with any interface, headless or not, should be the content. It’s important to have clear, well-written copy. Does it speak to the audience using the words they do? Are the instructions and labels clear? Are the sentences easy to follow?
Once the copy is in good shape, I don my markup hat. I’ve got a lot of HTML elements at my disposal and I use them to further clarify the meaning of the content. Articles, paragraphs, lists, emphasis, abbreviations, all of these elements illuminate the content. They give structure to our documents and make them easier to read.
Once my first markup pass is done, I loop back to see if there are instances where the prose makes assumptions about how a user is interacting with the site. For instance, a link to “read more” can make total sense when it’s preceded by an article teaser. If a user is navigating the document via links, however (which happens often when they are using assistive technologies like voice-based interfaces), hitting one — or worse, several — links that offer little context as to their purpose can be incredibly frustrating. In cases like this, inserting visually hidden text can keep the visual experience light while providing more context about the link to folks who may not have it. I might choose to label the link “Read more about designing for headless interfaces.” I could do that via the “aria-label” attribute or I might use CSS to hide — accessibly, of course — the portion of the link text I don’t want shown.
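Either technique might look something like this (the URL, class name, and copy are illustrative):

```html
<!-- Option 1: aria-label replaces the visible link text in the
     accessible name, so assistive tech announces the fuller label. -->
<a href="/headless-design"
   aria-label="Read more about designing for headless interfaces">
  Read more
</a>

<!-- Option 2: extra text kept in the link but hidden visually,
     using a CSS pattern that leaves it exposed to screen readers. -->
<a href="/headless-design">
  Read more<span class="visually-hidden"> about designing for headless interfaces</span>
</a>

<style>
  /* A common "visually hidden" pattern: removes the text from the
     visual layout without using display: none, which would also
     hide it from assistive technologies. */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>
```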
It pays to look at how HTML (and CSS and JavaScript) can reduce barriers for non-sighted users. After all, in a headless context, we’re all non-sighted.
– Aaron Gustafson, web standards and accessibility advocate, Microsoft
Start with user research, prototype, and test with real people
I think it’s always important to start with an understanding of users and what they need. Do some user research first. It’s even more important when designing for screenless experiences.
Rather than approaching it as designing an interface, I’ve realized it’s about designing a conversation. To do that, I need to understand the range of words and utterances used when we naturally talk about the topic.
Changes to content and words in a GUI are seen as low-cost and often (wrongly) left late in development. It’s the complete opposite for spoken interfaces — words are the interface. The sooner you can prototype a conversation and get testing with real people, the better informed your UI will be.
Initially, the prototype can be very low fidelity — simply recorded conversations and discussions. Compared to designing a traditional GUI, it would be the equivalent of a paper sketch — quick to do and fast to change.
There are lots of tools out there to help you make sense of this research: flow diagrams, word clouds, and bot simulators. And Post-it notes are easy to create, sort, and change, all before any code is written.
– Hilary Brownlie, service designer