“A good tool is an invisible tool. By invisible, we mean that the tool does not intrude on your consciousness; you focus on the task, not the tool.” This famous quote comes from Mark Weiser, a Xerox PARC researcher and the father of ubiquitous computing, and it perfectly summarizes the basic requirements for a user interface.

For a long time, human-computer interaction relied on the mouse, keyboard, and touch as inputs. Interfaces built on those input types work well, but they don’t necessarily feel natural because there is always something between the user and the content they interact with: users have to press buttons, type on keyboards, or swipe the screens of mobile devices to complete an operation. Rapid technological progress, however, has made it possible to create the next generation of interfaces based on natural interactions. This next generation of UI adds touch-less control, enabling users to communicate with machines through speech and gesture.

Gestures are already an essential tool for mobile devices. In this article, I’ll discuss the concept of the natural user interface and gesture recognition UX, and cover the specific things you need to remember when designing gesture-based UI.

What is a natural user interface?

Natural User Interface (NUI) is an intuitive interface that completely eliminates the need for mechanical input devices such as a keyboard or mouse. A NUI requires no tools other than the ones we were born with: our hands and our voice. The advantage of NUI is that it feels intuitive, so users don’t have to specifically learn how to work with it. Gestures will take center stage in NUI.

Designers often inherit aesthetic ideas about technology from cinema, and a good example of NUI was introduced in the movie Minority Report. In this movie, John Anderton (played by Tom Cruise) interacted with a computer by waving his hands. The NUI interface in this movie was based on gesture recognition.

Tom Cruise interacting with a computer in the movie Minority Report.

What is gesture recognition UX?

Gesture recognition uses computer sensors to detect and understand human gestures and movements. Gesture recognition isn’t a new concept; take the iPhone as an example. With this device, gestures let users interact with screen elements using touch. The next generation of UI, however, will be a touch-free interaction paradigm taking users to an entirely new level of engagement.

In the last few years, rapid progress in gesture recognition technologies, along with the falling cost of sensors, has allowed product designers to introduce a whole new spectrum of solutions based on gesture recognition. Some products that support touch-free interactions are available on the market today. The 2016 BMW 7 Series, for example, is the first production car with gesture control. It understands a set of basic gesture commands and allows the user to add custom gestures, such as moving a finger clockwise to change the volume.

Hand using gesture control in a BMW 7 Series.

At the same time, new input methods require new design principles. When it comes to product development, it’s vital to understand what types of gestures we need to support and how to introduce them properly.

Types of gestures

When it comes to classifying gestures, Material Design defines three types:

  • Navigational gestures. Gestures that help users move through a product easily.
  • Action gestures. Gestures that perform actions, such as scrolling.
  • Transform gestures. Gestures that let users transform an element’s size, position, or rotation (a minimal recognizer for this type is sketched below).
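
To make the transform category more concrete, here is a minimal sketch of a two-pointer transform recognizer built on the standard Pointer Events API. It is only an illustration: the element id "canvas" is a placeholder, and applying the computed deltas to an actual element is left as a comment.

```typescript
// Minimal sketch of a transform-gesture recognizer on the Pointer Events API.
// It tracks two active pointers and derives scale and rotation deltas from the
// changing distance and angle between them. The element id "canvas" is a
// placeholder for whatever element receives the gesture.

const pointers = new Map<number, { x: number; y: number }>();
const target = document.getElementById("canvas")!;

function metrics() {
  const [a, b] = [...pointers.values()];
  return {
    distance: Math.hypot(b.x - a.x, b.y - a.y), // pinch distance
    angle: Math.atan2(b.y - a.y, b.x - a.x),    // twist angle
  };
}

let last: { distance: number; angle: number } | null = null;

target.addEventListener("pointerdown", (e) => {
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2) last = metrics();
});

target.addEventListener("pointermove", (e) => {
  if (!pointers.has(e.pointerId)) return;
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2 && last) {
    const now = metrics();
    const scaleDelta = now.distance / last.distance; // > 1 means fingers spreading apart
    const rotationDelta = now.angle - last.angle;    // radians of twist since last sample
    // Apply scaleDelta and rotationDelta to the element being transformed here.
    last = now;
  }
});

target.addEventListener("pointerup", (e) => {
  pointers.delete(e.pointerId);
  if (pointers.size < 2) last = null;
});
```

Deriving scale and rotation from the changing distance and angle between two pointers is what makes pinch-to-zoom and twist-to-rotate feel like direct manipulation of the object itself.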

Design principles of touch-free gesture-based interfaces

The next generation of UI cannot blindly inherit the old principles of interaction design. Designers can take existing concepts into consideration, but they need to adapt them to the new type of interaction. Here are a few important rules to remember when working on gesture recognition UX:

Avoid using the WIMP or touch-based models

One of the common pitfalls that many UX designers fall into is using a mouse-and-keyboard model for a gesture-based interface. Designers often rely on WIMP (Windows, Icons, Menus, Pointers), the standard model for desktop apps, and simply replace the mouse pointer with a human finger. This model does not apply to gesture-based interfaces because it fails to account for natural human motion: since it is built around a cursor, it makes the interaction uncomfortable.

It’s also not recommended to apply touch-based paradigms to hands-free design. What works for touch won’t necessarily work for hands-free design.

Make interaction comfortable

Making interaction with the UI comfortable should be a top priority for designers. Since users will interact with their arms and hands, you need to make sure their arms don’t tire quickly. Here are a few things to remember:

  • Make individual gestures comfortable for the user. Consider human body ergonomics when creating the UI. If a gesture is uncomfortable or too repetitive, the experience won’t be great, and there’s a high chance the user will abandon the product.
  • Avoid gestures that require a lot of physical work. Movements that involve gesturing with the hands above heart level can quickly become tiring and annoying. Games and physical exercise are an exception to this rule: when users interact with a Nintendo Wii, a lot of movement can be really positive.
  • Take the length of the user session into account. Gesture-controlled interfaces are great for short sessions, but they quickly become fatiguing over long ones.

Design intuitive gestures

Gestures are hidden controls, and this can cause problems for UI designers. Back in 2010, Don Norman from NNG drew attention to the problem: “Because gestures are ephemeral, they do not leave behind any record of their path, which means that if one makes a gesture and either gets no response or the wrong response, there is little information available.” Still, in 2019, we don’t have a universally-accepted language of gestures that we can rely on when designing interfaces.

The learning curve for interactions can be problematic for gesture-driven interfaces. That’s why it’s recommended to only use intuitive gestures so users don’t have to learn a special gesture language.

Here are a few simple tips for you:

  • Borrow gestures from real life. Try to emulate sign language or borrow gestures from it to perform actions. Observe how users naturally move their arms and hands and introduce these patterns into your UI.
  • Avoid using complex gestures. Users don’t want to memorize combinations to perform an action.

Educate users

Even with intuitive gestures, users have to know what is possible and what is not. Since they don’t have this information from the start, you need to educate them. The simplest, most effective way to educate users is through animation. For example, you can introduce hint motions that provide visual cues about possible interactions.
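
As a small illustration, here is a hedged sketch of such a hint motion using the browser’s Web Animations API. The element id "card" is a hypothetical placeholder: the card nudges sideways once shortly after it appears, suggesting it can be swiped or pulled.

```typescript
// Sketch of a one-off hint motion: the card (hypothetical id "card") nudges
// to the right and back, hinting that a horizontal gesture is available.
const card = document.getElementById("card");

card?.animate(
  [
    { transform: "translateX(0)" },
    { transform: "translateX(24px)", offset: 0.5 }, // brief nudge to the right
    { transform: "translateX(0)" },
  ],
  { duration: 600, easing: "ease-in-out", delay: 1000 } // plays once, after a short pause
);
```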

Google uses an animated fox to guide the user.

It’s also worth remembering that a UI element’s visual appearance and behavior should indicate whether gestures can be performed on it. Provide visual cues, such as a glowing effect on an object’s surface, to suggest that it can be pulled into view.

The glowing effect on a selected object assures users that the object is selected. Image by 3D Action Group.

Design for accessibility

Gesture-based interfaces can be less accessible. Some gestures might not be possible for users with disabilities—not everyone has the fine motor control to perform gestures with accuracy. So make sure you support assistive technology devices like joysticks or electronic pointing devices.

Provide realistic responses

UI elements should respond to gestures in real time to give users a sense of direct control. That’s why gestures should be designed for both simplicity and high responsiveness. Remember that users will notice any lag between a gesture and the UI’s response, so keep the response latency minimal (under 0.1 seconds).
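
One way to keep that lag in check is to apply the latest gesture sample on every animation frame instead of queuing it or animating toward it. The sketch below assumes a hypothetical readHandPosition() stand-in for the product’s hand-tracking source and a placeholder element id; neither is a real API.

```typescript
// Hypothetical stand-in for the product's hand-tracking source (not a real API).
function readHandPosition(): { x: number; y: number } {
  return { x: 0, y: 0 }; // replace with actual sensor data
}

const cursor = document.getElementById("gesture-cursor")!; // placeholder element id

function followHand(): void {
  const { x, y } = readHandPosition();
  // Apply the newest sample immediately so visible lag stays well under ~0.1 s.
  cursor.style.transform = `translate(${x}px, ${y}px)`;
  requestAnimationFrame(followHand);
}

requestAnimationFrame(followHand);
```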

Define active interaction zone

Make sure everything the user needs to access is placed within a usable range. Place interactive objects in the areas that are most comfortable for the user to reach.

The active interaction zone should be comfortable for the user. The green area allows users to see interactive objects in detail. Image by Google Design Guidelines.

Leverage the power of 3D

If your product includes interactive 3D models, the content should support viewing objects from every angle. Allow users to inspect an object from different sides using gestures. Take, for example, an app for car mechanics: users should be able to pick up a particular part of the car, rotate it just by turning their wrist, and zoom in to examine it from all sides.
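
As a rough sketch of how a wrist turn could drive such a model, the snippet below uses Three.js for the 3D part and assumes a hypothetical getWristRoll() supplied by the hand-tracking layer; the box geometry is only a placeholder for a real car part.

```typescript
import * as THREE from "three";

// Hypothetical stand-in for the hand-tracking layer (not a real API):
// returns the current roll angle of the user's wrist in radians.
function getWristRoll(): number {
  return 0; // replace with the angle reported by the sensor
}

// Placeholder geometry standing in for a car part loaded from a real model.
const part = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial({ color: 0x888888 })
);

function update(): void {
  part.rotation.y = getWristRoll(); // map wrist roll directly to the part's yaw
  requestAnimationFrame(update);
}

requestAnimationFrame(update);
```

A zoom gesture could be mapped to the camera distance in the same direct way, keeping the relationship between hand motion and model motion one-to-one.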

Anticipatory design

Anticipatory design is based on the idea of predicting user intent. When it comes to touchless gesture-based UI, you have more tools to predict what your users are trying to achieve: you can anticipate the next step based on the user’s focus and recent actions. For example, if we design an app for car mechanics, we can create a UI that deconstructs a particular element of the car engine and shows how its components fit together while the user is examining it.
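
A hedged sketch of that idea is below; every name in it (FocusSample, recentActions, showExplodedView, the two-second dwell threshold) is hypothetical and simply illustrates combining focus dwell time with recent actions to anticipate the next step.

```typescript
// All names below are hypothetical; they stand in for the product's own
// tracking and UI layers.

interface FocusSample {
  partId: string;  // which engine component the user is focused on
  dwellMs: number; // how long their focus has stayed on it
}

const recentActions: string[] = []; // e.g. ["zoom", "rotate"]

function shouldExplodeView(focus: FocusSample): boolean {
  // If the user has lingered on one part while inspecting it, pre-emptively
  // offer the exploded view that shows how its components fit together.
  const inspecting = recentActions.includes("zoom") || recentActions.includes("rotate");
  return focus.dwellMs > 2000 && inspecting;
}

function showExplodedView(partId: string): void {
  console.log(`Deconstructing ${partId} into its components…`);
}

// Example: the user has rotated the alternator and kept focus on it for 2.6 s.
recentActions.push("rotate");
const sample: FocusSample = { partId: "alternator", dwellMs: 2600 };

if (shouldExplodeView(sample)) {
  showExplodedView(sample.partId);
}
```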

Conclusion

Gesture recognition UX offers fantastic opportunities to change the way we interact with computers. We are only at the beginning of a new computer era where people will communicate with devices the way they do with each other. Touchless interactions will bring in a fresh perspective on the human-computer interaction paradigm and result in truly unique user experiences.