Chatbots that become racist in less than a day, facial recognition technology that fails to recognize users with darker skin, ad-serving algorithms that discriminate by gender and race, an AI hate speech detector that is itself racially biased: flawed artificial intelligence systems perpetuate biases, which can be largely attributed to the lack of diversity within the field itself, according to a report published by the AI Now Institute.

In part 1 of this article, we covered the first steps to removing bias in AI: recognizing our own biases, building diverse teams, implementing harm reduction in the design and development process, and using tools to measure and mitigate risks. In part 2 we’ll look more closely at the gender and racial bias AI often replicates, share practical tips for reducing bias in AI experiences based on hands-on research, and explore a real-world project created to end gender bias in AI assistants.

A critical discussion of AI and gender bias

More than 100 million devices with Amazon’s Alexa assistant built in had been sold by January 2019. Given Alexa’s scale, UX designer and creative strategist Evie Cheung was curious about the gender and racial biases embedded in the product, so she examined them by facilitating a co-creation workshop.

“The participants listened to Alexa’s voice telling a story and were instructed to draw what Alexa would look like as a human being,” she explains. “They were then asked questions about Alexa’s race, political beliefs, and hobbies.”

The view that emerged was one of Alexa as a subservient white woman who couldn’t think for herself, apologized for everything, and was pushing a libertarian agenda.

“In a vacuum, this is hilarious,” Cheung points out. “But children are now growing up with a device they’re able to order around, due to Alexa’s submissive personality and conversation design. Alexa’s ubiquity means that it has become a socializing force, influencing a child’s mental model on how they perceive female-sounding voices, and establishing a ‘norm’ for how technology is supposed to sound — in this case, female and inferior.” 

As designers, Cheung advises, we must be hyper-aware of the risk of perpetuating existing societal gender biases and anticipate how products may have detrimental impacts on future generations. To combat these biases, it’s imperative to diversify teams of designers and technologists (for more on diverse teams, see part 1), as well as the groups of users that products are tested on.

For more on Cheung’s research, check out her graduate thesis book Alexa, Help Me Be a Better Human: Redesigning Artificial Intelligence for Emotional Connection, based on a year-long investigation of AI as a tool to explore human psychology.

In Evie Cheung’s workshop, thirteen professionals from across seven industries gathered to discuss the future of artificial intelligence.

The first genderless voice for voice AI

Digital voice assistants often offer just two options for the gender of the voice users interact with: male or female. Sometimes the default is set differently to adapt to the user’s culture. In the U.S., for example, Siri defaults to a female voice, while in the UK it defaults to a male one.

“If you ask folks at Microsoft, Amazon, or Google why so many of our voice assistants are female,” explains David Dylan Thomas, “they’ll tell you that according to their research, people are more comfortable hearing certain kinds of assistance or information from women than from men. On the one hand that seems like a good answer because we all live in the world of user experience, and we always say follow the research, but we also have to ask ourselves if we are okay with what the research is telling us. Is it a good thing that people prefer to hear certain types of information from women, limiting how people view women? Are we okay with that, and do we want to perpetuate it?”

A lot of the experts David talked to said you should leave it up to the user to decide whether they want to hear a male or a female voice. Emil Asmussen, creative director of VICE Media’s agency Virtue, however, cautions that a binary choice isn’t an accurate representation of the complexities of gender.

“Some people don’t identify as either male or female, and they may want their voice assistant to mirror that identity,” he explains. “As third gender options are being recognized across the globe, it feels stagnant that technology is still stuck in the past only providing two binary options.

“That’s why we created Q, the world’s first genderless voice for voice AI. Created for a future where we are no longer defined by gender.”

“The project is confronting a new digital universe fraught with problems. It’s no accident that Siri, Cortana, and Alexa all have female voices — research shows that users react more positively to them than they would to a male voice. But as designers make that choice, they run the risk of reinforcing gender stereotypes: that female AI assistants should be helpful and caring, while machines like security robots should have a male voice to telegraph authority. With Q, the thinking goes, we can not only make technology more inclusive but also use that technology to spark conversation on social issues.”

To find out more about the project, watch Emil’s talk at Design Matters, co-presented with designer Jacob Ziegler.  

Counteract ingrained racial bias and lie to AI

Informed by her conversations with over 30 machine learning engineers, creative technologists, and diversity and inclusion thought leaders, Evie Cheung has found that one of the most salient and urgent AI issues is biased algorithms — particularly around the topic of race. 

“We are still living through the consequences of colonialism, in which the Western hegemony violently established power over the rest of the world,” Cheung explains. “These racial biases are thoroughly ingrained in society, and have the potential to be exacerbated by algorithms, such as in the criminal justice system. Significant problems include the lack of unbiased historical data, an unbalanced workforce, and limited user testing. These factors result in products like the automatic soap dispenser that failed to detect darker skin and Google’s image recognition algorithm that classified Black folks as gorillas.”

Cheung says that we need to acknowledge the glaring truth: history is racist because humans are racist. And thus, algorithms powered by that historical data will also be racist. 

“In the creation of AI algorithms, products, and services, designing equally for all groups is not good enough,” Cheung points out. “We need to include diverse voices who aren’t traditionally included in conversations about rising technologies. We also need to make sure that the data sets used are representative of the population that the respective algorithm will be used for.”
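Representativeness can be checked rather than just asserted. Below is a minimal sketch (my illustration, not Cheung’s) of how a team might compare a training set’s demographic mix against the population an algorithm will actually serve; the column name, group labels, and reference proportions are all hypothetical placeholders.

```python
import pandas as pd

# Hypothetical reference proportions for the target user population.
population = {"type_1_2": 0.30, "type_3_4": 0.40, "type_5_6": 0.30}

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.Series:
    """Return observed minus expected share for each group in `column`."""
    observed = df[column].value_counts(normalize=True)
    expected = pd.Series(reference)
    # Groups entirely missing from the data show up as fully under-represented.
    return observed.reindex(expected.index, fill_value=0.0) - expected

# Example: a face data set skewed toward lighter skin types.
faces = pd.DataFrame({
    "skin_type": ["type_1_2"] * 70 + ["type_3_4"] * 20 + ["type_5_6"] * 10
})
print(representation_gap(faces, "skin_type", population))
# Large negative values flag groups the data set under-represents.
```

A large gap is a signal to gather more data for the under-represented groups before training, not something to compensate for after the model has shipped.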

David Dylan Thomas agrees that any bias in AI comes from its creators. “Often these creators will try to de-bias their AI by pointing it at ‘the real world’,” he explains. “They’ll use data sets to train the AI that are based on real-world statistics. This may seem like a logical approach, but what if those data sets represent a racist world? If you were to ask an AI who is most likely to own a home based on current statistics, it will tell you ‘a white family’. If you were to ask an AI who is most likely to go to jail based on current statistics, it will tell you ‘a black man’. It’s very easy to turn that into recommendations for who should own a home or go to jail — it’s happened before.”

David suggests we need to start looking at the world we want and not the world we have when creating these data sets. 

“We have to lie to AI. Give it data sets that favor equity. That overrepresent for the underrepresented. If we don’t, we risk scaling the bias that already exists.”
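One concrete way to read “overrepresent for the underrepresented” is to rebalance the training data before a model ever sees it. The sketch below is a rough illustration of that idea in Python, assuming a hypothetical tabular data set with a “group” column; it is not code from David or from any particular fairness library.

```python
import pandas as pd

def rebalance(df: pd.DataFrame, column: str, seed: int = 0) -> pd.DataFrame:
    """Upsample every group in `column` to the size of the largest group."""
    target = df[column].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(column)
    ]
    # Shuffle so the model doesn't see the groups in blocks.
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Hypothetical loan-history data skewed toward one group.
history = pd.DataFrame({
    "group": ["a"] * 900 + ["b"] * 100,
    "approved": [1] * 800 + [0] * 100 + [1] * 30 + [0] * 70,
})
balanced = rebalance(history, "group")
print(balanced["group"].value_counts())  # both groups are now equally represented
```

Equalizing group counts is only one ingredient; reweighting examples by group and outcome together goes further, and deciding what to balance is itself a judgment about the world we want the model to learn.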

Four steps to reducing bias in AI experiences

Julie Polk, content strategist, co-founder of Rasa Advising, and currently a content lead for AI applications at IBM, has come up with four essential tips to keep in mind to combat bias in AI:

  • It’s not enough to edit your final results. No matter how many images or phrases or search results you eliminate in one instance, they’ll show up again unless you address the underlying bias that produced them. It’s like whack-a-mole without the weird furry carnival prizes.
  • Require gender-neutral language in your style guide. Institutionalize words and phrases like “Hi everyone,” instead of “Hi guys,” “Chair” instead of “Chairman,” or “first-year” instead of “freshman.” I’ve been doing this work for ten years, and I’m still amazed at how pervasive and deeply embedded the assumption of male-as-neutral is. These seem like small changes, but taken together, they shift the entire context of our cultural conversation. (A simple automated check, sketched after this list, can help enforce rules like these.)
  • Vet your data. Garbage in, garbage out, always and forever. So dig around into how your data was generated before you build on it. If it’s research, who conducted it? Why? Who funded it? Who were the subjects? How were they chosen? What was the sample size? If it’s historical data, who does it include? More importantly, who does it exclude?
  • Don’t get sucked into solutions at the expense of inclusion. The speed and power of AI are seductive; anyone with a laptop, a skill set, and a creative mind can change how we live almost overnight. But nothing — so far, at least — can replace the human ability to understand the nuances of…well, of being human. And the biggest, shiniest solution, no matter how well-intentioned, isn’t a solution at all if it leaves damage in its wake.
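To make the second tip operational, a style-guide rule can be backed by a small automated check over UI copy. The sketch below is a hypothetical example of mine, not something Polk prescribes, and the term list is illustrative rather than exhaustive.

```python
import re

# Hypothetical mapping of gendered defaults to preferred neutral terms.
PREFERRED = {
    "guys": "everyone",
    "chairman": "chair",
    "freshman": "first-year",
    "mankind": "humankind",
}

def flag_gendered_terms(text: str) -> list[tuple[str, str]]:
    """Return (flagged term, suggested replacement) pairs found in a piece of copy."""
    hits = []
    for term, suggestion in PREFERRED.items():
        if re.search(rf"\b{term}\b", text, flags=re.IGNORECASE):
            hits.append((term, suggestion))
    return hits

print(flag_gendered_terms("Hi guys, the chairman will address the freshman class."))
# [('guys', 'everyone'), ('chairman', 'chair'), ('freshman', 'first-year')]
```

Run as part of content review, a check like this catches gendered defaults before they ship, while the style guide itself explains why the replacements matter.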

Self-regulate to reduce consumer harm

Removing bias in AI and preventing it from widening the gender and race gap is a monumental challenge, but it’s not impossible. From the Algorithmic Justice League to the first genderless voice for virtual assistants, there are many excellent projects with the common goal of making AI fairer and less biased. But we need to work together, and if we include AI in a digital product, it’s every stakeholder’s responsibility to ensure it doesn’t discriminate or harm people. As Evie Cheung says, “We must stay vigilant about the unintended consequences of the design decisions we make in AI-powered products.” Only then will we be able to maximize AI’s true potential to transform our lives.