A few months ago, the world was shocked to learn that Cambridge Analytica had improperly used data from a harmless-looking personality quiz on Facebook to profile and target the wider audience on the platform with advertisements to persuade them to vote a certain way. Only part of the data was obtained with consent (!), the data was stored by the app creator (!!), and it was sold to Cambridge Analytica in violation of terms of use (!!!). This was an example of black hat design — a deceptive use of persuasion tactics, combined with unethical use of personal information.

On the other hand, the last time you shopped on eBay, you may have noticed the use of multiple design elements encouraging you to buy an item (“last item”, “3 watched in the past day”). While these design techniques are used to persuade users, they are usually not deceptive, and are considered white hat techniques.

A third example comes from Google I/O 2018 last month, when the world heard Google Duplex make a call to a salon for an appointment and carry on a fluent conversation, mimicking human mannerisms so well that the person on the other end did not realize she was talking to a machine. The machine did not misrepresent itself as human, nor did it identify itself as a machine, which, in my book, puts it in a gray area. What’s stopping this from being used in black hat design in the near future?

As you see from the three examples above, the use of persuasive tactics can fall anywhere on a spectrum from black hat at one extreme to white hat at the other, with a large fuzzy gray area separating the two. In today’s world of online and email scams, phishing attacks, and data breaches, users are increasingly cautious of persuasive tactics being used that are not in their best interest. Experience designers, developers, and creators are responsible for making decisions around the ethical nature of the tactics we use in our designs.

This article will present a brief history of persuasion, look at how persuasion is used with technology and new media, and present food for thought for designers and developers to avoid crossing the ethical line to the dark side of persuasion.

History of persuasion

Persuasion tactics and techniques are hardly new — they have been used for ages. Aristotle’s Rhetoric, written over 2,000 years ago, is one of the earliest documents on the art of persuasion. The modes of persuasion Aristotle presented were ethos (credibility), logos (reason), and pathos (emotion). He also discussed how kairos (the opportune moment) is important for the modes of persuasion. Fast forward to today, and we see persuasion methods used in advertising, marketing, and communication all around us. When we try to convince someone of a point of view or win that next design client or project, chances are we are using persuasion, a process “by which a person’s attitudes or behavior are, without duress, influenced by communications from other people” (Encyclopedia Britannica).

While Aristotle first documented persuasion, Robert Cialdini’s “Influence: The Psychology of Persuasion” is more commonly referenced when talking about modern persuasion. According to Cialdini, there are six key principles of persuasion:

  1. Reciprocity. People are obliged to give something back in exchange for receiving something.
  2. Scarcity. People want more of those things they can have less of.
  3. Authority. People follow the lead of credible, knowledgeable experts.
  4. Consistency. People like to be consistent with the things they have previously said or done.
  5. Liking. People prefer to say yes to those that they like.
  6. Consensus (social proof). Especially when they are uncertain, people will look to the actions and behaviors of others to determine their own.

We have all been exposed to one or more of these principles, and may recognize them in advertising or when interacting with others. While persuasion itself has been around for ages, what is relatively new is the application of persuasion techniques to new technology and media. This started off with personal computers, became more prominent with the internet, and is now pervasive with mobile devices.

Persuasion through technology and new media

Behavior scientist B.J. Fogg is a pioneer when it comes to the role of technology in persuasion. Over two decades ago, he started exploring the overlap between persuasion and computing technology. This included interactive technologies like websites, software, and devices created for the purpose of changing people’s attitudes or behaviors. He referred to this field as “captology,” an acronym based on computers as persuasive technologies, and wrote the book on it, “Persuasive Technology: Using Computers to Change What We Think and Do.”

Captology describes the shaded area where computing technology and persuasion overlap (recreated from BJ Fogg’s CHI 98 paper “Persuasive Computers”).

Interactive technologies have many advantages over traditional media because they can respond and adapt to each user. They also have advantages over human persuaders: they can be more persistent (e.g., software update reminders), offer anonymity (great for sensitive topics), can access and manipulate large amounts of data (e.g., Amazon recommendations), can use many styles and modes (text, graphics, audio, video, animation, and simulations), can easily scale, and are pervasive. This last advantage is even more pronounced today, with mobile phones being an extension of our arms, and with smart devices, embedded computing, IoT, wearables, augmented and virtual reality, and AI-powered virtual assistants embedded in anything and everything around us. In addition, today’s technology lets us time and target moments of persuasion for maximum impact, since it is easy to know a user’s location, context, time, and routine, and to give them the ability to act immediately. This could be a reminder from your smartwatch to stand or move, or an offer from the coffee shop when you are a few blocks away.

Ethics, new technology, and interactive media

The use of persuasion in traditional media has raised ethical questions for decades. New media and pervasive technology raise even more, partly because of the advantages pervasive technology has over traditional media and over human persuaders. Anyone using persuasive methods to change people’s minds or behavior should have a thorough understanding of the ethical implications and impact of their work. One of the key responsibilities of a designer during any design process is to be an advocate for the user. This role becomes even more crucial when persuasion techniques are intentionally used in design, since users may be unaware of the persuasion tactics. Even worse, some users may not be capable of detecting these tactics, as may be the case with children, seniors, or other vulnerable users.

BJ Fogg provides six factors that give interactive technologies an advantage over users when it comes to persuasion:

  1. Persuasive intent is masked by novelty. The web and email are no longer novel, and most of us have wised up to deceptive web practices and the promises of Nigerian princes, but we still find novelty in new mobile apps, voice interfaces, AR, and VR. Not too long ago, the Pokémon Go craze raised many ethical questions.
  2. Positive reputation of new technology. While “It must be true — I saw it on the internet” is now a punchline, users are still being persuaded to like, comment, share, retweet, spread challenges, and make fake news or bot-generated content viral.
  3. Unlimited persistence. Would you like a used car salesman following you around after your first visit, continually trying to sell you a car? While that thankfully does not happen in real life, your apps and devices are with you all the time, and their dings and glowing screens can persistently persuade you, even at times and in places that may be otherwise inappropriate. This past Lent, my son took a break from his mobile device. When he turned it on after Easter, he had hundreds of notifications and alerts from one mobile game, offering all sorts of reminders and incentives to come back and use it.
  4. Control over how the interaction unfolds. Unlike human persuasion, where the person being persuaded has the ability to react and change course, technology offers predefined options, controlled by the creators, designers, and developers. When designing voice interfaces, creators have to define what their skill will be able to do, and respond to everything else with a “Sorry, I can’t help with that.” Just last month, a social network blocked access to its mobile website, asking me to install its app to access the content, without an escape or dismiss option.
  5. Can affect emotion while still being emotionless. New technology doesn’t have emotion. Even with the recent advances in artificial intelligence, machines do not feel emotion like humans do. Going back to the Google Duplex call mentioned at the beginning: issues can arise when people are not aware that the voice at the other end belongs to an emotionless machine and treat it as another person just like them.
  6. Cannot take responsibility for negative outcomes of persuasion. What happens when something goes wrong, and the app or the technology cannot take responsibility? Do the creators shoulder that responsibility, even if their persuasion strategies have unintended outcomes, or if misused by their partners? Mark Zuckerberg accepted responsibility for the Cambridge Analytica scandal before and during the congressional hearings.

With these unfair advantages at our disposal, how do we as creators, designers, and developers make ethical choices in our designs and solutions? For a start, take a step back and consider the ethical implications and impact of our work, and then take a stand for our users. Many designers are pushing back and being vocal about the ethically questionable nature of some tech products and designs. There’s Tristan Harris, a former Google design ethicist, who has spoken out about how tech companies’ products hijack users’ minds. Sean Parker, Napster founder and former president of Facebook, described how Facebook was designed to exploit human “vulnerability.” And Basecamp’s Jonas Downey ruminates on how most software products are owned and operated by corporations whose business interests often contradict their users’ interests.

Design code of conduct

AIGA, the largest professional membership organization for design, has a series on Design Business and Ethics. “Design Professionalism” author Andy Rutledge also created a Code of Professional Conduct. Both are very detailed and cover the business of design, but neither specifically addresses the ethics of design that impacts or influences human behavior. Other professionals who affect the human mind do have such ethical principles and codes of conduct, like those published by the American Psychological Association and the British Psychological Society. The purpose of these codes is to protect participants as well as the reputation of psychology and psychologists themselves. When using psychology in our designs, we could examine how the ethical principles of psychologists apply to our work as creators, designers, and developers.

Principles and questions

Using the Ethical Principles of Psychologists as a framework, I defined how each principle applies to persuasive design and listed questions related to ethical implications of design. These are by no means exhaustive, but are intended to be food for thought in each of these areas. Note: when you see ‘design’ in the questions below, it refers to persuasive techniques used in your design, app, product, or solution.

Principle A: Beneficence and nonmaleficence

Do no harm. Your decisions may affect the minds, behavior, and lives of your users and others around them, so be alert and guard against misusing the influence of your designs.

  • Does your design change the way people interact for the better?
  • Does the design aim to keep users spending more time than they intended?
  • Does the design make it easy to access socially unacceptable or illegal items that your users would not have easy access to otherwise?
  • How may your partners (including third party tools and SDKs) or “bad guys” misuse your design, unknown to you?
  • Would you be comfortable with someone else using your design on you?
  • Would you like someone else to use this design to persuade your mother or your child?

Principle B: Fidelity and responsibility

Be aware of your responsibility to your intended users, unintended users, and society at large. Accept appropriate responsibility for the outcomes of your design.

  • During design, follow up answers to “How might we…?” with “At what cost?”
  • What is the impact of your design/product/solution? Who or what does it replace or impact?
  • If your design were used opposite from your intended use, what could the impact be?
  • Does your design change social norms, etiquette, or traditions for the better?
  • Will the design put users in harm’s way or make them vulnerable, intentionally or unintentionally (Study Estimates That Pokémon GO Has Caused More Than 100,000 Traffic Accidents)? How can it be prevented?

Principle C: Integrity

Promote accuracy, honesty, and truthfulness in your designs. Do not cheat, misrepresent, or engage in fraud. When deception may be ethically justifiable to maximize benefits and minimize harm, carefully consider the need for, the possible consequences of, and be responsible for correcting any resulting mistrust or other harmful effects that arise from the use of such techniques.

  • Do you need users’ consent? When asking for their consent, are they aware of what exactly they are consenting to?
  • What’s the intent of the design? Is it in the best interest of the user or the creator? Are you open and transparent about your intentions?
  • Does your design use deception, manipulation, misrepresentation, threats, coercion, or other dishonest techniques?
  • Are users aware or informed if they are being monitored, or is it covert?
  • Is your design benefiting you or the creators at the expense of your users?
  • What would a future whistleblower say about you and your design?

Principle D: Justice

Exercise reasonable judgment and take precautions to ensure that your potential biases and the limitations of your expertise do not lead to or condone unjust practices. Your design should benefit both the creators and users.

  • Does your design contain any designer biases built in (gender, political, or other)?
  • Does your design advocate hate, violence, crime, or propaganda?
  • If you did this in person, without technology, would it be considered ethical?
  • What are the benefits to the creators / business? What are the benefits to the users? Are the benefits stacked in favor of the business?
  • Do you make it easy for users to disconnect? Do users have control and the ability to stop, without being subject to further persuasion through other channels?

Principle E: Respect for people’s rights and dignity

Respect the dignity and worth of all people, and the rights of individuals to privacy and confidentiality. Special safeguards may be necessary to protect the rights and welfare of vulnerable users.

  • Are your designs using persuasion with vulnerable users (children, seniors, the poor)?
  • Does your design protect users’ privacy and give them control over their settings?
  • Does the design require unnecessary permissions to work?
  • Can your design use a less in-your-face technique to get the same outcome (e.g., speed monitors on roads instead of surveillance)?
  • Does your design make your users a nuisance to others? How can you prevent that?

Conclusion

If you have been designing with white hat techniques, you may appreciate the ethical issues discussed here. If, however, you have been designing in the gray or black areas, thank you for making it all the way to the end. Ethics matter in persuasive design because ethical designs do not prey on the disadvantages users face when interacting with technology. As creators, designers, and developers, we have a responsibility to stand up for our users.

Do good. Do no harm. Design ethically.
