Illustration by Tracy Dai.
For years I have been observing and writing about the rise of our digital identities: their increasing fidelity, along with our growing dependence on this virtual presence to exist in the world. We rely on our digital selves to communicate with others, to form our offline identities, and to understand each other and the world around us. We are entering an era in which human identity becomes an inseparable composite of the digital and the physical. So what does the world look like after those selves have merged?
This symbiotic human-machine, something I call the Meta Me, is the co-mingling of our digital and analog selves. It takes the modern human being and attaches a digital construct, resulting in a unified, fully integrated digital self. Carried to the ideal, it becomes a studied, intentional extension of ourselves—an extension that we’ve actively invested in. You can instrument your digital self to be the very thing you need to be at any time: the funniest guy at the party, the best conversationalist in an interview, the most charming version of yourself on a first date.
A digital mirror
I chose that word, “meta,” because as I see it, this growing digital identity is not simply an additional, pixelated layer to our self—it is our new greater self.
When we say “digital identity,” there’s a separateness implied. It suggests there is an actual identity, the one we carry around in flesh and bones, and another, digital version. Until recently, that was a fair way to look at how digital technology shaped our lives. But given the social changes accelerated by the current pandemic, thinking of these two sides of the self as separate suddenly feels quaint.
For anyone navigating social isolation, there is no longer a separate native self that can realistically be considered independent of the digital self.
I can’t speak to 99% of the people I know without a digital conduit. I can’t understand 99% of the world around me outside of the digital systems that deliver that information. Those channels are now my eyes and ears to the world. What digital systems afford us, good and bad, now deeply shapes how we understand ourselves and the rest of the world.
We didn’t have to wait for a global pandemic for this to happen. We were already well on our way to enriching, or even replacing, much of our interface to the world with digital intermediaries. We get our news, entertainment, food, work, even driving directions from mediated systems like Facebook and Twitter. You’ve heard this story before. It’s obvious that these systems influence us. Most of the talk about “smart” computers or digital assistants is about knowledge delivery to a person. But what if the AI is actually just a digital extension of you? If you buy the premise that we know the world and are known by the world through these systems, then to an increasing degree, these systems are us.
The point sounds abstract even as I write it (“these systems are us”), yet the impact of this idea is increasingly visceral. In 2017, the Department of Homeland Security confiscated an American couple’s mobile devices and demanded their passwords at the Canadian border, claiming it was the government’s right to search belongings if it had probable cause to do so. The ACLU sued on behalf of the couple, arguing that a phone is a far more personal item than a bag or suitcase, and that a person’s digital life, as represented on their phone, should fall under the privacy rights provided by the Fourth Amendment. It’s a wild early test of the premise that these systems are not just external references. They are us. And if that rings true, we should tend to their rights, and care for them, as we do our traditional idea of self.
The notion that externalities define us is not new. In 1890, the philosopher William James wrote that “my experience is what I agree to attend to.” Even then we defined ourselves by the experiences and sources of information we chose, along with the means of expression available to us. But James did not envision today’s world.
My experience is what Google decides I should see, what Facebook’s algorithm delivers, what the thousands of data brokers that have compiled my digital identity allow me to be.
In a digitally driven world, only part of me is the immutable flesh and mind. The systems owned by other parties define much of me—and you.
Built to be bought and sold
What does it mean to be defined, even owned (albeit digitally), by a handful of companies? At the moment, the systems have largely been built with a singular purpose—to make us shop. Our online personas are fuel for an advertising behemoth that views us as a commodity. More generously, it is focused on our role in an economy. As the artist and computer scientist Jaron Lanier so bluntly stated, “we are the product.”
We are given access to information and interfaces in exchange for being watched, analyzed, and sold to. At first it was a bargain. The clumsiness of early systems masked the cost of this exchange. It also financed investment in systems that would have been impossible to sell directly to users; think, for example, of the cost of launching satellites into space and building massive cloud systems in order to create Google Maps. But this exchange is now out of balance.
Online platforms are training our society to think of an individual’s core role as that of a consumer in the economy.
Even the term “consumer” has become a commonly accepted way to talk about “people” in the collective sense. I remember reading early editions of Wired magazine where every issue seemed to breathlessly describe how digital would drive innovations in society. We got lost along the way.
Despite the problems, there is no going back. Digital systems, that is, the platforms and the companies that own them, will continue to define us. And the scale and complexity of those systems mean they will be created and managed by a handful of large companies. Simply put, they will continue to own our digital identities. But this should be viewed as a trough to cross, rather than a mistake that needs undoing. What we can hope for (no, what we must demand) are systems and algorithms that work in our interest, not in service of the motives of advertisers or governments.
A future you
A fully formed Meta Me, working for me, could be a beautiful and useful extension of our humanity. We talk of personal assistants. Imagine that assistant is a near-perfect digital expression of you: a digital you to shop, seek out love and friendship, and look for the next stage in your career. It is a decision-forecasting engine that can sample any number of possible futures for you, modeled on your own needs. Picture your nervous self on a date or in a job interview. Imagine that digital self working diligently to supply you with just the right conversation, insights, and knowledge. A little creepy? Certainly, in some ways. But what is a calculator or a quick Google search in our hands but an extension of our own minds? What I imagine is the same, but finely tuned to our own selves.
Taking back our identity will require finding better ways to pay for it—getting beyond an ad-driven world. Yet once we finally take ownership of our digital self, we will find it offers us advantages that connect with our real lives.
Digital identity systems can offer value beyond shopping and the performative nature of social media.
Such a system can serve up content to fulfill personal goals: perhaps to learn more, succeed at work, or be healthier and happier. Offline, we might establish the trust to allow civic-scale data modeling of ourselves, so that this digital identity actively supports how we work, travel, and feel safe in the larger public sphere. Right now, concerns about personal privacy and the fear of government control keep this topic off limits, foreclosing more creative possibilities.
If half of my life is online (especially at this time of social distancing, when it feels like far more than half), then I must own and control my digital self. The Meta Me should belong to me. In the modern, digitally fueled world, it is me.