Illustration by Nayane de Souza Hablitzel

Bojana Coklyat was surfing the web when she happened upon The Deep Sea. A digital journey into the depths of ocean life, the interactive website starts near the water’s surface, where manatees and clownfish live. Scroll down, and as the color of the water darkens, new creatures—the chain catshark, the terrible claw lobster (its real name)—appear. Before you’re even one full nautical league under the sea, you’ve passed the deepest point of exploration ever reached by a human scuba diver. It’s a spine-tingling pitch black down there, home to bioluminescent anglerfish and the transparent telescope octopus.

Coklyat absorbed all of this without technically seeing it. “As you scroll down” (in Coklyat’s case, via Apple’s VoiceOver screen reader) “you have alt text saying, ‘this is a so-and-so fish,’ ‘there’s a gradation of color that’s changing,’ and, ‘you’re this many feet below the water,’” she says. “My imagination was so captured. People forget about how much imagination matters for [low-vision experiences], but this designer was really thinking about how to make this immersive.”

An accessibility consultant for arts organizations, Coklyat knows that thoughtful alt text is the exception, not the rule. To navigate the web, blind and vision-impaired people rely on screen readers, which narrate text and need alt text—a layer of metadata invisible to sighted users—to describe images, buttons, and graphics. In a perfect version of the internet, every website’s visuals would translate elegantly through a screen reader, so that vision-impaired users could listen to a clear, delightful version of what sighted users see. The internet being an imperfect place, that doesn’t happen often. Content creators don’t always consider screen reader users, and many websites aren’t designed to accommodate the technology.
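To make that layer of metadata concrete, here is a minimal sketch in browser-side TypeScript; the filenames and descriptions are hypothetical, and exactly what gets announced varies by screen reader.

```typescript
// A minimal sketch of the metadata layer described above.
// Filenames and descriptions are hypothetical; behavior varies by screen reader.

// With descriptive alt text, a screen reader has something meaningful to announce.
const described = document.createElement("img");
described.src = "chain-catshark.jpg";
described.alt = "A chain catshark resting on the dark sea floor";

// With no alt text at all, many screen readers fall back to the filename
// or simply say "image," which tells a non-sighted visitor almost nothing.
const undescribed = document.createElement("img");
undescribed.src = "IMG_4021.jpg";

// A purely decorative image can carry an empty alt attribute so assistive
// technology skips it instead of adding noise to the narrated page.
const decorative = document.createElement("img");
decorative.src = "divider-wave.svg";
decorative.alt = "";

document.body.append(described, undescribed, decorative);
```

The empty string on the decorative image is deliberate: it tells assistive technology “there is nothing to describe here,” which is different from leaving the attribute off entirely.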

Illustration for the New York Times article “What It’s Like to Use Facebook When You’re Blind” (Jan. 17, 2020). Image credit: Ben Barry.

That’s not to say digital accessibility hasn’t made significant strides. The Web Content Accessibility Guidelines, first published in 1999, are updated periodically and offer a free blueprint for building a more usable website. News organizations like Fast Company and the New York Times are publishing op-eds that push for a more equitable internet. Conferences (like CSUN’s Assistive Technology Conference) and countless consultancies have sprung up to help companies design for low-vision or hearing-impaired users, and big tech companies are releasing new tools that allow more users to access more information. Last fall, Google Accessibility released a spate of apps and features. One of them integrates detailed voice guidance and announcements into Google Maps, so that those with vision impairment can walk to a destination without taking a wrong turn or stepping into a busy intersection.

Google Assistant Accessibility

When companies don’t deliver

But for every new app, there’s a legion of companies not even bothering. “You might want to start with the Domino’s Pizza case,” says computer scientist Jonathan Lazar, referring to a highly publicized recent court case in which a blind man sued Domino’s for not making its website or app accessible to screen reader users. In an effort to avoid regulation and, essentially, more UX work, Domino’s kept the case going, arguing that the Americans with Disabilities Act doesn’t apply to online spaces. The Supreme Court ultimately denied Domino’s petition.

“The obvious thing here is that it would have been much cheaper to just make the website accessible,” says Lazar, who’s also associate director of the Trace Research and Development Center at the University of Maryland.

“People often say accessibility is expensive. That’s because if you’re going back to retrofit something, that’s expensive. But if we can initially involve people with disabilities in the process, then costs are much smaller,” he adds.

Websites could also work better—for everyone. Picture a typical website, and the cacophony of banner ads, video pop-ups, and competing links that crowd it. If a sighted user has a hard time parsing that noise, imagine the page once it’s translated into literal noise, in the form of narration from assistive technology. Even if each of those components gets properly coded with alt text (and that’s a big if; in a more typical experience, links and buttons will sound off as “link” and “button”), they disrupt the overall experience. But someone who understands that perspective could steer a design team in another direction. Lazar says designers and developers would do well to operate this way, by asking themselves not just how to make information more accessible, but how to make it more flexible. “It’s not always as straightforward as you might think,” he says. “Think about auto-saving your password, or autocorrect—those are accessibility features.”
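As a rough illustration of the “link” and “button” problem, here is another small browser TypeScript sketch; the icon class, the label text, and the share handler are all invented for the example.

```typescript
// A sketch of the unlabeled-control problem described above.
// The class name, label, and handler are hypothetical.

// Placeholder for whatever the control actually does.
function shareArticle(): void {
  console.log("sharing the article");
}

// An icon-only button with no accessible name: most screen readers
// can announce it only as "button," with no hint of what it does.
const unlabeled = document.createElement("button");
unlabeled.className = "icon-share";
unlabeled.addEventListener("click", () => shareArticle());

// The same control with an accessible name: assistive technology can now
// announce something like "Share this article, button."
const labeled = document.createElement("button");
labeled.className = "icon-share";
labeled.setAttribute("aria-label", "Share this article");
labeled.addEventListener("click", () => shareArticle());

document.body.append(unlabeled, labeled);
```

Visible text is usually the better fix, since it helps every user, but even a label like this gives the narrated page the same information the icon gives a sighted visitor.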

Making accessibility a company-wide priority

For now, though, thoughtfully implemented alt text is still the best bet for making websites work for those with impaired vision. That said, figuring out which sites have alt text is like reaching into a grab bag: you just never know. Coklyat recalls exploring the Whitney Museum of American Art’s online exhibits. The pieces in Vida Americana, the museum’s current show of Mexican muralists and their influence on American art, came with evocative alt text, while the Whitney’s permanent collection had none. “It makes me think about how decentralized these things are,” Coklyat says. “Whoever is curating that one exhibit got the message, but the people who worked on the permanent collection somehow didn’t.”

That randomness gets multiplied on image-heavy social media sites, where different users—with different levels of accessibility awareness—create each post. Tech companies can influence how users approach alt text, simply by designing interfaces that make alt text fields more prominent. Companies accept that responsibility to varying degrees: Facebook uses facial recognition and automatic alt text to caption photos, and Instagram offers a clear field for writing alt text (that is, if you know to click through the advanced settings). Meanwhile, on Reddit, bands of volunteers add elaborate alt text descriptions to bring images and memes to life for non-sighted Redditors.

As a baseline measure, writing any alt text at all opens content up to more users. But the quality of that writing matters, too. Imagine perusing Instagram through a screen reader, and hearing an image described as “a lake,” versus “kayaks float across a bright blue lake on a sunny day.” Longer alt text captions aren’t necessarily better, but carefully worded ones can offer crucial context. Coklyat recalls reading an article on Bitch Media in which alt text described “a white hand, holding a such-and-such”—a detail that brought part of the subject’s identity into focus.

On her digital tour of the Whitney, Coklyat says, Vida Americana’s alt text captions didn’t just exist; they captured the spirit of the artwork. She compares it to The Met’s online experience, which includes alt text but keeps the tone cut-and-dried. “The Met is like ‘this is a vase, the vase is on a table.’ It’s an old school way of doing alt text that doesn’t want to influence you,” she says. “With the Whitney, there was an imbued feeling. I got a sense of the lushness of the person holding the flowers in a Diego Rivera painting. It doesn’t have to just be a chore or a compliance thing; alt text can also be a creative layer of information.”