At the outset of my first full-time job post-college, I took an online intro to graphic design course. The course mostly summarized functional and stylistic aspects of design—leading and kerning, gestalt principles, color theory—but it also included a section centered on what I’ll call “design ethics.”
Good design, I learned, has a clarifying purpose. It displays information in a manner that helps a viewer quickly recognize what it means and understand how to interact with it: a pull-door with a pull-handle, for example. Bad design, on the other hand, confuses viewers by arranging information in a manner that fails to communicate its purpose or importance: a push-door with a pull-handle.
Of course, rarely are things that simple. A shadowy third category lurks beyond the intentionally good and the mistakenly bad. This, the course explained, is the intentionally bad, the technically effective but ethically dubious, the ominously labeled “dark patterns.”
Dark patterns, now dubbed “deceptive patterns,” are user interfaces purposely designed to manipulate or coerce people into taking—or not taking—a specific action. Think of pop-up ads where the “X” to close the window is obscured by a background of the same color, making viewers more likely to click on the ad itself. Or auto-resetting countdown timers on clothing websites that make shoppers feel like they have to purchase an item “NOW!” or risk losing a hot deal. To return to my door example, a dark pattern is a locked door with a note appended to it reading, “Meet me by the water cooler with $100 and we’ll discuss whether you can leave.”
These practices are as unprofessional as they are unethical. In theory, companies that employ them do so at their own reputational peril, as people fooled once won’t want to be fooled again.
In practice, deceptive patterns abound. And they’re not only prevalent in the internet’s most cobwebbed corners: They’re everywhere, perpetuated by the world’s biggest, most successful companies.
Take Google, for example. If you’ve ever accidentally clicked “visit site” on an email ad you intended to delete, don’t blame yourself: Gmail conveniently places that button in the same location as the “delete” button on non-ad emails in the Promotions tab, so if you’re on an item-by-item deleting spree, you can’t miss.
YouTube, owned by Google, opts for a more toddlerian strategy: Withhold a nice thing until viewers do what it wants. If you turn off your “watch history” setting, your home screen—once full of videos—becomes a blank page with one glaring message: “Your watch history is off. You can change your settings at any time to get the latest videos tailored to you.” This is followed by a button that simply reads, “update setting.”
Of course, YouTube could populate the homepage of viewers who opt out of personalized content with videos from creators to whom they subscribe or the day’s most popular videos—but that would give it less leverage for siphoning your sweet, sweet data.
There is a case to be made that we should expect to encounter practices like these when using services we don’t pay for. In the 2020 documentary “The Social Dilemma,” Google design ethicist turned bearer of bad tech news Tristan Harris drove home the point that if we’re not paying for the product, we, ourselves, are the product. It’s how the ad-based business model works, and it’s most obvious on social media, where companies specialize in gaming the attention of the users who bring them big bucks by clicking on stuff.
But let’s put aside the dystopian implications of users being targeted by big tech via content feeds for a moment. What about the products and services we do pay for? Surely these are safe from manipulative practices.
Unfortunately, not necessarily. Even here, deceptive patterns have encroached, perhaps as a result of companies catching on to the success of these tactics in ad-based arenas.
One popular tactic: imposing up-front costs that lead to future hidden costs. This is prevalent in the video game industry, where gamers who pay up front for games are often nevertheless subjected to in-game microtransactions—nominally optional purchases that in some cases impact players’ ability to remain competitive within the game.
Another lucrative strategy: designing products that are easily breakable or difficult for the layman to repair. Independent repairman and right-to-repair advocate Louis Rossmann regularly rails against Apple for this reason, drawing attention to design elements like the storage drives in Apple laptops. The drives are soldered onto the motherboard instead of being removable, meaning that if the drive fails, the computer does too. A video from right-to-repair YouTuber Hugh Jeffreys reveals that the latest iPhone punishes even those who try to repair their device using Apple’s own parts: Its software is programmed to detect parts not native to the original phone and turn off core features.
What does this mean for Apple customers? Likely, consulting a “genius” who, in their infinite wisdom, can inform them it’s time to buy a new Apple product.
How do companies get away with behaviors so obviously designed to turn their customers upside down and shake them for spare change? Simply put, customers keep buying. According to Yahoo Finance, microtransactions “account for nearly 30% of the gaming industry.” According to Statista, 67% of 18- to 29-year-olds with smartphones own iPhones.
Maybe this is because today’s biggest brands have created their own matrices, self-contained ecosystems wherein users find a sense of identity or community by continuing to buy in: having the same phone or game as their peers, for example. Our world may be increasingly globalized, but human beings are as tribal as we’ve ever been—if the village is global, we want to be part of it, and buying into global brands may seem like a good way to do it. The catch: Once brand loyalty is established, no purchase is simply a purchase. Instead, it’s a doorway to another purchase, each one making customers more dependent on the company that grants entry, which then has the leverage to squeeze a little more. The logical endpoint of this phenomenon is that products, once owned by us, own us.
“We don’t check our messages anymore: the messages notify us,” wrote Timandra Harkness in UnHerd. As technological capabilities grow and more of the items we own connect to the internet and to one another, the potential for complex, alluring webs of deceptive patterns grows with them, foisting this backward dynamic onto more areas of our lives.
Preserving our autonomy in an attention-addled world promises to be a daunting challenge, not least because it requires becoming incredibly careful about what we buy—and what messaging we buy into.
Consider, for instance, the undying trend of appending the word “smart” to names of common items connected to the internet: “Smart-phone,” “smart-watch,” “smart-fridge,” “smart-home.” The implication is clear. Smart products are for smart people—people who are up-to-date on the latest trends, in the know about remaining relevant in a rapidly changing society. And if you’re not smart, you’re, well … you know.
This characterization is perfectly poised to prick at our anxieties by encouraging us to identify with the products we own and the services we subscribe to. More insidiously, it raises the possibility that if our products and services become outdated, we, ourselves, will cease to have purpose.
While I’m uncertain how to disrupt the encroaching hegemony of the Googles and Apples of the world on a policy level, individually we might start by rejecting the narrative they sell. This means cultivating a sense of belonging untethered to the products we buy—perhaps through directing our attention to our families, friends, and local communities. With a firm rootedness in real-world relationships, we may simply be less susceptible to these deceptive patterns. From a place of internal abundance, we may have the wherewithal to banish the worst products and services from our lives and maintain a proper emotional distance from the rest.
We might embrace dumb-phones, for instance. We might decline offers to “personalize our experience” and refuse to let home-assistant products assist us into incompetence. We might neglect to upload the most intimate moments of our lives to Instagram, or to purchase the latest and greatest smart-device just because someone somewhere called it the “latest and greatest.” We might trade needing more for wanting what we already have.
Maybe we accept the products we think we deserve. Maybe changing the status quo begins with realizing we deserve better.