By Ari Ezra Waldman
The asymmetry between our stated privacy preferences and our actual disclosure behavior is called the “privacy paradox.” In surveys and interviews, we say we care about our privacy and profess a desire to limit our disclosures. But our observed behavior is quite different: we share highly intimate information for meager or no rewards—for $3 Starbucks gift cards, minor convenience, or just because a website asked.
Explaining the “privacy paradox” is a fundamental quest of privacy scholars. If we can’t, not only will we all be unemployed, but we will be unable to persuade policymakers to take privacy seriously. No one seems to care, they will say. Maybe we should just “get over it.”
I disagree. The “privacy paradox” is only a paradox if we assume that people are rational, or at least that they make disclosure decisions rationally. But we aren’t rational sharers. People do care about their privacy, but platforms, websites, and apps leverage design tactics that make it difficult for us to realize our privacy preferences. Designers take advantage of our cognitive limitations and trick us into disclosing more than we might want. The “privacy paradox,” therefore, isn’t a reason for law to ignore privacy. Quite the opposite. The misuse of design to manipulate us into disclosing personal information is a reason to create robust regulatory levers to rein in corporations’ predatory anti-privacy behavior.
The Myth of Rational Sharing
Much of the traditional economic literature on privacy and disclosure presumed that we disclose rationally. Researchers designed studies on the assumption that we decide to share after rationally weighing the pros and cons. The rational model became so pervasive that it now dominates privacy law in the United States. Websites are required to provide notice to users about their data use practices so users can make their own decisions about whether to share information, buy a product, or use a platform. This approach to the legal relationship between platforms and users is, quite appropriately, called notice and choice.
The problem is that the rational sharer is a myth, and one that social scientists debunked long ago. Acquisti and Grossklags identified some of the cognitive limitations that hinder rational choice. Acquisti, John, and Loewenstein showed that our propensity to share personal information is influenced by context, including knowledge of other people’s sharing behavior. John also showed that disclosure is correlated with a website’s aesthetics, among other contextual factors. Others have found that emotional cuing changes disclosure behavior. We are not rational in the neoclassical sense.
Cognitive Biases
Indeed, we are limited by myriad cognitive biases that prevent us from acting rationally even if we wanted to. And we face these barriers every day. We experience anchoring when we feel elated that we negotiated to buy a car for $1,000 less than the inexplicably outrageous sticker price we saw first. At the drug store, we face the problem of overchoice when we try to buy a tube of toothpaste from shelves with hundreds of options, and we give up and take the one we’ve always used when we find it too difficult to make a real choice. We exhibit temporal myopia when we make significant commitments far in the future, whether it’s agreeing to write a book due in a year or promising to help someone move out of a four-story walk-up when their lease is up. We are constrained by hyperbolic discounting when we inadequately appreciate the long-term, hypothetical future risks of poor oral care and fail to make regular dental appointments. And we buy more high-fat, high-calorie bacon when its label says “gluten free” (even though bacon is naturally gluten free) because the label frames the product as a positive for healthy living.
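For readers who want the formal version of that last bias, hyperbolic discounting is usually contrasted with the exponential discounting a neoclassically rational actor would apply. The formulation below is the standard textbook one, not something drawn from the studies cited here, and the symbols are my own shorthand.

```latex
% Standard textbook formulations (illustrative; not from the cited studies).
% A = value of a future reward or harm, D = delay before it arrives,
% r and k = discount parameters.
\[
  V_{\mathrm{rational}} = A\,e^{-rD}
  \qquad \text{vs.} \qquad
  V_{\mathrm{actual}} = \frac{A}{1 + kD}
\]
% The hyperbolic curve drops steeply at first, so distant, hypothetical harms
% barely register against an immediate, concrete reward.
```

Because the hyperbolic curve falls off so steeply at the start, distant and hypothetical harms (a cavity next year, a data breach someday) barely register next to a gift card today.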
These barriers are recreated, and in some cases metastasize, online. We are anchored by what we see others sharing. Facebook tells us how many of our friends liked or commented on a post immediately before and immediately after the post itself, triggering our social itches and anchoring our expectations of when it’s safe to share. Notably, Facebook does not tell us how and when those friends use privacy settings and other tools to restrict the spread of information, thus creating a one-way ratchet that goads us into sharing more.
Websites also routinely frame privacy choices as “disclosure is good” and “privacy is bad.” In the run-up to the effective date of Europe’s General Data Protection Regulation, Google framed the choice about opting in to behavioral advertising cookies as one that would either enhance the user experience or “diminish functionality.” When framed that way, we often choose the positive option, even if it goes against our rational privacy preferences.
Platforms and apps take advantage of the problem of overchoice by inundating us with consent options. They do so in part because they know we can’t adequately handle the hundreds of choices we have to make about cookies, location tracking, registrations, updates, and data transfers on the hundreds of apps and websites we use. And because many of us think it’s impossible to protect our privacy online, we give up, stop making these consent choices, and the disclosure-oriented defaults stand.
And because platforms know that we are bad at balancing potential future risks against current rewards, they offer us meager benefits in exchange for the ability to extract trillions of dollars in value from us.
The Power of Design
Even if these cognitive barriers didn’t exist, tech companies could still take advantage of their power over design to manipulate us into sharing.
It’s hard to overestimate the awesome power of design. If you create a built environment, online or otherwise, you can direct, manage, and even predetermine behavior. That was one of Don Norman’s central conclusions in his famous book, The Design of Everyday Things. Science and technology studies (STS) scholars have made similar points. More recently, privacy scholars have used STS research and Norman’s insights to explain how digital platforms control, constrain, and manipulate our behavior. As Woodrow Hartzog has noted, “[t]he realities of technology at scale mean that the services we use must necessarily be built in a way that constrains our choices.” In other words, we can only click on the buttons or select the options presented to us; we can only opt out of the options a website allows us to opt out of.
This doesn’t happen by accident. When the design of online environments is controlled by an industry whose business model is based on gathering information about us, and dominated by a class of engineers that has long had a dim view of privacy, platforms are designed to suppress our privacy instincts.
Online platforms are actively manipulating us into sharing more than we want by leveraging so-called “dark patterns” in platform design. A group of Princeton scholars working in this area define dark patterns as “interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make.” These “dark patterns” are increasingly common. Research has shown that dark patterns confuse users by asking questions in ways nonexperts cannot understand, obfuscate by hiding interface elements that could help users protect their privacy, require registration and associated disclosures in order to access functionality, and hide malicious behavior in the abyss of legalese privacy policies. Dark patterns also make disclosure “irresistible” by connecting information sharing to in-app benefits. In these and other ways, designers intentionally make it difficult for users to effectuate their privacy preferences.
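To make those mechanics concrete, here is a deliberately hypothetical sketch, in TypeScript, of how the tactics the researchers describe can be baked into a consent interface. The option names, prompts, and the resolveConsent helper are invented for illustration; they are not drawn from any cited study or real platform.

```typescript
// Hypothetical illustration only: a toy model of the dark-pattern tactics described above.
// None of this is real platform code; every name and option is invented for the example.

interface ConsentOption {
  id: string;
  prompt: string;           // how the choice is worded to the user
  defaultValue: boolean;    // what happens if the user never engages
  buriedInSubmenu: boolean; // whether the control is hidden behind extra clicks
}

// A consent screen built the way the research describes: confusing wording,
// disclosure-oriented defaults, privacy controls tucked out of sight, and
// rewards tied to sharing.
const consentScreen: ConsentOption[] = [
  {
    id: "behavioral-ads",
    prompt: "Don't not personalize my experience", // question nonexperts cannot parse
    defaultValue: true,                            // sharing is the default
    buriedInSubmenu: false,
  },
  {
    id: "location-history",
    prompt: "Limit functionality by disabling location services", // privacy framed as loss
    defaultValue: true,
    buriedInSubmenu: true,                         // opt-out hidden behind extra clicks
  },
  {
    id: "contact-upload",
    prompt: "Unlock friend suggestions by syncing your contacts", // disclosure tied to in-app benefit
    defaultValue: true,
    buriedInSubmenu: true,
  },
];

// When overwhelmed users give up without engaging, every disclosure-oriented default stands.
function resolveConsent(
  options: ConsentOption[],
  userChoices: Map<string, boolean>
): Map<string, boolean> {
  const resolved = new Map<string, boolean>();
  for (const option of options) {
    resolved.set(option.id, userChoices.get(option.id) ?? option.defaultValue);
  }
  return resolved;
}

// Example: a user who never opens the buried submenus ends up "consenting" to everything.
console.log(resolveConsent(consentScreen, new Map()));
```

The point of the sketch is the last line: an overwhelmed user who never digs into the buried submenus ends up “consenting” to everything, because the disclosure-oriented defaults decide for them.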
It should come as no surprise, then, that our disclosure behavior doesn’t always match our privacy preferences. Platform design, which triggers certain cognitive barriers, stands in the way.
This raises an important question: What to do about it? The Federal Trade Commission (FTC), the de facto privacy regulator in the United States, has been slow to recognize both the manipulative power of design and the anti-privacy and anti-competitive function of dark patterns. That is in part due to the lack of experts on the FTC’s staff and structural weaknesses built into the FTC after years of neoliberal deregulation and regulatory paranoia. Scholars from law, sociology, computer science, and engineering must work with regulators to get them up to speed; indeed, such experts need to be on the FTC’s staff as well. So informed, any regulator actually interested in constraining technology’s excesses can root out and challenge the use of some manipulative dark patterns as “unfair or deceptive” business practices. And state or federal privacy legislation should include fiduciary obligations for data collectors, which would, at a minimum, make it unlawful for platforms to enrich themselves while harming us and damaging our interests. That would include the manipulative use of dark patterns to goad us into disclosure against our interests. Only this kind of real structural change can even begin to address an asymmetrical digital economy built on the back of lies, deception, and manipulation.