So far, the biggest perceived effect of the most important data-privacy law ever has been a sharp increase in emails from social networks and web services alerting me that those endless contractual walls of text you thoughtlessly click OKAY on — privacy policies, data policies, and/or terms of service — will change, most of them conspicuously on the same date, May 25, 2018. Almost none of them mention what, exactly, has led to this mass update: the European Union’s General Data Protection Regulation, or GDPR.
GDPR becomes enforceable, as keen deductive minds might gather, on May 25, two years after it was adopted by the European Parliament and just a few months after Facebook’s Cambridge Analytica scandal turned data privacy into a top issue for every ambitious politician and excitable journalist in the U.S. and Europe. Its expansive scope will force corporations to change the way they do business; if it works well, it will likely serve as a model for much-needed privacy legislation in the U.S. And even if Congress fails to follow Europe’s lead, the law could shift the balance of power online. Somewhere inside the emails that I and millions of others have reflexively ignored is the beginning of a process that will transform the face of the internet.
What does the GDPR do? First, it creates a set of legal responsibilities for data-gathering and data-processing companies, and second, it creates rights around personal data. Those rights protect anyone geographically within the E.U. or anyone outside it whose data is being harvested or processed by any company established in the E.U. The GDPR effectively turns the global nature of the internet to its advantage: There are not many internet companies that have no offices or employees or users somewhere in Europe.
Maybe the most important of tech companies’ new responsibilities is the GDPR’s insistence that data harvesters obtain “specific, informed and unambiguous” consent from users. That is, websites and apps now need to make it very clear that they want to harvest your data and why — e.g., “so we can target ads more efficiently” or “to sell it to third parties” — and they need to make it easy for users to say no: So-called dark patterns that passively compel consent, like pre-checked boxes, are explicitly banned. The GDPR also mandates that protecting users’ data be a fundamental concern in the design of any new products and that those products must be subject to privacy testing even in early stages.
Some of the rights users are guaranteed are simply the inverse of these responsibilities; for example, users have a right to be clearly and intelligibly informed when their data is being collected. Others, like the right to a copy of your data, are designed to give users more control over their digital selves. Some of the rights could have a profound impact, like the “right to erasure,” which gives users the power to demand collected data be deleted from companies’ systems, and a family of rights related to “automated individual decision-making” that protect users from the vagaries of algorithmic decisions. If, say, a GDPR-protected user applies for a bank loan online and is denied based on the automated, data-based calculations of the bank’s system, he or she has the right to contest that decision, to demand human intervention, and, most important, to insist on regular audits of those algorithms. (It’s significant that the bank is also obligated to make the applicant aware of those rights.)
This all seems fairly sensible and in the public interest, which means of course that many of the companies that have built hidden empires quietly slurping up, packaging, manipulating, and trading user data hate it. Facebook is already pushing the boundaries of what’s acceptable: Its new terms-of-service dialogue features a big button that says I ACCEPT. What if you don’t accept? Well, if you squint you can see a tiny little link that says SEE YOUR OPTIONS — a clear violation of the spirit, if not the letter, of GDPR’s anti-dark-patterns clause.
But Facebook has it comparatively easy; at least it — and other consumer-web giants like Google, Spotify, and Amazon — has a prior relationship with the users whose consent it needs to obtain. Little-known companies behind widely loathed internet practices, like the third-party ad-retargeting firms responsible for, say, a shoe ad following you across the internet, now have to obtain explicit permission from each user to keep serving up ads. The effect of the consent laws will be a bit like exposing a colony of termites that has been living in your home, each of which is then forced to introduce itself to you, one by one, and ask if it can please stay.
For companies whose entire business model was users not really understanding the entire business model, the cost of direct sunlight may just be too high. Unroll.me, a company that offers to automatically declutter your in-box (while, uh, selling the insight it gleans from your data to companies like Uber), announced that it will no longer serve E.U. customers.
If enough companies follow this lead, one practical effect might be a split internet, with one set of GDPR-compliant websites and services for the E.U. and another set with a somewhat more, let’s say, relaxed attitude toward data for the rest of the world. But even a loosely enforced GDPR creates conditions for improving privacy protections beyond Europe. Facebook, for example, has already said it will extend GDPR-level protections to all of its users — if they opt in to them.
This doesn’t mean that a GDPR-style law is unnecessary in the U.S. As written, GDPR would be difficult to implement here — the “right to erasure” could run afoul of the First Amendment, for starters — but many of its key concepts would be easy to transfer. Whether there’s political will to do so is a slightly different question. In November, Californians will vote on a ballot initiative that would extend many of the same protections. It’s encouraging to supporters that GDPR was frequently referenced in questions to Mark Zuckerberg when he testified before Congress — but so far no federal bills with GDPR’s scope or teeth have been proposed.
That’s not surprising. The culture of the internet has always maintained a justifiable wariness toward the idea of national sovereignty, which is thought to impinge on the freedom of the global network. But GDPR might be better thought of as an assertion of user sovereignty — an insistence that users have the right to control what they produce, and how they’re understood, on the network.
And if GDPR really does herald a shift in the balance of power on the internet, it’s not without risk. The corporate sovereigns may have created a depressing, buggy, heavily surveilled internet, but it’s also an enormously profitable one, and the revenue structure and incentives it encourages are now deeply embedded in the tech industry, not to mention the economy at large. The industry group Interactive Advertising Bureau claims digital ads add $625 billion to Europe’s economy and serve as the underlying business model for the bulk of Europe’s publications. A strictly enforced GDPR would raise the cost of doing business and limit the potential revenue — the IAB says data-driven ads are worth three times as much as non-targeted ads.
If GDPR is taken seriously by regulators and corporations, a lot of that money might disappear. Some of the publishers reliant on digital advertising might, too, and not just the clickbait-y ones. Drained of a significant portion of its ad money, the internet could be less of an economic engine for Europe. But it wouldn’t necessarily be worse. There’s a potential utopian outcome, too: Reliant on subscriptions rather than advertising, publishers might produce better journalism. More aware of the cost of free social networks, users might find themselves paying new competitors to Facebook or Twitter. And freed of the extra load of third-party ad tech, websites might even be faster. Certainly, there would be less auto-playing video.
This article appears in the May 14, 2018, issue of New York Magazine.