Google and the Right to Be Forgotten

The “Right to Be Forgotten” doctrine is controversial in the United States in ways that frankly surprise most Europeans. Europeans are accustomed to the perennial struggle to balance the rights of the individual against those of the community, and are unaccustomed to the deference given in the U.S. to large corporations. So when a “leading French consumer group filed a class-action lawsuit” last week “accusing Google of violating the European Union’s landmark 2018 privacy rules,” it hardly raised an eyebrow in the European press, but plenty has been said about it in Google’s home country. 

The lawsuit was filed in an administrative court in Paris by UFC Que Choisir, a consumer advocacy group. It seeks a little over a thousand dollars in damages for each of 200 users. The EU rules in question are known as GDPR (General Data Protection Regulation), and they prohibit operators from publishing private data on non-newsworthy individuals. Google has responded in the media by saying its privacy controls are consumer-driven and adequate. 

Some legal scholars argue that the U.S. should follow suit, implementing a right-to-be-forgotten doctrine to protect both adults and children from online humiliation and bullying, not least because parts of the unregulated online community can, as the Connecticut Law Journal pointed out years ago, be literally violent and traumatic. Understanding why some people believe we actually can balance privacy and freedom of speech requires at least acknowledging that there are good arguments for excluding “a right so broad that wrongdoers and corporations can expunge important data relevant to, for example, consumer and investor decision making.” 

This is in part a technological question. “The Internet does not have to preserve information forever,” and so proponents of a right to be forgotten are essentially saying that technological possibility needn’t determine ethical permissibility. RTBF proponents say that an egalitarian society requires the right to a private life separate from capitalism’s colonization of public life, if such a separation is possible. There is also a wide gulf between the data-append, email, and consumer marketing data long used to reach consumers privately and the detailed public dossiers of individuals created by, and accessible through, today’s internet giants. 

Julia Powles, a researcher in law and technology at the University of Cambridge, argues that the private “sphere of memory and truth” must be kept separate from public memory in order to preserve the part of freedom that keeps egalitarian values from hardening into inegalitarian hierarchies. Homeland Security NewsWire’s Eugene Chow argues that the European rules have made life better for internet users, and that Europe’s “conception of privacy” could set a model giving “Americans [ . . . ] a legal weapon to wrestle control of our digital identities.” Even robust RTBF policies have limits, though: people can’t hide information merely to make their lives more convenient or to escape accountability. 

On the other hand, there are good reasons to be skeptical of RTBF. It lacks clear standards and checks and balances, and, as Jodie Ginsberg of Index on Censorship points out, the appeals process isn’t what Americans would expect. Speaking of one particular EU court decision, Ginsberg calls it a “flabby ruling” and argues that, even aside from the free speech questions, there are practical resource issues: “The flood of requests that would be driven to these already stretched national organisations” handling RTBF claims should deter people from turning the right into “a blanket invitation to censorship.” And to be sure, such requests will include public figures seeking to game the rule, as in the case of an actor requesting the removal of news articles about an affair with a teenager, or the politician who wanted stories of their erratic behavior wiped.

But in the final analysis, the experiences of totalitarianism in the 20th century suggest that if we can’t trust governments with our private lives, we can’t trust corporations either. Both the far right and Stalinism produced, as one scholar puts it, “shockingly tight surveillance states.”

As Jeffrey Toobin points out, it was the EU’s proximity to such totalitarianism that led to the “promulgat[ion of] a detailed series of laws designed to protect privacy.” We really can’t trust hierarchical authority, whether it comes from the market or the ballot box, even if we have to work with such forces. So even if it’s not an EU-style RTBF, the experience of Google in Europe suggests that Americans should at least come up with some reasonable guidelines to protect the privacy of non-newsworthy data—whatever we might decide, through endless deliberation, that might mean.