
Tech companies declare war on hate speech—and conservatives are worried

In light of Charlottesville, Silicon Valley revisits its absolute approach to free speech.

Cloudflare CEO Matthew Prince at a 2014 TechCrunch Disrupt conference in London.

"One of the greatest strengths of the United States is a belief that speech, particularly political speech, is sacred," wrote Cloudflare CEO Matthew Prince in a 2013 blog post. Both then and now, the CDN and Web security company has protected websites from denial-of-service attacks that aim to drown out targets with fake traffic. Prince vowed that this service would be available to anyone who wanted it.

"There will be things on our network that make us uncomfortable," Prince wrote. But "we will continue to abide by the law, serve all customers, and hold consistently to a belief that our proper role is not that of Internet censor."

Recently, this stance put Prince in a distinctly uncomfortable position. Cloudflare was providing service to the Daily Stormer, a neo-Nazi website that published an article trashing Heather Heyer, a victim of lethal violence during the Charlottesville protests. Under pressure from anti-racism activists, Cloudflare dropped the hate site as a customer, a move that took the Daily Stormer offline for more than 24 hours.

Perhaps this situation sounds familiar. "We uphold the ideal of free speech on reddit as much as possible," Reddit said on the official company blog in 2014. But within months, the site started banning communities devoted to racism and misogyny. Twitter similarly declared in 2012 that the company represented "the free speech wing of the free speech party." But more recently it has ramped up efforts to combat harassment on its platform, notably banning right-wing Internet troll Milo Yiannopoulos.

Free speech purists have traditionally drawn a sharp distinction between words and actions. As Prince put it in 2013, "a website is speech. It is not a bomb." But as the Internet becomes ever more deeply intertwined with our daily lives, that distinction doesn't always seem so clear-cut. Online mobs have found ways to cause serious offline harm: stealing and publishing private information, harassing employers and family members, flooding social media accounts with threats and vitriol, and so forth. Women and minorities often bear the brunt of these tactics.

Simultaneously, a shifting political climate has changed how people think about online hate speech. Before Donald Trump's election—and before racist gunman Dylann Roof murdered nine black people in Charleston in 2015—it was easier to dismiss sites like the Daily Stormer. Yes, they were watering holes for an evil ideology, but the people posting there seemed like powerless kooks. Nowadays, more people are taking words and threats from extremist groups seriously. Liberal activists have mobilized against these groups, and they've found a receptive audience among technology leaders.

So while it attracted unusual attention because of the events in Charlottesville, Cloudflare's decision is merely the latest example of a broader trend that has been underway for the last few years. Technology companies that once viewed themselves as zealous defenders of free speech have started carving out a big exception for hate speech and online harassment.

Almost everyone can agree that the Daily Stormer is an odious hate site, but critics have warned of a slippery slope. They say companies are setting a dangerous precedent that could lead to censorship of less offensive speech. And conservatives in particular worry that technology companies will expand the definition of hate speech to cover conservative viewpoints on issues like illegal immigration and gay marriage. The question is how far this trend will go.

The "Unite the Right" fiasco was not the first time tech companies have been challenged to stand up to hate.
Enlarge / The "Unite the Right" fiasco was not the first time tech companies have been challenged to stand up to hate.
Chip Somodevilla/Getty Images

A sea change on harassment

In the early years of the Internet, debates over free speech often focused on types of speech that technology elites didn't find so troubling. Way back in 1996, webmasters blacked out their sites to protest the Communications Decency Act, which would have restricted online pornography. Key parts of the law were ultimately struck down by the Supreme Court.

In the 2000s, fights over online free speech often focused on Internet piracy. Big movie and music publishers urged technology companies to do more about the problem. But major technology companies often refused to go beyond what the law required of them, and they vigorously opposed the Stop Online Piracy Act, which would have created an official government blacklist of sites promoting piracy.

In another notable case, many technologists cheered on WikiLeaks as it fought US government efforts to stop the publication of classified documents leaked by Chelsea Manning. In 2010, WikiLeaks briefly lost its .org domain name as a result of US government pressure, but the activist group was able to stay online by switching to alternative domains like WikiLeaks.ch.

And critics of online hate speech didn't always find a sympathetic audience in Silicon Valley. In the early years of the Internet, technology leaders often insisted that racist and misogynistic content was just another part of the Internet's open, free-wheeling culture. As one famous 1996 statement put it: "all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits."

But a major inflection point came on August 31, 2014.

That day, hackers infamously released a trove of nude selfies stolen from the smartphones of celebrities. The theft of these photographs was clearly illegal, and their publication arguably violated copyright laws. And some of the celebrity victims were wealthy enough to hire good lawyers, who launched a concerted legal campaign to scrub the stolen photos from the Web.

In response, voyeuristic creeps across the Internet started congregating on the subreddit "/r/thefappening." They traded links to new copies of the photos, helping each other stay one step ahead of the censors. When people complained to Reddit management, the company refused to intervene, pointing to the site's strong free-speech policies.

"Current US law does not prohibit linking to stolen materials," CEO Yishan Wong wrote. "We understand the harm that misusing our site does to the victims of this theft, and we deeply sympathize. Having said that, we are unlikely to make changes to our existing site content policies in response to this specific event."

This position proved untenable. Under intense pressure from the media, lawyers, and some Reddit users, Reddit soon changed course and banned the controversial section.

This warning page greeted visitors to a certain distasteful subreddit.
reddit

Reddit initially insisted that its free-speech policies hadn't changed. Instead, the company said it banned /r/thefappening only after learning that some of the stolen photos were taken while their subjects were underage, potentially opening the site up to child pornography charges. But the decision in fact became a precursor to broader changes on the site. A year later, in 2015, Reddit banned a growing list of openly racist or otherwise hateful subreddits with names like "coontown" and "rapingwomen."

Today, Reddit remains one of the edgier sites on the Internet. But it's no longer the kind of anything-goes free-for-all it was until 2014.

Around the same time Reddit was dealing with /r/thefappening, Twitter—the site that used to brag about representing the "free speech wing of the free speech party"—came under pressure over the rampant harassment suffered by users on its site. "We suck at dealing with abuse and trolls on the platform and we've sucked at it for years," Twitter's then-CEO Dick Costolo admitted in early 2015.

Since then, Twitter has become much more aggressive about booting users who engage in or encourage abusive, mob-like behavior. Internet trolls like Yiannopoulos wielded the influence of their large Twitter followings to orchestrate harassment campaigns against other Twitter users, such as Ghostbusters star Leslie Jones. Last year, Twitter banned Yiannopoulos from the platform, and it has done the same for a number of others who have engaged in similar behavior.

Fundamentally, this shift in company policies has been driven by the increasingly deep integration of the Internet into people's daily lives. Over the last decade, the growth of the Internet and the rise of the smartphone have turned online behavior that was once relatively easy to ignore into something many people find intolerable.

Being on Twitter has become a near-requirement for success in certain professions. And as online platforms gained users, the potential for online harassment to reach into people's offline lives grew. The Internet's grassroots troll communities are now large and well-organized enough that they can make targets' lives hell for weeks or even months on end.

And so people in the real world—including powerful people like civil rights leaders and celebrities who had private photos stolen—started exerting a lot of pressure on prominent technology platforms to crack down on harassment and speech that demeaned people based on their race or gender. In the last few years, they've been getting results.

Those results haven't always been pretty. While crackdowns on far-right speech have gotten most of the attention in the last week, users across the political spectrum have long criticized technology companies for their seemingly arbitrary and opaque processes for moderating content on their platforms. On several occasions, civil rights activists have had their Facebook accounts suspended after they quoted and criticized racist messages sent to them by others. Users have also criticized YouTube for classifying videos discussing LGBT issues as sexually explicit even though nothing sexually explicit appeared in them.

In most of these cases, the platforms apologized for these decisions and reinstated the blocked content. But critics argue that companies need to do more to prevent these problems from happening in the first place—including hiring more moderators and establishing more transparent processes for appealing takedown decisions.
