When Limiting Online Speech to Curb Violence, We Should Be Careful

Wired 09 Aug 2019 01:00

America's ongoing problem with mass violence—and the difficulty we are having in quelling it—is causing many to call for the elimination of online forums used by the perpetrators. Long-term congressional gridlock on other solutions has many looking for new prevention strategies and new people to hold responsible.

The hope—for some it may be a belief—is that eliminating online speech forums will help prevent future violence. This is understandable. Everyone wants to live in a country where they are safe in their local stores, at festivals, and in other public places. The vile ideas driving shooters whose actions have caused unspeakable pain and loss are in plain view on 8chan, and the thought that we could just make them go away has strong appeal.



Cindy Cohn is the executive director of the Electronic Frontier Foundation.

But this is also a critical moment to look closely at what is being proposed and pay attention to the potential consequences for us all. We all rely on our internet communities for connection, information, and organizing against violence. The same mechanisms used to eliminate online forums hosting objectionable speech are all too often used to silence the voices of marginalized people who have important things to say, including those drawing attention to hate and misogyny. Rules prohibiting violence have already taken down Syrian YouTube channels documenting human rights violations, while Facebook discussions by black Americans about racism they have experienced have been removed as hate speech.

Two key strategies have emerged to hold online forums responsible for violence: deplatforming and increasing the liability imposed on internet intermediaries by changing Section 230 of the Communications Decency Act (CDA). Both strategies are notable because they are not directly aimed at the perpetrators of violence, or even at others who are participating in the hateful forums. They are instead aimed at the chain of companies or nonprofits that host the speech of others. For either approach, there is reason to tread carefully.

Deplatforming is a nonlegal strategy that involves pressuring companies to stop hosting or providing services to certain individuals or forums, thus removing them from the internet entirely or making them harder to find. This strategy recognizes that everyone who speaks online is dependent on a series of intermediaries, including direct ones like Facebook or YouTube and ISPs like Comcast or Verizon. They also include indirect intermediaries further upstream from the user, such as website hosting services, domain name registrars and domain hosts, and DDoS protection services like Cloudflare, which is currently in the news for cutting off services to 8chan.

The second strategy is a legal one that would open all of the above intermediaries to potential lawsuits by modifying CDA 230. Paradoxically, this law was passed to ensure that hosts could moderate content on their sites, protecting them from liability both for taking speech down and for leaving it up. CDA 230 is rightly regarded as the law that allows all of us to participate and speak out online, since few hosts could survive if they had to face potential lawsuits every time someone criticized a company (Yelp) or said something that turns out to be wrong (Reddit or Wikipedia). Despite its primary benefits to us as speakers, CDA 230 has become the convenient, and often mistaken, scapegoat for those angry at technology companies for any number of reasons.

Both strategies have surface appeal in response to hateful speech. It can feel viscerally good to try to shut down these forums or chase them from host to host, or to hold someone accountable even if it is for what is said by others. But once that power to silence is unleashed, whether through pressure or threats of lawsuits, it doesn't just go in one direction. The power to stop someone you hate from speaking can be used to stop speech by someone you love, or your own speech. That power will be used by those who wish to silence their political enemies, including governments and big companies around the world. In our 30 years of helping people make their voices heard online at the Electronic Frontier Foundation, we have seen how censorship reinforces power far more than it protects the powerless.
