Mr. Obama, Tear Down This Liability Shield

Online trolls have launched another barrage of attacks in the strange, petty little war over “ethics in journalism” we call GamerGate.

The latest escalation came after perennial troll targets Anita Sarkeesian and Zoe Quinn testified before the UN about the toxic effects of online harassment and the need for something to be done about it.

As a result, we’ve seen a wave of bizarre, hysterical conspiracy theories claiming that very soon the Internet will become a centralized service ruled by UN censors with an iron fist. (Something that is logistically, technologically and legally impossible — and if it were possible, would not suddenly happen overnight because of GamerGate.)

It didn’t help that the United Nations was characteristically less-than-competent at drafting a paper to address the issue of online harassment.

Most of the useful things Sarkeesian and Quinn said at the panel were undermined by a slipshod and poorly-thought-out report presented alongside it.

Again, the idea that the US government — or any other government — would have the legal authority to serve as a centralized “licensing board” with the power to shut down websites they deem to be abusive is legally and ethically troubling and, more importantly, utterly unworkable.

It’s nearly impossible to apply the same logic by which the FCC regulates the “public airwaves” to Internet services, which exist on privately owned servers and clients that communicate over privately owned cables, phone lines and fiber optics.

And after revelations about the shady actions the federal government is already taking with our data, I doubt any of us are in a hurry to grant the federal government direct power over which websites stay up or which ones shut down.

But by attacking this overly radical proposal in the article I linked above, Caitlin Dewey at the Washington Post conflates two different issues. Contrary to what free-speech absolutist organizations like Wikileaks claim, we do not face a binary choice between creating a centralized regulatory authority and shrugging our shoulders and saying the laissez-faire cesspit that is the modern Internet is none of our concern.

We have, here in the United States, a system by which wronged parties can seek redress from those who wronged them, and those who willfully enabled that wrong, without proactive control by government bureaucrats. It’s one that even ardent libertarians imagine as being part of how their ideal “small government” would work. And it’s a highly American tradition: one that’s been identified as central to American culture since the days of Alexis de Tocqueville.

I’m talking, of course, about lawsuits. Civil litigation. Bringing in the lawyers.

Right now you can’t sue digital platforms for enabling harassment on their services, even if they enable harassment through flagrant, willful neglect. If your harasser is able to take fairly basic steps to keep himself anonymous — and if the platform he chooses enables and enforces that anonymity — then there is literally nothing you or the government can do, even if his actions rise to the level of major crimes like attempted murder.

Closing this loophole wouldn’t require giving the Internet “special treatment” compared to other forms of communication. Nor would it require a sudden, major deviation from the standards of free speech most of the developed world respects.

It would require the exact opposite — it would require the United States to remove a law that specifically mandates special treatment for Internet service providers and platforms that no other communications medium has.

Far from turning us into China or North Korea, it would bring the United States into line with every other developed country in the world, including our close allies in Canada and the UK. It would remove the competitive advantage that keeps most social media companies headquartered in the US despite the talent and capital available in other nations: a law that makes us a liability haven.

The law is called Section 230 of the Communications Decency Act. It was passed in 1996, when the Internet was still a novelty rather than an integral element of commerce and daily communication for pretty much all Americans. It was passed at a time when “doxing” and “swatting” had already happened but were not yet known by those names, because such practices were relevant only to a small community of self-identified nerds.

It is long past time it was repealed.

Section 230 of the CDA, paradoxically, initially existed to encourage online platforms to be proactive about filtering, blocking and sanitizing content.

It was drafted in response to a court case (Stratton Oakmont v. Prodigy) in which the online service Prodigy was found to meet the definition of a “publisher” because it was capable of taking down specific message board posts and had done so in the past; it was therefore held liable for a libelous post it failed to take down.

Section 230 was added to the CDA as part of the CDA’s overall goal of “cleaning up” obscene material on the Internet. By making it “safe” for online services to filter or block content without incurring legal liability for everything posted on the site, lawmakers hoped to spur the advancement of content-filtering technologies, reasoning that keeping the bad stuff off your site could only be good for business in the long run.

In a great historical irony, most of the CDA was overturned as unconstitutional, and the part that remains, Section 230, has taken on the role of preventing online services from cleaning up their content, because, it turns out, harassing, destructive content is profitable. With the rapid, massive scaling-up involved in Web 2.0, social media companies have decided that, as long as they can’t get sued, the costs of enforcing their terms of service outweigh the benefits.

No one in 1996 predicted the 2000s would see the massive influx of “user generated content” that defines Web 2.0. No one foresaw the incredible profitability of a business model based on creating no content at all of your own, but instead monetizing clicks on your users talking to each other.

I’m sure the judge who decided the Zeran v. AOL case never expected that Mr. Zeran’s story would soon become an endemic feature of life in the 2010s.

A troll provoked mass harassment of a random individual through libel and doxing, and AOL was clearly, willfully negligent in refusing to do anything proactive about the trolling, simply because the company would not take the time or energy to track down the anonymous troll and disable his account.

At the time, this probably seemed like one of those weird, wacky “only on the Internet” stories that served as a cautionary tale that people shouldn’t “go on the Internet” unless they’re “Internet-savvy”. At the time, preserving the principle of Section 230 must’ve seemed like the important thing.

Now people get doxed every day, and every day SWAT teams are weaponized to destroy property and put people’s lives at risk.

Now “Don’t go on the Internet” is advice as ridiculous as “Don’t use the telephone” would’ve been in 1996, or “Don’t use the mail” would’ve been in 1916: to sever oneself from Internet services would mean severing oneself from where most social interaction and economic activity takes place.

Social media companies, however much their marketing departments may instruct them to talk a big game about being anti-abuse, have an active financial disincentive to actually be anti-abuse.

 

We have companies that, whatever their intentions were at first, found that the way to attract big bucks from investors was to demonstrate exponential user growth as early and as rapidly as possible. This means that, unlike the days when online services made money by getting users to pay for things directly, more posts, more tweets, more clicks (more “engagement”) are directly profitable for social media companies, no matter what the nature of that engagement is.

Abuse can never have direct, immediate costs as long as the possibility of lawsuits is off the table, and truly robust anti-abuse initiatives break the myth of tech startups being exponential money-printing machines.

The wonderful “scalability” of writing a little bit of code and getting a whole ton of “engagement” in return breaks down once you have serious anti-abuse measures, because, unfortunately, policing abuse can still only be done by real human beings. Hence “community management” becomes a job that’s kept as understaffed and underpaid as the company feels it can get away with.

And now we clearly see that harassment has what anti-discrimination lawyers call a “disparate impact.”

When content curation is more about keeping up superficial appearances than avoiding genuine liability, it can be as shallow as you want it to be; hence the infamous disparity in response times between a celebrity Facebook or Twitter user’s complaints and a mere mortal’s.

White men, who made up most of the visible userbase in 1996, came into Web 2.0 with a sense of intrinsically belonging there; women and minorities, by contrast, get treated as outsiders, blasted with far worse harassment for speaking out and more likely to be brushed off when they complain about it.

To borrow another phrase from discrimination law, the Internet is a fundamentally “hostile work environment” for women and minorities who spend time online, but there’s no entity that can be held responsible for it.

Dealing with abuse becomes part of the hidden tax that anyone who tries to work in media and tech as a non-white-dude ends up paying in time and energy. To switch back into tech jargon, online abuse has become an unending series of denial-of-service attacks aimed at humans rather than machines, and disproportionately targeting women. (To say nothing of literal denial-of-service attacks.)

I remember watching in 2007 as one of the first high-profile harassment lawsuits against anonymous trolls on the “modern” Internet unfolded. The anonymous troll board at the center of the case, AutoAdmit, catered to an audience of law students, and its users primarily targeted female law students for harassment.

The board was notorious for being a place for trolls to gather and talk shit about people they chose to target for the explicit purpose of ruining their reputation and their lives. The admins had been specifically informed of and were well aware of the damage the abusive posts were doing, but refused to take them down, and did not cooperate at all in seeking to reveal the real identities behind the abusers’ pseudonyms.

If there had been any possibility of an “exception” in case law to the interpretation of Section 230 as a catch-all liability shield, that would have been the time. But it didn’t happen. Section 230 held firm. The admins were dropped from the suit. Afterwards the lawsuit largely fizzled, because it is exceedingly difficult to take someone to court when all you know about them is that they posted under the handle “HitlerHitlerHitler.”

Since then, people have only gotten bolder.

We now have sites like 8chan, refuges for people who find even the notorious 4chan too censorious, that openly provide cover for users who dox federal judges. We have violent terrorist organizations like ISIS openly using the Internet as a recruitment tool. We have major sites like Reddit proudly proclaiming that they have neither a legal nor a moral obligation to avoid participating in a sex crime.

It can’t go on like this. The EFF, an organization I generally respect, put forward a spirited defense of Section 230 in 2012, saying that without Section 230 those wonderful viral-growth services like Facebook, Twitter, and YouTube couldn’t exist in their current form. It goes on to argue that individual bloggers are protected by Section 230 from liability for their comments sections.

It ignores that Facebook, Twitter and YouTube are excellent tools for stalking, harassment, defamation and all manner of harm, that lives have been lost, careers destroyed, money thrown down the drain because of unaccountable users using unaccountable platforms. It ignores that the whole unquestioned “tradition” of the unmoderated comments section has led to a tradition of trolling, vitriol and lies that make the Internet a worse place and make bloggers who host them worse off.

The EFF warns us how much costlier the Internet would be if we had to pay for lawyers, content managers and filtering tools all the way at the very beginning of a social media startup’s lifespan to protect against potentially fatal lawsuits. They raise the spectre of the bounteous wonderland of a Web 2.0 filled with “free” content going away, being replaced by subscription fees or microtransactions.

I would reply that the Web as it currently exists is already costly, tremendously so. The cost is just mostly borne by people the tech world doesn’t regard as particularly valuable: the teenager bullied into suicide; the activist doxed and forced to flee her home; the lawyer whose professional reputation is ruined by libel; the idealistic tech visionary who abandons her career because the daily emotional grind is eventually too much to take.

Section 230 is nothing more or less than an open declaration by the government that it is unfortunate and vulnerable users who have to bear these costs, and — unlike any other kind of publisher, unlike people who print books or print newspapers or air TV shows — the people who dole out the power to instantly publish anything they want online should bear no responsibility or risk. Even if it’s revenge porn, even if it’s a phone number or address, even if it’s an open death or rape threat.

I’m no libertarian, but I was taught that a core value of libertarianism is “personal responsibility”: empowering individuals to seek redress, through the courts or through the market, against people who harm them, rather than relying on government regulators to preemptively keep them safe.

I’m not calling for a new law to be passed or a new agency to be created. I’m calling for a law to be repealed. I’m not calling for Internet users to be singled out. I’m calling for the Internet to not be singled out, for the artificial and stupid shield between the Internet and the “real world” that enables the Internet to be a lawyer-free zone and thus a massive unaccountable sewer of abuse to be torn down.

The year is 2015, and for over a decade now things have been going from bad to worse. How much worse do they have to get before we act?

Mr. Obama: Tear down this shield.