“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Section 230 of the 1996 Communications Decency Act has been described as ‘The Twenty-Six Words That Created The Internet’.
But there now seems to be a consensus among politicians of left and right that social media platforms should be liable for the content their users post, just as a newspaper would be legally responsible for a columnist’s libellous screed.
In his last few weeks in office, Donald Trump made revoking Section 230 a key priority. At one point, he even vetoed the National Defense Authorization Act to try to force the hand of Congress. Unsurprisingly, this stunt failed.
But the threat to Section 230 remains as Joe Biden isn’t a fan either. Last year, he said “Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms.”
But treating platforms as publishers is an idea so wrongheaded that only a politician could come up with it.
There are a bunch of good reasons to keep Section 230-style protections. For one, competition. Google, Facebook and Twitter might be able to hire an army of moderators and develop sophisticated algorithms to take down most offending content before it is even uploaded, but a startup competitor experiencing a surge in users might not.
Alternatively, platforms may decide to become anything-goes free-for-alls. If promoting or moderating content leads to platforms being treated as publishers, then they may withdraw from moderation altogether. In fact, part of the original rationale for Section 230 was to give platforms greater leeway to moderate sexual content.
The Electronic Frontier Foundation, an organisation that helped shape the case law around Section 230, notes: “The point of 230 was to encourage active moderation to remove sexual content, allowing services to compete with one another based on the types of user content they wanted to host.”
Some moderation is clearly valuable. Given the choice, the vast majority of users end up picking moderated spaces over unmoderated ones. I’m reminded of Groucho Marx’s line: “I refuse to join any club that would have me as a member.” In principle, I’m opposed to censorship and in favour of the free and robust exchange of ideas. But if your platform’s key selling point is that you do not censor content, then it will end up full of undesirable content that other platforms would remove.
Scott Alexander puts it neatly: “There’s an unfortunate corollary to this, which is that if you try to create a libertarian paradise, you will attract three deeply virtuous people with a strong commitment to the principle of universal freedom, plus millions of scoundrels. Declare that you’re going to stop holding witch hunts, and your coalition is certain to include more than its share of witches.”
I’m more than happy for unmoderated spaces to exist, but I doubt I’d find them enjoyable places to spend time. There’s been an underreported trend, first observed by Sam Bowman, of people retreating from public forums like Twitter to private chat groups. I think the rise of semi-closed spaces such as invite-only Discords or Slacks is a response to the demand for higher-quality conversations and an escape from the playground-bullying-style pile-ons that can happen on Twitter.
But while there is a place for some moderation, the strongest case against repealing Section 230 protections is that tech platforms are terrible at moderation, and repeal will entail much, much more content regulation. If you make platforms liable for all the content they host, then the algorithms they employ will inevitably end up taking down legitimate content.
In fact, they already do. Reader, I was one of their victims.
For the past few hours, I’ve been in Twitter jail for promoting misinformation related to COVID-19. My crime? Comparing a housing demand denier to an anti-vaxxer. To compound the irony, two tweets earlier I had suggested the Conservatives should remove the whip from Sir Desmond Swayne on the grounds that he had offered encouragement to anti-vaxx groups.
I can’t really be mad at having my account locked. I feel pretty confident that no human was involved with this decision. The problem is that most moderation will inevitably be done by algorithms.
This is unavoidable when 400 hours of video are uploaded to YouTube every minute and 500 million tweets are posted every day.
Algorithms, however, struggle to pick up on nuance, sarcasm, and irony. They’re forced to focus on keywords or phrases, such as “Vaccines contain microchip trackers”, while ignoring the context.
Source: XKCD
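To make the failure mode concrete, here is a minimal sketch of keyword-based flagging in Python. The phrase list and the `flag_tweet` function are hypothetical, not anything Twitter actually runs, but they show why a filter that only matches strings treats a tweet debunking misinformation exactly like the misinformation itself.

```python
# Hypothetical keyword filter, for illustration only: this is not any
# platform's real moderation pipeline.
BANNED_PHRASES = [
    "vaccines contain microchip trackers",
    "5g causes covid",
]

def flag_tweet(text: str) -> bool:
    """Flag a tweet if it contains any banned phrase, case-insensitively."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

misinfo = "Wake up, sheeple: vaccines contain microchip trackers!"
debunk = "Claims that vaccines contain microchip trackers have been debunked repeatedly."

print(flag_tweet(misinfo))  # True: correctly flagged
print(flag_tweet(debunk))   # True: false positive, because context is ignored
```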
That kind of blunt matching reminds me of when I used to post on the Football Manager forum, which had a ban on swearing enforced by a simple find-and-replace function (a toy reconstruction is sketched below). “S****horpe United FC” was a common sight. In this particular case, they could probably have coded an exception around it. But it is extremely difficult to code an exception for sarcasm or irony.
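The word list and `censor` function here are my own guess at the mechanism, not the forum’s actual code: a blunt, case-insensitive substring replacement with no notion of word boundaries.

```python
import re

# A guess at the forum's mechanism: naive substring replacement with no
# word-boundary check, so banned strings are masked even when they
# appear inside innocent words.
PROFANITY = ["cunt"]

def censor(text: str) -> str:
    """Mask every occurrence of a banned substring, case-insensitively."""
    for word in PROFANITY:
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(censor("Scunthorpe United FC"))  # -> 'S****horpe United FC'
```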
I’m far from the only case of algorithmic content regulation failing. In a paper for the Adam Smith Institute a year or so ago, I listed some more examples. But one example stands out as almost too good to be true.
In 2017, Germany passed the Network Enforcement Act (commonly known as NetzDG), a law that fines sites up to €50m for failing to remove obviously illegal content, including hate speech. A year later, Twitter deleted a tweet by Heiko Maas, the Justice Minister who wrote the bill, in which he called an anti-Muslim writer an idiot.
Algorithms are useful, and will become even more useful in the future. But when it comes to content moderation, they have a long way to go before they can do an adequate job. The simple case for keeping Section 230 is not that content moderation is always undesirable, but that it’s really, really hard.