In April, Elon Musk wanted to buy Twitter to save free speech. Now he’s apparently buying Twitter again, and the reversal raises questions about what exactly he meant.
Much of the attention has focused on Twitter’s moderation policies, particularly whether it would let people like former President Donald Trump back on the platform. But some of Twitter’s most consistent contributions to online speech don’t appear on the platform at all; they take place in ongoing lawsuits over privacy, anonymity, and liability. And just before Musk made his latest offer, Twitter dramatically raised the stakes in one of those fights.
It’s genuinely unclear what Musk plans to do with Twitter’s on-site moderation policy. On the one hand, he hates banning people: he has suggested he would bring Trump back and go easy on things like disinformation. On the other hand, he wants to make Twitter profitable and maybe more like WeChat, which would mean pushing a lot of offensive (or just plain annoying) content out of sight. His text logs include a proposal from Axel Springer CEO Mathias Döpfner that simply reads, “Step 1.) Resolve Free Speech.” And despite all of Musk’s lofty talk of a digital town square, his most common complaints are about spammers and bots; cracking down on them would mean less, not more, speech on Twitter.
Yet for years, Twitter has been one of the internet companies most consistently fighting legal demands that make people less likely to express themselves online. It has taken on a role that Musk could easily abandon, especially given his companies’ many entanglements with the government. And that risk arrives just as Twitter gears up for a Supreme Court showdown that could affect people across the internet.
Jack Dorsey defended a foundational internet law while his Google and Facebook counterparts equivocated
As Techdirt’s Mike Masnick noted after Musk’s initial takeover bid, Twitter has repeatedly fought to keep users’ personal data out of law enforcement’s hands, even as other web platforms have folded. In 2020, then-CEO Jack Dorsey was the only “Big Tech” leader to mount a candid defense of CDA Section 230 before Congress, warning that the law was a bedrock of online communications — while Alphabet’s Sundar Pichai timidly urged lawmakers to exercise caution, and Facebook (now Meta) CEO Mark Zuckerberg threw it under the bus completely.
Twitter doesn’t resist every law enforcement request. It has complied with European rules against things like hate speech, blocking Nazi and other far-right accounts within those markets. It did so even in the early ’10s, when it styled itself “the free speech wing of the free speech party.” More recently, it has consulted with US health authorities about removing COVID-19 disinformation, although the actual removals were voluntary. And its resistance is partly self-interested: most companies don’t want to be regulated or give up data!
But right now, whatever its motives, Twitter is embroiled in a particularly consequential legal dispute. On Monday, the Supreme Court agreed to hear a pair of cases that will weigh the liability of sites that host illegal content. One is a long-running suit against Google, alleging that YouTube’s recommendation algorithms aren’t covered by Section 230. The other is a suit against Twitter, alleging that it violated anti-terrorism law by not removing enough extremist content from its platform. (Notably, while Google is defending itself against an appeal, Twitter proactively petitioned the Supreme Court in case Google loses.)
The case Twitter is fighting could have consequences far beyond the tech giants
These cases don’t just affect tech giants. Google’s case could change how legal protections work across the web. While it’s been framed around the company allegedly promoting terrorist propaganda with one specific kind of recommendation system, the court could rule that “algorithms” covers more general search and sorting systems — and such a ruling would likely reach every app and website, regardless of size.
The Twitter decision is narrower and specifically concerns anti-terrorism law. But the Supreme Court will be deciding how aggressively services must work to remove illegal content — be it on Twitter or elsewhere. In the words of Twitter’s lawyers, are sites “liable for aiding and abetting an act of international terrorism because they have provided generic, publicly available services to billions of users, allegedly including some supporters of ISIS?” Those services are social networks in Twitter’s case, but the answer could plausibly apply to almost any tool that puts people online.
Musk has said he’s largely unconcerned about legal censorship, arguing that democratic governments should get to decide what counts as lawful. But how those laws are interpreted is up to the courts. If he decides Twitter’s case isn’t worth fighting there, the result could be a crackdown on lawful material too, since companies would have an incentive to remove anything that raises too many red flags. (On a purely mercenary level, this could turn out badly for some of Musk’s right-wing fans: there’s growing pressure to classify European far-right groups as terrorists, and that could culminate in a crackdown on anything that smacks of supporting their cause.) And this will almost certainly not be the last such dispute; among other things, Texas and Florida just kicked off a massive legal battle over whether states can ban social media moderation.
It’s the kind of fight anyone who’s invested in online speech might appreciate — and in the coming months, we may find out if Musk really fits that bill.