When it comes to Section 230, we’ve seen a parade of embarrassingly wrong takes over the years, all sharing one consistent theme: the authors confidently opine on the law despite clearly not understanding it. Each time I think we’ve hit bottom with the most ridiculously wrong take, along comes another challenger.
This week’s is a doozy.
I don’t want to come off as harsh in critiquing these articles, but it’s exasperating. It’s entirely possible for the people writing these articles to educate themselves. In this case in particular, at least two of the authors have published something similar before, have been called out for their factual errors, and have chosen to double down rather than learn. So if the tone of this piece sounds angry, it’s exasperation that the authors are now deliberately choosing to misrepresent reality.
I’ve written twice about Professor Allison Stanger, both times in regards to her extraordinarily confused misunderstandings about Section 230 and how it intersects with the First Amendment. It appears that she has not taken the opportunity in the interim to learn literally anything about the law. Instead, she is now (1) using an association with Harvard’s prestigious Kennedy School to further push utter batshit nonsense disconnected from reality, and (2) sullying others’ reputations in the process.
I first wrote about her when she teamed up with infamous (and frequently wrong) curmudgeon Jaron Lanier to write a facts-optional screed against Section 230 in Wired magazine that got so much factually wrong that it was embarrassing. The key point that Stanger/Lanier claimed was that Section 230 somehow gave the internet an ad-based business model, which is not even remotely close to true. Among other things, that article confused Section 230 with the DMCA (two wholly different laws) and then tossed in a bunch of word salad about “data dignity,” a meaningless phrase.
Even weirder, the beginning of that article seems to complain that not enough content is moderated (too much bad content!), but by the end they’re complaining that too much good content is moderated. Somehow, the article suggests, if we got rid of Section 230, exactly the right kinds of content would be moderated, and somehow advertising would no longer be bad and harassment would disappear. Then they say websites should only moderate based on the First Amendment which would forbid sites from moderating a bunch of the things the article said needed moderating. I dunno, man. It made no sense.
Somehow, Stanger leveraged that absolute nonsense into a chance to appear before a congressional committee, where she falsely claimed that decentralized social media apps were the same thing as decentralized autonomous organizations. They’re wholly different things. She also told the committee that Wikipedia wouldn’t be sued without Section 230 because “their editing is done by humans who have first amendment rights.”
Which is an incredibly confusing thing to say. Humans with First Amendment rights still get sued all the time.
Anyway, Stanger and Lanier are back with a new article, this time published at the Harvard Kennedy School’s Ash Center for Democratic Governance and Innovation. Once again, they are completely and totally getting Section 230 twisted around to make it unrecognizable from reality.
Unfortunately, this time, they’ve dragged along Audrey Tang as a co-author. I’ve met Tang and I have tremendous respect for her. As digital minister of Taiwan, she did some amazing things to use the internet for good in the world of civic tech. She’s also spoken about the importance of the internet on free speech in Taiwan, and the importance of the open World Wide Web on democracy in Taiwan. She’s very thoughtful about the intersection of technology, speech, and law.
But she is not an expert on Section 230 or the First Amendment, and it shows in this piece.
At least this article starts with a recognition of the First Amendment, but it even gets the very basics of that wrong:
The First Amendment is often misunderstood as permitting unlimited speech. In reality, it has never protected fraud, libel, or incitement to violence. Yet Section 230, in its current form, effectively shields these forms of harmful speech when amplified by algorithmic systems. It serves as both an unprecedented corporate liability shield and a license for technology companies to amplify certain voices while suppressing others. To truly uphold First Amendment freedoms, we must hold accountable the algorithms that drive harmful virality while protecting human expression.
Yes, some people misunderstand the First Amendment that way, but no, Section 230 does not shield “these forms of harmful speech.” Also, the “incitement to violence” standard comes from the Brandenburg test, and it is technically “incitement to imminent lawless action,” which is not the same thing as “incitement to violence.” For speech to lose First Amendment protection under Brandenburg, it must be “directed to inciting or producing imminent lawless action” and “likely to incite or produce such action.”
That is an extremely high bar, and nearly no harassment crosses it.
Also, this completely misunderstands Section 230, which does not actually “shield these forms of harmful speech.” If the speech is actually unprotected by the First Amendment, Section 230 does absolutely nothing to “shield” it. All 230 does is place the liability on the speaker, rather than the platform hosting the speech. If the speech actually is unprotected (and, as we’ll get to, this piece plays fast and loose with how the First Amendment actually works), then 230 doesn’t stand in the way at all of holding the speaker liable.
Yet, this piece seems to argue that if we got rid of Section 230 and somehow forced websites to only moderate to the Brandenburg standard, it would somehow magically stop harassment.
The choice before us is not binary between unchecked viral harassment and heavy-handed censorship. A third path exists: one that curtails viral harassment while preserving the free exchange of ideas. This balanced approach requires careful definition but is achievable, just as we’ve defined limits on viral financial transactions to prevent Ponzi schemes. Current engagement-based optimization amplifies hate and misinformation while discouraging constructive dialogue.
To put it mildly, this is delusional. This “third path” is basically just advocating for dictatorial control over speech.
This is a common stance for people with literally zero experience with the challenges of trust & safety and content moderation. Such people seem to think that if only they were put in charge of writing the rules, it would be possible to write perfect rules that stop the bad stuff but leave the good stuff.
That’s not possible. And anyone with any experience in a trust & safety role would know that. Which is why it would be great if non-experts stopped cosplaying as if they understand this stuff.
There’s a reason that we created two separate trust & safety and content moderation games to help people like the authors of this piece understand that it’s not so simple. People are complicated. So many decisions involve subjective calls in murky gray areas that even experts in the field who have spent years adjudicating these things rarely agree on how best to handle different situations.
Our proposed “repeal and renew” approach would remove the liability shield for social media companies’ algorithmic amplification while protecting citizens’ direct speech. This reform distinguishes between fearless speech—which deserves constitutional protection—and reckless speech that causes demonstrable harm. The evidence of such harm is clear: from the documented mental health impacts of engagement-optimized content to the spread of child sexual abuse material (CSAM) through algorithm-driven networks.
Ah, so your problem is with the First Amendment, not Section 230. The idea that only “fearless speech” deserves constitutional protection is a lovely fantasy for law professors, but it’s not the law. And it never has been. You would need to first completely dismantle over a century’s worth of First Amendment jurisprudence before we even get to the question of 230, and even then, repealing 230 wouldn’t do what you want it to do in the first place.
Under the First Amendment, “reckless speech” remains protected, except in some very specific, well-delineated cases. And you can’t just wave your arms and pretend otherwise, even though that’s what Stanger, Lanier, and Tang do here.
That’s not how it works.
And, because the three of them seem to be coming up with simplistically wrong solutions to inherently complex problems, let’s dig in a bit more on the examples they have. First off, CSAM is already extremely illegal and not protected by either the First Amendment or Section 230. So it’s bizarre that it’s even mentioned here (unless you don’t understand how any of this works).
But how about “the documented mental health impacts of engagement-optimized content”? That’s… not actually proven? This has been discussed widely over the last few years, but the vast majority of research finds no such causal links. Yes, you have a few folks who claim it’s proven, but many of the leading researchers in the field, and multiple meta-analyses of the research have found no actual evidence to support a causal link between social media and mental health.
So… then what?
Stanger, Lanier, and Tang seem to take it as given that such harm is there, even as the evidence disagrees with that claim. Do we wave a magic wand and say “well, because these three non-experts insist that social media is harmful to mental health, such content is suddenly… no longer protected under the First Amendment”?
That’s not how the First Amendment works, and it’s not how anything works.
Or, let’s take a more specific example, even though it’s not directly raised in the article. One area of content that many people are very concerned about is “eating disorder content.” Based on what’s in this article, I’m pretty sure that Stanger, Lanier, and Tang would argue that, obviously, eating disorder content should be deemed “harmful” and therefore unprotected under the First Amendment (again, this would require a massive change to the First Amendment, but let’s leave that fantasyland in place for a moment).
Okay, but now what?
Multiple studies have shown that (1) determining what actually is “eating disorder content” is way more difficult than most people think, because the language around it is so ever-changing, to the point that sometimes people argue that photos of gum are “eating disorder content” and (2) perhaps more importantly, simply removing eating disorder content has been shown to make eating disorder issues worse for some users!
Often, this is because eating disorder content is a demand-side issue, where people are looking for it, rather than being driven to eating disorders based on the content. Removing it often just drives those seeking it out into darker corners of the internet where, unlike in the mainstream areas of the internet, they’re less likely to see useful interventions and resources (including help from others who have recovered from eating disorders).
So, what should be done here? Under the Stanger/Lanier/Tang proposal, the answer is to make such content illegal and require websites to block it, even though that likely does even more harm to vulnerable people.
And that’s ignoring the whole First Amendment problem. Repeatedly throughout the article, Stanger/Lanier/Tang handwave around all this by suggesting that you can create a new law that concretely determines what content is allowed (and must be carried) and what content is not.
But it doesn’t work that way in either direction. The law can no more compel websites to keep up speech they don’t want to host than it can force them to take down content the three authors think is “harmful” but that does not meet the existing tests for what is unprotected under the First Amendment.
Given how thoroughly the piece misunderstands speech law, it will not surprise you that the authors trot out the “fire in a crowded theater” line, which is the screaming siren of “this was written by people unfamiliar with the First Amendment.”
Just as someone shouting “fire” in a crowded theater can be held liable for resulting harm, operators of algorithms that incentivize harassment for engagement should face accountability.
Earlier in the piece, they pointed (incorrectly) to the Brandenburg test on incitement to imminent lawless action. Given that, you might think that someone might have pointed out to them that Brandenburg effectively rejected Schenck, the case in which the “fire in a crowded theater” line was uttered as dicta (i.e., not controlling or meaningful). But, nope. They pretend it’s the law (it’s not), just like they pretend the Brandenburg standard can magically be extended to harassment (it cannot).
The piece concludes with even more nonsense:
Section 230 today inadvertently circumvents the First Amendment’s guarantees of free speech, assembly, and petition. It enables an ad-driven business model and algorithmic moderation that optimize for engagement at the expense of democratic discourse. Algorithmic amplification is a product, not a public service. By sunsetting Section 230 and implementing new legislation that holds proprietary algorithms accountable for demonstrable harm, we can finally extend First Amendment protections to the digital public square, something long overdue.
Literally every sentence of that paragraph is wrong. Harvard should be ashamed for publishing something that would flunk a first-year Harvard Law class. Section 230 does nothing to “circumvent” the First Amendment. The First Amendment does not guarantee free speech, assembly, and petition on private property. It simply restrains the government from suppressing them. Private property owners still have the editorial discretion to do as they wish, which is supported by Section 230.
As for the claim that you can magically apply liability to “algorithmic amplification” and not have that violate the First Amendment, that’s also wrong. We discussed that just last week, so I’m not going to rehash the entire argument. But algorithmic amplification is itself speech, and it is very much protected under the First Amendment as an expression of opinion: “we think you’d like this.” You can’t just magically move that outside of the First Amendment. That’s not how it works.
The point is that this piece is not serious. It does not grapple with the realities of the First Amendment. It does not grapple with the impossibilities of content moderation. It does not grapple with the messiness of societal level problems with no easy solution. It ignores the evidence on social media’s supposed harms.
It sets up a fantasyland First Amendment that does not exist, it misrepresents what Section 230 does, it mangles the concept of “harms” in the online speech context, and it never says what the simple “rules” the authors think they can write to get around all of that would actually be.
It’s embarrassing how disconnected from reality the article is.
Yet, Harvard’s Kennedy School was happy to put it out. And that should be embarrassing for everyone involved.