If You’re Going To Defend AI And Whine About Its Critics, You Should Probably Be Honest About Its Actual Harms
I think this recent post by AI industry CEO Matt Shumer is worth a read. In it, he basically explains how quickly LLMs (large language models) are evolving to supplant many developers and programmers, and how that disruption is coming to other industries quickly. He also warns critics of AI to adjust their priors and realize that the AI tools they mocked just six months ago aren’t the ones in use today:
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
While the post is interesting (with the understanding this is somebody making and selling automation software), you might notice something: absolutely nowhere in the blog post does he meaningfully acknowledge the widespread problems with existing AI use. Either because his financial self-interest doesn’t allow for honest acknowledgment of them, or because he simply doesn’t find those aspects all that interesting.
Maybe both.
There’s no mention of how these tools are causing corporations to blow past their already tepid climate goals; no mention of how the affluent, surveillance-obsessed execs dictating the technology’s trajectory have enthusiastically cozied up to fascists; no mention of how Elon Musk and Mark Zuckerberg’s data centers are funneling pollution directly into Black neighborhoods; zero mention of the technofascist plan to leverage AI to decimate unions; no mention of the weird and precarious financial shell games powering the sector.
This New York Times article from a couple weeks ago is probably a better example of this art form. It’s an article, ostensibly about why the public has been so hostile to AI, that takes until the THIRTY-EIGHTH paragraph to actually try to explain some of the reasons. And even then it’s kind of a throwaway paragraph that doesn’t wrestle seriously with any of the criticism:
“The tech executives who are betting their companies’ futures on the triumph of A.I. have many resources to make sure it happens. They can spend even more money to build even more data centers. On the other hand, data centers around the country are increasingly a target of opposition for local residents who dislike the noise, the disruption, the secrecy and the lack of community benefits like jobs.”
Distilling the animosity against AI as just some random grumbling about “noise” and ambiguous “disruption” is a very weird and conscious choice, and I’d argue that this minimization, a reflection of the establishment press’ need to appease and protect access to corporate power, is itself a major contributor to growing hostility toward AI.
The fact that much of the public animosity to AI may be linked to the fact that its salesmen have overtly and enthusiastically enabled fascism just isn’t mentioned. The Times doesn’t think that’s relevant.
The fact that many U.S. billionaires see AI largely as a way to lazily cut corners and obliterate unions (see: its rushed adoption in journalism outlets like the LA Times or Politico) isn’t mentioned either. That the goal for most AI executives is to power this latest technological revolution completely free of any corporate oversight whatsoever? Again, somehow not deemed relevant.
Stories like this cling to a narrative that vaguely implies people are generally angry about AI due to some ambiguous flaw in their “perception,” likely caused by the way AI is being portrayed to the public on the tee vee:
“The A.I. companies seem increasingly alert to a perception problem. This year’s Super Bowl featured A.I.-themed ads that were defensive or just odd. Amazon’s ad showed A.I. proposing ways to kill Chris Hemsworth. The twist at the end: A.I. disarms him with a promised massage.”
And while there certainly are people who are intractably hostile to all aspects of automation and simply refuse to engage with it on any level (including understanding it), a huge swath of the animosity is being driven by historic and justified anger at the extraction class.
That anger and energy is good, and just, and will likely serve us well in the months and years to come. I’d argue it deserves a wide berth, including from tech industry insiders and AI advocates who don’t want to live under a permanent kakistocracy staffed by weird zealots who operate at a third-grade reading level, openly enthusiastic about their grand visions for a permanent mass-surveillance murder autocracy.
Stories like this Times piece will often fixate on the AI “doomer narrative” (SkyNet will kill us all), but downplay that this specific strain of doomerism (very often pushed by wealthy industry insiders) often exists both to misrepresent what LLMs are capable of and to direct attention away from more reality-based criticism the industry doesn’t really want to talk about.
That’s not to say people can’t or shouldn’t be excited by evolutions in automation. But it is to say if you’re an AI advocate and you’re not also talking seriously about the very valid reasons so many people are pissed off, you’re not really talking seriously about the subject at all. You’re in marketing.