How AI Hijacked the Venezuela Story

Photo-Illustration: Intelligencer; Photos: @WallStreetApes/X

There’s a familiar story about AI disinformation that goes something like this: With the arrival of technology that can easily generate plausible images and videos of people and events, the public, unable to reliably tell real from fake media, is suddenly at far greater risk of being misled and disinformed. In the late 2010s, when face-swapping tools started to go mainstream, this was a common prediction about “deepfakes,” alongside a proliferation of nonconsensual nude images and highly targeted fraud. “Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video,” wrote the New York Times in 2019. “Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.”

Creating deepfakes with modern AI tools is now so trivial that the term feels like an anachronistic overdescription. Many of the obvious fears of generate-anything tech have already come true. Regular people — including children — are being stripped and re-dressed in AI-generated images and videos, a problem that has trickled from celebrities and public figures to the general public courtesy of LLM-based “nudification” tools as well as, recently, X’s in-house chatbot, Grok. Targeted and tailored fraud and identity theft are indeed skyrocketing, with scammers now able to mimic the voices and even faces of trusted parties quickly and at low cost.

The story of AI disinformation, though — an understandable if revealing fixation of the mainstream media beyond the LLM boom — turned out to be a little bit fuzzier. It’s everywhere, of course: We no longer need to “imagine” the NYT’s doctored-video scenario in part because posting such videos is now part of the official White House communication strategy. Nearly every major news event now sees a flurry of realistic, generated videos redepicting it in altered terms. But the reality of news-adjacent deepfakes is more complicated than the old hypothesis that they’d be deployed by shadowy actors to mislead the vulnerable masses and undermine democracy, and in some ways more depressing.

As we begin 2026 with the American kidnapping of Venezuelan president Nicolás Maduro, the future of disinformation looks more like this post that was widely shared and reposted by Elon Musk:

Here we have fake clips of people in Venezuela celebrating Maduro’s capture, depicting something plausible: individuals relieved or ecstatic about a corrupt and authoritarian leader being removed. (In reality, domestic public celebration has been muted due to uncertainty about the future and fear of the administration still in power.) Narrowly, it’s a realization of the deepfake-disinformation thesis, an example of realistic media created with the intent and potential to mislead people who might believe it’s real.

The way it actually moved through the world tells a weirder story, though. Shadowy actors did create fake videos to advance a particular narrative in a chaotic moment, reaching millions of people. But the spread of these videos didn’t hinge, as the default AI-disinformation hypothesis suggested, on misunderstanding and an inability of curious but well-meaning people to discern the truth. Instead, they were passed around by a powerful person with a large following who is perhaps the best equipped in the entire world to know that they weren’t real — a politically connected guy who runs an AI company connected to a social network, with a fact-checking system, where he’s promoting posts inviting people to use his image generator to change Maduro’s captivity outfits — for an audience of people who do not care whether they are. In other words, they provided texture for a determined narrative, stock-image illustrations for wishful ideological statements, and something to find and share for audiences who were expecting or hoping to see something like this anyway.

For a clear example of an earnestly misled poster, we’re left with Grok, which took a break from automating CSAM to respond to skeptical users by saying the video “shows real footage of celebrations in Venezuela following Maduro’s capture” before, after being told to “take a proper look,” suggesting that the video “appears to be manipulated,” albeit in a completely incorrect way. (On X, the Musk-sanctioned spread of videos like this is less like a disinformation contagion running rampant and destroying a deliberative system than a news network simply deciding to show a fabricated video and refusing to admit that it’s fake.)

In contrast to the focused deception of a romance scam, or the targeted harassment or blackmail of an AI nudification, content like this isn’t valuable for its ability to persuasively deceive people but rather as ambient ideological slop for backfilling a desired political reality. Rather than triggering or inciting anything, it instead slotted in right next to other more familiar and less technologically novel forms of social-media propaganda — like simply saying a video portrays something it doesn’t and getting a million views for every thousand on the eventual debunkings.

I don’t mean to brush off the dangers of AI video generation for the public’s basic understanding of the world or suggest that scenarios in which people are misled in politically significant ways by fake videos are impossible or even unlikely. But today, after the proliferation of omnipurpose deepfakelike technology came faster than almost anyone expected, the early result seems less like a fresh rupture with reality than an escalation of past trends and a fresh demonstration of the power of social networks, where ideologically isolated and incurious groups are now slightly more able to fortify their aesthetic and political experiences with consonant and infinite background noise under the supervision of people for whom documentary truth is at the very most an afterthought.

In the short term, the ability to generate documentary media by prompt hasn’t primarily resulted in chaos or “trickery” but rather previewed a perverse form of order in which persuasion is unimportant, disinformation is primarily directed at ideological allies, and everyone gets to see — or read — exactly what they want.
