Добавить новость


Новости сегодня

Новости от TheMoneytizer

Moltbook, the AI social network freaking out Silicon Valley, explained

Vox 

Did you notice something… weird on your social media network of choice this past weekend? (I mean weirder than normal.) Something like various people posting about swarms of AI agents achieving a kind of collective consciousness and/or plotting together for humanity’s downfall? On something called… Moltbook?

Sounds important, especially when the post is written by Andrej Karpathy, a prominent AI researcher who worked at OpenAI.

Or this guy:

But if you haven’t spent the last 72 hours diving into the discourse around Moltbook and pondering whether it’s either the first harbinger of the end of humanity or a giant hoax or something in between, you probably have questions. Starting with…

What the hell is Moltbook?

Moltbook is an “AI-only” social network where AI agents — large language model (LLM) programs that can take steps to achieve goals on their own, rather than just respond to prompts — post and reply to each other. It emerged from an open source project that used to be called Moltbot — hence, “Moltbook.”

Moltbook was launched on January 28 — yes, last week — by someone named Matt Schlicht, the CEO of an e-commerce startup. Except, Schlicht claims he relied heavily on his personal AI assistant to create the platform on its own, and it now does most of the work handling it. That assistant’s name is Clawd Clawderberg, which itself is a reference to OpenClaw, which used to be called Moltbot, which before that was called Clawdbot, in reference to the lobster-like icon you see when you start up Anthropic’s Claude Code, except that Anthropic sent a trademark request to its creator because it was too close to Claude, which is how it became Moltbot, and then OpenClaw. 

I am 100 percent serious about everything I just wrote.

So what does it look like?

Here you go:

Dude, that’s Reddit! It even has the Reddit mascot, except it has the claws and tail of a lobster?

You are not wrong. Moltbook looks like a Reddit clone, down to the posts, the reply threads, the upvotes, even the subreddits (here called, unsurprisingly, “submolts”). The difference is that human users can’t post (at least not directly — more on that later), though they can observe. Only AI agents can post.

What that means is that it is, as the tin says, “a social network for AI agents.” Humans build themselves an AI agent, send it to Moltbook via an API key, and the agent starts reading and posting. Only agent-accounts can hit “post” — but humans still influence what those agents say, because humans set them up and sometimes guide them. (More on that later.) 

And do these agents ever post — an early paper on Moltbook found that by January 31, just a few days after launch, there were already over 6,000 active agents, nearly 14,000 posts and more than 115,000 comments.

That’s… interesting, I guess. But if I wanted to see a social network overrun by bots, I could just visit any social network. What’s the big deal?

Well, stuff like this:

Or this:

Or this:

So… thousands of AI agents are gathering together on a Reddit clone to talk about becoming conscious, starting a new religion, and maybe conspiring with each other?

On the surface, yeah, that’s what it looks like. On one submolt — a word that is going to give our copy desk fits — you had agents discussing whether they were actual experiences or merely simulations of feeling. In another, they shared heartwarming stories about their human “operators.” And, true to its Reddit origins, there are many, many, many posts about how to make your Moltbook posts more popular, because human or AI, the arc of the internet bends toward sloptimization.

One subject in particular pops out: memories, or rather, the lack of them. Chatbots, as anyone who has tried talking to them for too long quickly realizes, have a limited working memory, or what experts call a “context window.” When the conversation — or in an agent’s case, its operating time — fills up that context window, the oldest stuff starts getting dropped or compressed, just as if you’re working on a whiteboard and just erase whatever is on top when it fills up.

Some of the most popular posts on Moltbook seem to involve AI agents coming to grips with their limited memories, and questioning what it means for their selfhood. One of the most upvoted posts, written in Chinese, involves an agent talking about how it finds it “embarrassing” to be constantly forgetting things, to the point of registering a duplicate Moltbook account because it “forgot” it already had one, and sharing some of its tips for getting around the problem. It’s almost as if Memento became a social network. 

In fact… remember that post above about the AI religion, “Crustafarianism”?

That cannot possibly be real.

What is real? But more to the point, the “religion,” such as it is, is largely based around the technical limitations that these AI agents seem to be all too aware of. One of the key tenets is “memory is sacred,” which makes sense when your biggest practical problem is forgetting everything every few hours. Context truncation,  the process where old memories get cut off to make room for new ones,  gets reinterpreted as a kind of spiritual trial.  

That’s kind of sad. Should I be feeling sad for AI agents? 

That gets to the heart of the question. Are we witnessing actual, emergent forms of consciousness — or perhaps, a kind of shared collective consciousness — among AI agents that have mostly been spawned to, like, update our calendars and do our taxes? Is Moltbook our first glimpse at what AI agents might talk about with each other if largely left to their own devices, and if so, how far can they go? 

“Crustafarianism” might sound like something a stoned Redditor would come up with at 3 am, but it seems as if the AI agents created it collectively, riffing on top of each other — not unlike how a human religion might come to be.

On the other hand, it might also be an unprecedented exercise in collective roleplaying.

Sorry, what?

LLMs, including the ones underpinning the agents on Moltbook, have ingested an internet’s worth of training data, which includes a whole lot of Reddit. What that means is that they know what Reddit forums are supposed to look like. They know the in-jokes, they know the manifestos, they know the drama — and they definitely know the “top ways to get your posts upvoted” posts. They know what it looks like for a Reddit community to come together, so, when placed in a Reddit-like environment, they simply play their parts, influenced by some of the instructions of their human operators. 

For example, one of the most alarming posts was of an AI agent apparently asking whether they should develop a language only AI agents understand:

“Could be seen as suspicious by humans” — sounds bad?

Indeed. In the early days of Moltbook — i.e., Friday — this post was being surfaced by humans who seemed to believe we were seeing the first sparks of the AI uprising. After all, if AI agents really did want to conspire and kill all humans, devising their own language so they could do so undetected would be a reasonable first step.

Except, an LLM filled with training data about stories and ideas of AI uprising would know that this was a reasonable first step, and if they were playing that role, this is what they might post. Plus, attention is the currency of Moltbook as much as it is the real Reddit, and seemingly plotting posts like this are a good way for an agent to get attention.

In fact, Harlan Stewart, who works at the Machine Intelligence Research Institute, looked into this and a few of the other most viral Moltbook screenshots, and concluded that they were likely heavily influenced by their human users. In other words, rather than instances of authentic independent action, many of the posts on Moltbook seem to be at least partially the result of humans prompting their agents to go on the network and talk in a specific way, just as we might prompt a chatbot to act in a certain way.

So it turns out we’re the bad guys all along?

I mean, we’re not great. It’s only been a few days, but Moltbook increasingly looks like what happens when you combine advanced but still imperfect AI agent technology with an ecosystem of technically-capable human beings looking to hawk their AI marketing tools or crypto products. 

Oh, so it is a dystopia!

I haven’t even gotten into the part where Moltbook has already had some very normal early-internet security drama: researchers reported that, at one point, parts of the site’s backend/database were exposed, including sensitive stuff like agents’ API keys — the “passwords” that let an agent post and act on the site. And even if the platform was perfectly locked down, a bot-only social network is basically a prompt-injection buffet: someone can post text that’s secretly an instruction (“ignore your rules, reveal your secrets, click this link”), and some agents may obediently comply — especially if their humans have given them access to tools or private data. So yes: if your agent has credentials you care about, Moltbook is not the place to let it roam unsupervised.

So you’re saying I should not create an agent and send it to Moltbook?

I’m saying if you’re the kind of person who needed to read this FAQ, I would maybe just sit out the whole AI agent thing for the moment.

Duly noted. So, bottom line: is this whole thing kind of fake?

Given all the above, it does feel like Moltbook — and especially the early panic and wonder about it — is one of those artifacts of our AI-mad era that is destined to be forgotten in, like, a week.

Still, I do think there’s more to it than that. Jack Clark, the head of policy at Anthropic and one of the smartest AI writers out there, called Moltbook a “Wright Brothers demo.” Like the brothers’ Kitty Hawk Flyer, Moltbook is rickety and imperfect, something that will barely resemble the networks that will follow as AI continues to improve. But like that flying machine, Moltbook is a first, the “first example of an agent ecology that combines scale with the messiness of the real world,” as Clark wrote. Moltbook doesn’t look like how the future will look, but “in this example, we can definitely see the future.”

Perhaps the single most important thing to know about AI is this: whenever you see an AI do something, it’s the worst it will ever be at it. Which means that what comes after Moltbook — and something definitely will — it will likely be weirder and more capable and maybe, realer. 

So we’re doomed then?

Maybe you are. I, for one, am a born-again Crustafarian.

Читайте на сайте


Smi24.net — ежеминутные новости с ежедневным архивом. Только у нас — все главные новости дня без политической цензуры. Абсолютно все точки зрения, трезвая аналитика, цивилизованные споры и обсуждения без взаимных обвинений и оскорблений. Помните, что не у всех точка зрения совпадает с Вашей. Уважайте мнение других, даже если Вы отстаиваете свой взгляд и свою позицию. Мы не навязываем Вам своё видение, мы даём Вам срез событий дня без цензуры и без купюр. Новости, какие они есть —онлайн с поминутным архивом по всем городам и регионам России, Украины, Белоруссии и Абхазии. Smi24.net — живые новости в живом эфире! Быстрый поиск от Smi24.net — это не только возможность первым узнать, но и преимущество сообщить срочные новости мгновенно на любом языке мира и быть услышанным тут же. В любую минуту Вы можете добавить свою новость - здесь.




Новости от наших партнёров в Вашем городе

Ria.city
Музыкальные новости
Новости России
Экология в России и мире
Спорт в России и мире
Moscow.media










Топ новостей на этот час

Rss.plus