OpenAI is a drama company. Will that hurt its IPO chances? And Anthropic tries to get ahead of the cyber risks its own models are accelerating

Hello and welcome to Eye on AI. In this edition…lots and lots of OpenAI news…Anthropic secures more compute from Google as its current capacity is strained…Google DeepMind releases its latest open weight Gemma model…Anthropic says AI has emotions (sort of)…and Google DeepMind shows AlphaEvolve can help solve real world enterprise problems.

OpenAI dominated the news over the past few days. In fact, so much has happened related to the company that it’s hard to know where to start. It’s also hard to discern which OpenAI development will prove, with the benefit of hindsight, to be the most significant. I’ll cover the OpenAI news in a sec.

But first, I want to highlight three pieces of news from Anthropic because I think, in the long-run, they might matter more than any of the OpenAI stuff.

Anthropic today unveiled what it is calling Project Glasswing, a coalition of major technology companies and cybersecurity players dedicated to securing the world's most critical software before AI-enabled hackers wreak havoc around the globe. The coalition partners have been given access to a special cybersecurity-focused preview of Anthropic's yet-to-be-released Mythos model, in the hope that Mythos can discover zero-day vulnerabilities and other flaws so they can be patched before production versions of Mythos, and of similarly cyber-capable models from OpenAI and Google, debut. My colleague Beatrice Nolan, who broke the news of Mythos' existence a few weeks ago, has the story on Project Glasswing here.

Project Glasswing is further evidence of the growing concern among AI labs, cybersecurity companies, and government officials that we are entering an era of unprecedented and potentially catastrophic cybersecurity threats, driven by the increased coding capabilities of recent AI models. The New York Times has more on that evolving risk in this story here.

Anthropic also announced that it would no longer allow people to use their monthly Claude subscriptions to power third-party agentic harnesses, such as the virally popular OpenClaw and its progeny. Now, in order to use Claude to power these tools, people will need to subscribe to Anthropic's API and pay per-token usage fees, as opposed to drawing on all-you-can-consume monthly subscriptions. Anthropic has shown in recent weeks that it does not have the computing capacity to handle its skyrocketing adoption, especially with agentic tools like OpenClaw. (Anthropic has also imposed strict usage caps during peak hours that have annoyed many users.) In part to address this compute crunch, Anthropic announced an expanded partnership with Google and Broadcom to access data centers running Google's TPU chips coming online by 2027. (More on that below.) In the meantime, Anthropic's decision may have a big impact on how AI agents get used, perhaps slowing adoption, or perhaps driving many more people to start using open-source models as the brains behind these agents.

Anthropic also said it has achieved an annual revenue "run rate" of $30 billion. The figure implies a 58% revenue surge in March alone. The number is also higher than the $25 billion annual revenue run rate OpenAI reported in February. (Anthropic and OpenAI don't use the same method to calculate their run rates, so it is a bit of an apples-to-oranges comparison.) But it clearly shows that Anthropic is on a tear, and that matters, especially in light of the other news coming out of OpenAI.

Ok, so without further ado, the OpenAI stuff:

OpenAI likes ‘constructive’ media coverage, so it’s buying some

The OpenAI development that probably matters least, but which nonetheless had everyone in the media talking, is OpenAI’s decision to buy the year-old vodcaster TBPN (Technology Business Programming Network) for an amount that sources told the Financial Times was in “the low hundreds of millions.” OpenAI, in announcing the deal, said that it’s “become clear the standard communications playbook just doesn’t apply to us,” and that the company needed “to help create a space for a real, constructive conversation about the changes AI creates—with builders and people using the technology at the center.”

The word "constructive" here is doing a lot of work. While OpenAI insisted that TBPN would retain its editorial independence, many are skeptical, noting, among other things, that the video broadcast operation will report to Chris Lehane, the bare-knuckled political operator who serves as OpenAI's policy and communications chief. This seems like the latest, and perhaps most extreme, case of a tech company trying to control the narrative by "going direct"—using social media and in-house content to reach audiences while bypassing traditional journalistic outlets, which are often more critical and tend to ask the kinds of questions executives don't want to answer.

Altman’s honesty questioned

If it weren’t already clear why OpenAI wants to own the messenger and dislikes traditional journalism, then the New Yorker underscored the rationale by publishing a lengthy profile of OpenAI CEO Sam Altman that was the result of a year-and-a-half of investigative reporting by Ronan Farrow and Andrew Marantz. The piece was headlined “Sam Altman may control our future—can he be trusted?” Reading the piece, it is hard to come away with an answer other than: no.

While there are a few new tidbits in the story—the reporters, for instance, obtained hundreds of pages of notes that Dario Amodei, now the Anthropic CEO, made on his interactions with Altman while Amodei was a top OpenAI researcher—many of the facts in the story have already been reported elsewhere. Nonetheless, there is impact in seeing them all assembled in one place. The overriding impression of Altman from Farrow and Marantz's story is of a borderline sociopath: an executive with no compunction about lying to get ahead. The piece questions how sincere Altman is in his commitment to anything beyond his own pursuit of power—and in particular asks whether Altman actually cares about AI safety, or whether his rhetoric on that subject is simply a convenient pose, used first to win early funding for OpenAI from Elon Musk, and later to attract and retain talented AI researchers and keep regulators at bay.

Potential IPO investors don't generally love companies run by pathological liars. Nor do they like companies whose top executive ranks are constantly being reshuffled. Yet last week OpenAI announced another executive shakeup. It said Fidji Simo, who holds the title "CEO of AGI Deployment" and is in charge of all the company's commercial products and operations, will take several weeks of medical leave to deal with a chronic health condition. In her absence, Greg Brockman, who had been largely focused on the company's AI infrastructure buildout, will be put in charge of product.

But then OpenAI also announced a more permanent management shuffle. The company said that Brad Lightcap, its long-serving chief operating officer, is moving to a new role coordinating “special projects,” including a joint venture with private equity firms that will look to use AI to push efficiencies into older, non-tech companies. Denise Dresser, the former Slack CEO recently hired by OpenAI to serve as chief revenue officer, is taking on most of Lightcap’s previous duties, with oversight of the other business and operations units being split between Jason Kwon, OpenAI’s chief strategy officer, and CFO Sarah Friar.

Reported divisions over spending and IPO plans

Meanwhile, a story surfaced suggesting that Friar may not be secure in her role either. The Information reported that Friar has privately disagreed with Altman's timeline for an IPO and voiced concerns about the company's $600 billion in spending commitments over the next five years. Citing a person who had spoken to Friar about her views, the publication said Friar is unsure whether that huge amount of spending is necessary, or whether OpenAI will be able to grow revenue fast enough to support it.

The publication said that Friar had voiced these concerns prior to OpenAI's $122 billion fundraise—which was announced last week and valued OpenAI at $852 billion post-money. It said it was unable to determine whether her position had changed in light of that new money. But it cited another unnamed source as saying Friar had been left out of a meeting with an OpenAI investor in which major AI infrastructure spending plans were discussed. OpenAI gave the publication a statement saying Friar and Altman "are fully aligned that durable access to compute is at the core of OpenAI's strategy and a key differentiator as we scale."

Looking at all the developments together, one could be forgiven for wondering if the wheels are in danger of coming off the world’s best-known AI company. At the very least, there are serious questions looming over OpenAI’s ability to go for an IPO this year. And, in the absence of an IPO, it’s unclear how much longer the company can continue to tap the private market. If OpenAI implodes, or even if it merely has a down round, that could threaten the entire AI ecosystem. Of course, other key players in that ecosystem, such as Nvidia, know this too. That’s why they are likely to continue trying to prop OpenAI up.

In the midst of all of this, OpenAI published a white paper calling for a sweeping new industrial strategy for the U.S. in the age of artificial superintelligence, which it says is now coming into view. (You can read more on that from my colleague Sharon Goldman here.) Many perceived the document as, at least in part, an attempt by OpenAI to get ahead of a mounting anti-AI backlash that is gaining bipartisan support across the country. We'll cover that in the news section below.

With that, here’s more AI news.

This story was originally featured on Fortune.com
