AI’s biggest problem isn’t intelligence. It’s implementation

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

The AI ‘arms race’ may be more of an ‘arm-twist’

The big AI companies tell us that AI will soon remake every aspect of business in every industry. Many of us are left wondering when that will actually happen in the real world, when the so-called “AI takeoff” will arrive. But because there are so many variables, so many different kinds of organizations, jobs, and workers, there’s no satisfying answer. In the absence of hard evidence, we rely on anecdotes: success stories from founders, influencers, and early adopters posting on X or TikTok.

Economists and investors are just as eager to answer the “when” question. They want to know how quickly AI’s effects will materialize, and how much cost savings and productivity growth it will generate. Policymakers are focused on the risks: How many jobs will be lost, and which ones? What will the downstream effects be on the social safety net?

Business schools and consulting firms have turned to research to answer these questions. One of the most consequential recent efforts was a 2025 MIT study, which found that despite spending between $30 billion and $40 billion on generative AI, 95% of large companies had seen “no measurable P&L [profit and loss] impact.”

More recent research paints a somewhat rosier picture. A recent study from the Wharton School found that three out of four enterprise leaders “reported positive returns on AI investments, and 88% plan to increase spending in the next year.”

My sense is that the timing of AI takeoff is hard to grasp because adoption is so uneven and depends a lot on the application of the AI. Software developers, for example, are seeing clear efficiency gains from AI coding agents, and retailers are benefiting from smarter customer-service chatbots that can resolve more issues automatically.

It also depends on the culture of the organization. Companies with clear strategies, good data, some PhDs, and internal AI enthusiasts are making real progress. I suspect that many older, less tech-oriented companies remain stuck in pilot mode, struggling to prove ROI.

Some studies have shown that in the initial phases of deployment, human workers must invest a lot of time correcting or training AI tools, which severely limits net productivity gains. Others show that in AI-forward organizations, workers do see substantial productivity improvements, but because of that, they become more ambitious and end up working more, not less.

The MIT researchers included an interesting disclaimer on their research results. Their sobering findings, they noted, did not reflect the limitations of the AI tools themselves, but rather the fact that organizations often need years to adapt their people and processes to the new technology.

So while AI companies constantly hype the ever-growing intelligence of their models, what ultimately matters is how quickly large organizations can integrate those tools into everyday work. The AI revolution is, in this sense, more of an arm-twist than an arms race. The road to ROI runs through people and culture. And that human bottleneck may ultimately determine when the AI industry, and its backers, begin to see returns on their enormous investments.

New benchmark finds that AI fails to do most digital gig work

AI companies keep releasing smarter models at a rapid pace. But the industry’s primary way of proving that progress—benchmarks—doesn’t fully capture how well AI agents perform on real-world projects. A relatively new benchmark called the Remote Labor Index (RLI) tries to close that gap by testing AI agents on projects similar to those given to remote contractors. These include tasks in game development, product design, and video animation. Some of the assignments, based on actual contract jobs, would take human workers more than 100 hours to complete and cost over $10,000 in labor.

Right now, some of the industry’s best models don’t perform very well on the RLI. In tests conducted late last year, AI agents powered by models from the top AI developers including OpenAI, Anthropic, Google, and others could complete barely any of the projects. The top-performing agent, powered by Anthropic’s Opus 4.5 model, completed just 3.5% of the jobs. (Anthropic has since released Opus 4.6, but it hasn’t yet been evaluated on the RLI.)

The test puts the question of the current applicability of agents in a different light, and may temper some of the most bullish claims about agent effectiveness coming from the AI industry. 

Silicon Valley’s pesky ‘principles’ re-emerge, irking the White House and Pentagon

The Pentagon and the White House are big mad at the safety-conscious AI company Anthropic. Why? Because Anthropic doesn’t want its AI being used for the targeting of humans by autonomous drones, or for the mass surveillance of U.S. citizens.

Anthropic now has a $200 million contract allowing the use of its Claude chatbot and models by federal agency workers. It was among the first companies to get approval to work with sensitive government data, and the first AI company to build a specialized model for intelligence work. But the company has long had clear rules in its user guidelines that its models aren’t to be used for harm. 

The Pentagon believes that after paying for the technology it should be able to use it for any legal application. But acceptable use for AI is different from that for traditional software. AI’s potential for autonomy makes it more dangerous by nature, and its risks increase the closer it gets to the battlefield.

The disagreement, if not resolved, could potentially jeopardize Anthropic’s contract with the government. But it could get worse. Over the weekend, the Pentagon said it was considering classifying Anthropic as a “supply chain risk,” which would mean the government views Anthropic as roughly as trustworthy as Huawei. Government contractors of all kinds would be pushed to stop using Anthropic.

Anthropic’s limits on certain defense-related uses are laid out in its Constitution, a document that describes the values and behaviors it intends its models to follow. Claude, it says, should be a “genuinely good, wise, and virtuous agent.” “We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position.” To critics in the Trump administration, that language translates to a mandate for wokeness.

The whole dust-up harkens back to 2018, when Google dropped its Project Maven contract with the government after employees revolted against Google technology being used for targeting humans in battle. Google still works with the government, and has softened its ethical guidelines over the years.

The truth is, tech companies don’t stand on principle like they used to. Many have settled into a kind of patronage relationship with the current regime, a relatively inexpensive way to avoid MAGA backlash while keeping shareholders satisfied. Anthropic, in its way, seems to be taking a different course, and it may suffer financially for it. But, in the longer term, the company could earn some respect, trust, and goodwill from many consumers and regulators. For a company whose product is as powerful and potentially dangerous as consumer AI, that could count for a lot. 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
