Anthropic says it has identified thousands of 'fraudulent accounts' taking Claude and 'extracting its capabilities to train and improve their own models'

The question of what data AI models are trained on, and the legitimacy of that data, is a thorny one. Anthropic found itself defending its use of copyrighted material to train its Claude AI in the US last year, a case that eventually resulted in a ruling that its scraping of copyrighted works fell under fair use.

However, the company eventually agreed to pay a $1.5 billion settlement regarding claims that it pirated copies of several authors' works. I mention this because Anthropic has recently taken to X to complain about "industrial-scale distillation attacks" on Claude, perpetrated by what it says are over "24,000 fraudulent accounts" that have generated over 16 million exchanges with the AI chatbot, thereby "extracting its capabilities to train and improve their own model."

Which, as far as Anthropic is concerned, really isn't on. It identifies DeepSeek, Moonshot AI, and MiniMax as the perpetrators of the attacks, and while it says that "distillation can be legitimate", it also declares: "Foreign labs that illicitly distil American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems."
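Distillation, in this context, means training a smaller "student" model to imitate the output distributions of a larger "teacher" model, rather than learning from raw data alone. As a rough illustration only, here is a minimal plain-Python sketch of the core idea: soften the teacher's logits with a temperature and penalize the student for diverging from them. The logit values are made up, and nothing here reflects Anthropic's, DeepSeek's, or anyone else's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature flattens the distribution, exposing more of
    # the teacher's relative preferences between non-top answers.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # Cross-entropy of the student's softened predictions against the
    # teacher's softened targets; minimizing this pulls the student
    # toward the teacher's full output distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # hypothetical teacher logits for one token
student = [3.5, 1.2, 0.1]   # hypothetical student logits
loss = distillation_loss(teacher, student)
```

In an "attack" of the kind Anthropic describes, the teacher's side of this would simply be API responses harvested at scale, which is why the account volume matters more than any single exchange.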

In a further post, Anthropic says: "These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community," before linking out to a news post on the topic.

The post goes into further detail regarding the discovery of the attacks, and also says that Anthropic was able to attribute "each campaign to a specific lab with high confidence through IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners."
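Anthropic doesn't disclose its actual methods, but one of the signals it names, IP address correlation, can be illustrated with a toy heuristic: many nominally independent accounts making requests from the same small block of addresses. The function, log format, and the /24 grouping below are all my own hypothetical choices, not anything from Anthropic's post.

```python
from collections import defaultdict

def cluster_by_subnet(request_log):
    # Group account IDs by the /24 prefix of their source IP.
    # Clusters of "independent" accounts sharing infrastructure
    # are one red flag an abuse team might correlate with others.
    clusters = defaultdict(set)
    for account_id, ip in request_log:
        prefix = ".".join(ip.split(".")[:3])  # crude /24 bucket
        clusters[prefix].add(account_id)
    # Keep only subnets where more than one account appears.
    return {p: accts for p, accts in clusters.items() if len(accts) > 1}

log = [
    ("acct_a", "203.0.113.5"),
    ("acct_b", "203.0.113.9"),
    ("acct_c", "198.51.100.7"),
]
suspicious = cluster_by_subnet(log)
```

A real system would combine this with the other indicators Anthropic mentions (request metadata, infrastructure fingerprints, partner corroboration), which is precisely what makes the de-anonymization observation below worth noting.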

Which, as X user AntonLaVay points out, sounds like Anthropic loudly declaring that it can de-anonymize its users with relative ease. That's perhaps a privacy-related point for another day.

In the meantime, though, it seems that while Anthropic is fine with training its own models on copyrighted data, other companies using Anthropic's work to train their own models is a serious problem.

And while the foreign military angle is certainly an interesting one, I've got a feeling it might not engender the same sort of sympathy as that given to private individuals who claim to have had their work incorporated into the Claude AI behemoth. Just a thought.
