California AI Bill Tells GenAI Startups To Nerd Harder

There’s a stunning degree of fear mongering and lack of humility about what California AI bill SB 942 can or can’t do. Honest conversation about this bill’s limitations is essential to ensuring we don’t pass this ineffective law. But its proponents have obstructed reasoned policy development by injecting panic into that conversation and pretending the bill will solve various multifaceted GenAI abuses that it simply cannot.

This article is a follow-up to my first one, where I explained why SB 942’s forced disclosures and its inclusion of AI-generated text were unworkable. Despite recent amendments that fix those two issues, the remaining requirement that nascent AI companies create “AI detection tools” still makes it a fundamentally flawed bill.

SB 942 vaguely aims to “tackle the issue of GenAI-produced content” by requiring three things:

  1. AI providers must offer a free AI detection tool that identifies content generated using their service.
  2. AI providers must offer users the option to include a conspicuous disclosure on content generated with their service.
  3. AI providers must embed specific metadata into files and the generated content itself, including company name, version number, timestamp, and a unique identifier.
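The bill doesn’t specify how that third requirement would work in practice, but the idea resembles existing provenance schemes such as PNG text chunks. Here is a purely illustrative Python sketch, with a chunk keyword and field layout that are my own assumptions, not anything SB 942 prescribes:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def embed_provenance(png: bytes, provider: str, version: str,
                     timestamp: str, uid: str) -> bytes:
    """Insert a tEXt chunk carrying SB 942-style fields just before IEND."""
    meta = f"provider={provider};version={version};time={timestamp};id={uid}"
    chunk = png_chunk(b"tEXt", b"genai-provenance\x00" + meta.encode("latin-1"))
    iend = png.rfind(b"\x00\x00\x00\x00IEND")  # the final, zero-length IEND chunk
    return png[:iend] + chunk + png[iend:]
```

Note that any such marking survives only as long as the file does: re-encoding, a screenshot, or an ephemeral phone call strips it, which is exactly the gap the examples discussed in this article expose.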

A theme that runs through nearly every aspect of this bill’s journey through the legislature is a mismatch between any given GenAI abuse and how this bill would address it. For example, a June 28 committee analysis clumsily tries to explain the danger of open-source AI and incorrectly implies that SB 942 will help counter it: 

“ChatGPT is an example of an open-sourced tool, meaning it is accessible to the public. Researchers and developers can also access its code and parameters. This accessibility increases transparency, but it has downsides: when a tool’s code and parameters can be easily accessed, they can be easily altered, and open-source tools have the potential to be used for nefarious purposes.”

While OpenAI has released some model weights, ChatGPT is famously not open-source. Here, “open-source” has been conflated with “widely available.” Ironically, this bill applies only to AI businesses, not to situations where actual open-source AI software may be abused.

The analysis continues:

“The need for this bill is further highlighted by various instances and research underscoring the threats posed by unregulated GenAI.”

It goes on to describe three examples of GenAI abuse and again falsely implies this bill is the fix. Let’s talk about why that’s not true. 

Example 1:

In January, voters in New Hampshire received phone calls from an AI-generated voice clone of Joe Biden telling them not to vote in the primary election in order to save their vote for the upcoming general election. Authorities caught the man responsible, and he now faces a $6 million fine and 13 felony charges. In response, the FCC has since made AI robocalls illegal.

The bill’s three main provisions would not prevent this from happening again: 1) It would be impractical for consumers to record robocalls and then upload them to a detection tool. 2) Nefarious actors would opt out of the optional disclosures. 3) Metadata for the fraudulent audio file would be irrelevant in the context of an ephemeral phone call.

Example 2:

A finance worker at a Hong Kong firm was persuaded to transfer $25 million to thieves after being invited to a video call populated by AI-generated deepfakes of several colleagues.

This case shares the same pitfalls as the previous example: 1) An employee who suspects they may be in the middle of a fraudulent video call has no way to upload that call to a detection tool during the session. 2) Bad actors will opt out of disclosures. 3) Metadata will again be irrelevant.

Example 3:  

California high school students have been caught generating nonconsensual nude images of their classmates.

1) Determining the authenticity of this content does nothing to address the harm it creates. 2) Even if a user opts for a disclosure, the content is still extremely harmful. 3) A hidden disclosure embedded into content might help only in cases where no other evidence or context leads to the perpetrator. But this assumes that all companies can implement this still-developing idea across audio, video, and images. Requiring by law that startups implement it is heavy-handed and unrealistic.


The analysis ends with a hypothetical that highlights yet another fundamental flaw:

“In theory, a person who views a video circulating on social media conveying President Joe Biden telling voters not to vote in the primary election could use a provider’s AI detection tool to upload and analyze the video. By examining the embedded machine-readable disclosures, the user could identify the provider’s name, the GenAI system used, and the creation date, concluding that the video was produced by AI. This process would reveal that the video is not genuine, thus, in theory, helping to prevent the spread of misinformation.”

That sounds nice, in theory. But in reality, it’s more complicated than that.

A social media user who uploads content to an AI detection tool may get no useful indication of its authenticity, because the law only requires each AI provider to determine whether its own service was used. The user would then need to upload the content to every AI provider’s tool until they get a hit, or give up before they do. They would also need to understand that each tool has a different level of accuracy across video, image, and audio, and account for the possibility of false positives and false negatives.

One alternative to this rigmarole is for users to do what they’ve always done when inquiring about suspicious online content: google it. 

But seriously, how many different AI detection websites would one have to visit? Is it 5? 15? Maybe 50? I doubt anyone has contemplated this number. AI providers with over 1M monthly users will have to comply with this law, making the user threshold arbitrarily over-inclusive given that there are at least 1,500 GenAI startups poised for growth. And the unlucky startups that already have 1M users will have only four months to develop this unproven technology before the law takes effect in January of next year.
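Per-tool error rates also compound across such a search. As a back-of-the-envelope illustration (the 5% false-positive rate is an assumption for the sake of the arithmetic, not a measured figure), here is how the chance of at least one spurious “AI-generated” verdict grows with the number of independent tools consulted:

```python
# Chance that at least one of n independent detection tools
# returns a false positive, given a per-tool false-positive rate.
def any_false_positive(n_tools: int, fp_rate: float) -> float:
    return 1 - (1 - fp_rate) ** n_tools

for n in (5, 15, 50):
    print(f"{n} tools: {any_false_positive(n, 0.05):.0%}")
# 5 tools: 23%, 15 tools: 54%, 50 tools: 92%
```

Even with generous per-tool accuracy, a user dutifully checking dozens of tools is nearly guaranteed at least one wrong answer.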

By passing this law, California lawmakers would be telling an untold number of GenAI startups to nerd harder.


And now we get to the weird part. 

In the July 2 hearing [2:55hr mark] for which our much-discussed committee analysis was prepared, co-drafter of the bill Tom Kemp again testified in support. 

One statement stood out in particular:

“Recent changes to the bill have addressed many opposition concerns coming out of the privacy committee. For example, a critic just recently wrote that the recent changes have quote, ‘fixed the biggest issues I’ve had with the bill.’ ” 

Who is this unnamed critic? Well, Kemp appears to quote a comment that I made while summarizing several amendments in a LinkedIn post. Read next to my own commentary, the two phrases are almost identical:

“These changes fix the biggest issues I had with the bill, link in comments.”

These two phrases don’t appear anywhere else on the internet, suggesting Kemp did in fact quote me. That’s a shame, because while the amendments did fix two of the three issues I originally pointed out, this short phrasing doesn’t reflect my full thoughts on the bill. SB 942, in my opinion, is still a hot mess.

The amendments didn’t address the issue of compelled AI detection tools. As we’ve already discussed, requiring AI providers to create detection tools does not guarantee that they will actually work well. The fact that we can’t rely on them 100% of the time means there will be false positives and negatives, which have already proved highly damaging. Mandating these tools by law, with no consideration for how inaccurate they may be, is tech solutionism and will only confuse people further about the authenticity of content they encounter.

I regret that my choice of a few short words was used to promote this piece of legislation. To avoid any doubt, I’ve edited my LinkedIn post to read:

“These changes fix [some of] the biggest issues I had with the bill, [but it still has many, many issues.]”


Alan Kyle is a tech policy professional available for hire in AI Governance, Trust & Safety, and Privacy.
