California’s Draft AI Law Would Protect More than Just People

Few places in the world have more to gain from a flourishing AI industry than California. Few also have more to lose if the public’s trust in the industry were suddenly shattered.

In May, the California Senate passed SB 1047, a piece of AI safety legislation, in a vote of 32 to one, helping ensure the safe development of large-scale AI systems through clear, predictable, common-sense safety standards. The bill is now slated for a state assembly vote this week and, if signed into law by Governor Gavin Newsom, would represent a significant step in protecting California citizens and the state’s burgeoning AI industry from malicious use.


Late Monday, Elon Musk shocked many by announcing his support for the bill in a post on X. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he wrote. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.”

The post came days after I spoke with Musk about SB 1047. Unlike other corporate leaders who often waver, consulting their PR teams and lawyers before taking a stance on safety legislation, Musk was different. After I outlined the importance of the bill, he requested to review its text to ensure its fairness and lack of potential for abuse. The next day he came out in support. This quick decision-making process is a testament to Musk’s long-standing advocacy for responsible AI regulation.

Last winter, Senator Scott Wiener, the bill’s author, reached out to the Center for AI Safety (CAIS) Action Fund for technical suggestions and cosponsorship. As CAIS’s founder, I see ensuring that transformative technologies serve public safety as the cornerstone of our mission. To preserve innovation, we must anticipate potential pitfalls, because an ounce of prevention is worth a pound of cure. Recognizing SB 1047’s groundbreaking nature, we were thrilled to help and have advocated for its adoption ever since.


Targeted at the most advanced AI models, the bill requires large companies to test for hazards, implement safeguards, ensure shutdown capabilities, protect whistleblowers, and manage risks. These measures aim to prevent cyberattacks on critical infrastructure, the bioengineering of viruses, and other malicious activities with the potential to cause widespread destruction and mass casualties.

Anthropic recently warned that AI risks could emerge in “as little as 1-3 years,” disputing critics who view safety concerns as imaginary. Of course, if these risks are indeed fictitious, developers shouldn’t fear liability. Moreover, developers have pledged to tackle these issues, aligning with President Joe Biden’s recent executive order, reaffirmed at the 2024 AI Seoul Summit.

Enforcement is lean by design, allowing California’s Attorney General to act only in extreme cases. There are no licensing requirements for new models, nor does it punish honest mistakes or criminalize open sourcing—the practice of making software source code freely available. It wasn’t drafted by Big Tech or those focused on distant future scenarios. The bill aims to prevent frontier labs from neglecting caution and critical safeguards in their rush to release the most capable models.


Like most AI safety researchers, I am in large part driven by a belief in AI’s immense potential to benefit society, and I am deeply concerned about preserving that potential. As a global leader in AI, California is too. This shared concern is why state politicians and AI safety researchers are enthusiastic about SB 1047, as history tells us that a major disaster, like the nuclear one at Three Mile Island on March 28, 1979, could set a burgeoning industry back decades.

Regulatory bodies responded to the partial nuclear meltdown by overhauling nuclear safety standards and protocols. These changes increased the operational costs and complexity of running nuclear plants, as operators invested in new safety systems and complied with rigorous oversight. The regulatory challenges made nuclear energy less appealing, halting its expansion over the next 30 years.

Three Mile Island led to a greater dependence on coal, oil, and natural gas. It is often argued that this was a significant lost opportunity to advance toward a more sustainable and efficient global energy infrastructure. While it remains uncertain whether stricter regulations could have averted the incident, it is clear that a single event can profoundly impact public perception, stifling the long-term potential of an entire industry.

Some people will view any government action on industry with suspicion, considering it inherently detrimental to business, innovation, and a state or country’s competitive edge. Three Mile Island demonstrates this perspective is short-sighted, as measures to reduce the chances of a disaster are often in the long-term interest of emerging industries. It is also not the only cautionary tale for the AI industry.

When social media platforms first emerged, they were largely met with enthusiasm and optimism. A 2010 Pew Research Center survey found that 67% of American adults who used social media believed it had a mostly positive impact. Futurist Brian Solis captured this ethos when he proclaimed, “Social media is the new way to communicate, the new way to build relationships, the new way to build businesses, and the new way to build a better world.”

He was three-fourths correct.

Driven by concerns over privacy breaches, misinformation, and mental health impacts, public perception of social media has flipped, with 64% of Americans viewing it negatively. Scandals like Cambridge Analytica eroded trust, while fake news and polarizing content highlighted social media’s role in societal division. A Royal Society for Public Health study showed 70% of young people experienced cyberbullying, with 91% of 16-24-year-olds stating social media harms their mental wellbeing. Users and policymakers around the globe are increasingly vocal about needing stricter regulations and greater accountability from social media companies.

This did not happen because social media companies are uniquely evil. Like other emerging industries, the early days were a “wild west” in which companies rushed to dominate a burgeoning market while government regulation lagged behind. Platforms with addictive, often harmful content thrived, and we are now all paying the price. The companies—increasingly mistrusted by consumers and in the crosshairs of regulators, legislators, and courts—are paying it too.

The optimism surrounding social media wasn’t misplaced. The technology did have the potential to break down geographical barriers and foster a sense of global community, democratize information, and facilitate positive social movements. As the author Erik Qualman warned, “We don’t have a choice on whether we do social media, the question is how well we do it.”

The lost potential of social media and nuclear energy was tragic, but it’s nothing compared to squandering AI’s potential. Smart legislation like SB 1047 is our best tool for preventing this while protecting innovation and competition.

The history of technological regulation showcases our capacity for foresight and adaptability. When railroads transformed 19th-century transportation, governments standardized track gauges, signaling, and safety protocols. The advent of electricity led to codes and standards preventing fires and electrocutions. The automobile revolution necessitated traffic laws and safety measures like seat belts and airbags. In aviation, bodies like the FAA established rigorous safety standards, making flying the safest form of transportation.

History can only provide us with lessons. Whether to heed them is up to us.
