The Man Who Tried to Overthrow Sam Altman

What was Ilya Sutskever thinking?

Ilya Sutskever, bless his heart. Until recently, to the extent that Sutskever was known at all, it was as a brilliant artificial-intelligence researcher. He was the star student who helped Geoffrey Hinton, one of the “godfathers of AI,” kick off the so-called deep-learning revolution. In 2015, after a short stint at Google, Sutskever co-founded OpenAI, and eventually became its chief scientist; so important was he to the company’s success that Elon Musk has taken credit for recruiting him. (Sam Altman once showed me emails between himself and Sutskever suggesting otherwise.) Still, apart from niche podcast appearances, and the obligatory hour-plus back-and-forth with Lex Fridman, Sutskever didn’t have much of a public profile before this past weekend. Not like Altman, who has, over the past year, become the global face of AI.

On Thursday night, Sutskever set an extraordinary sequence of events into motion. According to a post on X by Greg Brockman, the former president of OpenAI and the former chair of its board, Sutskever texted Altman that night and asked if the two could talk the following day. Altman logged on to a Google Meet at the appointed time on Friday, and quickly learned that he’d been ambushed. Sutskever took on the role of Brutus, informing Altman that he was being fired. Half an hour later, Altman’s ouster was announced in terms so vague that for a few hours, anything from a sex scandal to a massive embezzlement scheme seemed possible.

I was surprised by these initial reports. While reporting a feature for The Atlantic last spring, I got to know Sutskever a bit, and he did not strike me as a man especially suited to coups. Altman, in contrast, was built for a knife fight in the technocapitalist mud. By Saturday afternoon, he had the backing of OpenAI’s major investors, including Microsoft, whose CEO, Satya Nadella, was reportedly furious that he’d received almost no notice of Altman’s firing. Altman also secured the support of the troops: More than 700 of OpenAI’s 770 employees have now signed a letter threatening to resign if he is not restored as chief executive. On top of these sources of leverage, Altman has an open offer from Nadella to start a new AI-research division at Microsoft. If OpenAI’s board proves obstinate, he can set up shop there and hire nearly every one of his former colleagues.

[From the September 2023 issue: Does Sam Altman know what he’s creating?]

As late as Sunday night, Sutskever was at OpenAI’s offices working on behalf of the board. But yesterday morning, the prospect of OpenAI’s imminent disintegration and, reportedly, an emotional plea from Anna Brockman—Sutskever officiated the Brockmans’ wedding—gave him second thoughts. “I deeply regret my participation in the board’s actions,” he wrote, in a post on X (formerly Twitter). “I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.” Later that day, in a bid to wish away the entire previous week, he joined his colleagues in signing the letter demanding Altman’s return.

Sutskever did not return a request for comment, and we don’t yet have a full account of what motivated him to take such dramatic action in the first place. Neither he nor his fellow board members have released a clear statement explaining themselves, and their vague communications have stressed that there was no single precipitating incident. Even so, some of the story is starting to fill out. Among many other colorful details, my colleagues Karen Hao and Charlie Warzel reported that the board was irked by Altman’s desire to quickly ship new products and models rather than slowing things down to emphasize safety. Others have said that the board’s hand was forced, at least in part, by Altman’s extracurricular fundraising efforts, which are said to have included talks with parties as diverse as Jony Ive, aspiring NVIDIA competitors, and investors from surveillance-happy autocratic regimes in the Middle East.

[Read: Inside the chaos at OpenAI]

This past April, during happier times for Sutskever, I met him at OpenAI’s headquarters in San Francisco’s Mission District. I liked him straightaway. He is a deep thinker, and although he sometimes strains for mystical profundity, he’s also quite funny. We met during a season of transition for him. He told me that he would soon be leading OpenAI’s alignment research—an effort focused on training AIs to behave nicely, before their analytical abilities transcend ours. It was important to get alignment right, he said, because superhuman AIs would be, in his charming phrase, the “final boss of humanity.”

Sutskever and I made a plan to talk a few months later. He’d already spent a great deal of time thinking about alignment, but he wanted to formulate a strategy. We spoke again in June, just weeks before OpenAI announced that his alignment work would be served by a large chunk of the company’s computing resources, some of which would be devoted to spinning up a new AI to help with the problem. During that second conversation, Sutskever told me more about what he thought a hostile AI might look like in the future, and as the events of recent days have transpired, I have found myself thinking often of his description.

“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,” Sutskever said. Although large language models, such as those that power ChatGPT, have come to define most people’s understanding of OpenAI, they were not initially the company’s focus. In 2016, the company’s founders were dazzled by AlphaGo, the AI that beat grandmasters at Go. They thought that game-playing AIs were the future. Even today, Sutskever remains haunted by the agentlike behavior of those that they built to play Dota 2, a multiplayer game of fantasy warfare. “They were localized to the video-game world” of fields, forts, and forests, he told me, but they played as a team and seemed to communicate by “telepathy,” skills that could potentially generalize to the real world. Watching them made him wonder what might be possible if many greater-than-human intelligences worked together.

In recent weeks, he may have seen what felt to him like disturbing glimpses of that future. According to reports, he was concerned that the custom GPTs that Altman announced on November 6 were a dangerous first step toward agentlike AIs. Back in June, Sutskever warned me that research into agents could eventually lead to the development of “an autonomous corporation” composed of hundreds, if not thousands, of AIs. Working together, they could be as powerful as 50 Apples or Googles, he said, adding that this would be “tremendous, unbelievably disruptive power.”

It makes a certain Freudian sense that the villain of Sutskever’s ultimate alignment horror story was a supersize Apple or Google. OpenAI’s founders have long been spooked by the tech giants. They started the company because they believed that advanced AI would be here sometime soon, and that because it would pose risks to humanity, it shouldn’t be developed inside a large, profit-motivated company. That ship may have sailed when OpenAI’s leadership, led by Altman, created a for-profit arm and eventually accepted more than $10 billion from Microsoft. But at least under that arrangement, the founders would still have some control. If they developed an AI that they felt was too dangerous to hand over, they could always destroy it before showing it to anyone.

Sutskever may have just vaporized that thin reed of protection. If Altman, Brockman, and the majority of OpenAI’s employees decamp to Microsoft, they may not enjoy any buffer of independence. If, on the other hand, Altman returns to OpenAI, and the company is more or less reconstituted, he and Microsoft will likely insist on a new governance structure or at least a new slate of board members. This time around, Microsoft will want to ensure that there are no further Friday-night surprises. In a terrible irony, Sutskever’s aborted coup may have made it more likely that a large, profit-driven conglomerate develops the first super-dangerous AI. At this point, the best he can hope for is that his story serves as an object lesson, a reminder that no corporate structure, no matter how well intended, can be trusted to ensure the safe development of AI.
