If you followed the Sam Altman drama at OpenAI a few weeks ago, you might have noticed an intriguing development concerning ChatGPT and other AI products at the company. OpenAI co-founder and board member Ilya Sutskever was seen as the "bad guy" who masterminded the Altman firing, at least initially.
Sutskever then switched sides abruptly, joining the overwhelming majority of OpenAI employees who demanded that the board rehire Altman.
The board then rehired Altman as CEO and reshuffled its membership. Sutskever lost his board seat, and Altman's comments about the OpenAI co-founder made it seem like Sutskever's days at the company were numbered. The brilliant AI scientist leaving OpenAI seemed like a real possibility, and a dangerous one for the development of safe AI.
It turns out that concerns Ilya Sutskever might leave OpenAI could be unwarranted. Or they might be right on the money. Whatever the case, Sutskever has spent the past few months working on a big multi-year project at OpenAI to develop superalignment. That's the technology meant to prevent the smarter-than-human, post-AGI, superintelligent AI of the future from going rogue.
Ilya Sutskever and Jan Leike announced in July that they're leading the superalignment effort at OpenAI. They'll use some 20% of OpenAI's current compute capacity over four years to ensure that superalignment succeeds. Now, the first results are here, and they're promising.