In Short:
Last year, OpenAI introduced a team to prepare for the rise of superintelligent AI. That team has now been dissolved after key members, including chief scientist and team colead Ilya Sutskever, left the company. This follows a governance crisis in November, when CEO Sam Altman was fired but later reinstated. Several other researchers have also departed, raising concerns about OpenAI’s future research on AI risks.
OpenAI Dissolves Superalignment Team
In July last year, OpenAI announced the formation of a new research team to prepare for the advent of supersmart artificial intelligence that could potentially surpass its creators. Ilya Sutskever, OpenAI’s chief scientist and cofounder, was named colead of the new team, which was allocated 20 percent of the company’s computing power.
Breakup of the Superalignment Team
OpenAI has now confirmed the dissolution of its “superalignment team.” The decision follows the departures of several researchers, including Sutskever, and the resignation of Jan Leike, the team’s other colead. The team’s work will now be folded into other research efforts within OpenAI.
Key Departures and Controversy
Sutskever’s departure garnered particular attention because he was instrumental in founding OpenAI and in developing ChatGPT. He was also among the board members who voted to remove CEO Sam Altman during last year’s governance crisis.
Shortly after Sutskever’s departure, Leike announced his own resignation. Neither has publicly commented on his reasons for leaving.
Shakeup at OpenAI
The disbandment of the superalignment team is part of a larger restructuring within OpenAI following the governance crisis. Several researchers, including Leopold Aschenbrenner, Pavel Izmailov, and William Saunders, have departed for various reasons.
Future Research Direction
OpenAI did not comment on the departures or on the future of its long-term AI risk projects. Research on AI risks will now be overseen by John Schulman, who leads the team responsible for fine-tuning AI models after training.