Key departures stir uncertainty over AI safety at OpenAI

Jan Leike and Ilya Sutskever left OpenAI amid disputes over access to resources for their team, which was responsible for overseeing "superintelligent" AI systems. Following the team's dissolution, researchers across various departments are to take over its tasks, which raises concerns about the future of AI safety at OpenAI.

OpenAI logo on a smartphone
Image source: © Unsplash | Levart_Photographer

19 May 2024 18:14

At OpenAI, the team responsible for developing and overseeing "superintelligent" AI systems, known as Superalignment, ran into significant problems accessing the resources it had been promised. Although the team was supposed to receive one-fifth of the company's computing resources, its requests were often denied, hampering its work. These and other issues led several team members to resign this week. Among them was the team's co-lead, Jan Leike, a former DeepMind researcher. He revealed that he left the company over disagreements about OpenAI's priorities, particularly what he saw as insufficient preparation for introducing subsequent generations of AI models.

Departure of key people

In addition to Leike, co-founder Ilya Sutskever, a member of the former board of directors, also left OpenAI. His departure stemmed from the board's conflict with CEO Sam Altman: board members were dissatisfied with Altman, who ultimately returned to his position. In response to these events, Altman wrote on the platform X that the company still has a lot of work ahead but that these tasks will be pursued with full commitment. He was backed by OpenAI co-founder Greg Brockman, who emphasised the need to pay even more attention to safety and process efficiency.

Although the Superalignment team has effectively ceased to exist, a group of researchers from various departments of the company is to continue its work. This raises concerns about whether OpenAI will remain equally focused on safety in AI development.

Do staffing changes herald a shift in priorities?

According to TechCrunch's sources, the situation at OpenAI reflects a shift in priorities from the safe development of superintelligent AI towards bringing products to market faster. Former Superalignment team members have criticised this change, stressing the importance of a responsible approach to AI. The future of AI safety at OpenAI remains an open question as the company tries to balance innovation with responsibility.

Will Jan Leike's and Ilya Sutskever's departures and the dissolution of the Superalignment team affect project implementation at OpenAI?

The departure of Jan Leike and Ilya Sutskever and the dissolution of the Superalignment team at OpenAI may affect the pace of the company's project implementation and its long-term strategy in AI safety. Both scientists were key figures overseeing the development of superintelligent AI systems. Leike and Sutskever left the company due to differences in priorities, which may result in changes in the direction and pace of AI safety research.

Jakub Pachocki, the new chief scientist, will take over some of Leike’s and Sutskever’s duties. He is considered one of the brightest minds of his generation, which gives hope for OpenAI's further success in artificial intelligence. However, the division of Superalignment team tasks among various company departments raises concerns about the effectiveness of future safety-related activities.

