OpenAI Forms Safety Committee to Oversee Future AI Models

OpenAI has established an interim Safety and Security Committee amid heightened debate over the risks of advanced artificial intelligence and a series of high-profile departures. The committee's primary mandate is to oversee and make recommendations to OpenAI's board on critical safety and security decisions in the development and deployment of its next-generation AI models, including the potential successor to GPT-4.

The committee comprises key OpenAI figures: CEO Sam Altman, alongside board members Bret Taylor (Board Chair), Adam D'Angelo, and Nicole Seligman. It is also empowered to consult external experts.

The move follows the departures of co-founder and Chief Scientist Ilya Sutskever and of Jan Leike, who co-led the superalignment team. Both had voiced concerns that safety considerations at OpenAI were being overshadowed by the push for increasingly powerful AI capabilities; Leike said his team had been "sailing against the wind" in securing resources. The committee has 90 days to conduct its initial review and present its findings to the board, with the aim of reinforcing confidence in OpenAI's approach to AI risk management.