OpenAI CEO Sam Altman isn’t part of the company’s safety committee anymore, here’s why

Updated on 17-Sep-2024
HIGHLIGHTS

CEO Sam Altman is no longer part of OpenAI’s Safety and Security Committee, one of several major changes the company has made to its safety and security practices.

The committee is becoming an independent board oversight group that will focus solely on safety and security issues.

The Safety and Security Committee will now be led by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University.

OpenAI has made some major changes to its safety and security practices, and one of the key updates is that CEO Sam Altman is no longer part of the company’s Safety and Security Committee. The committee is becoming an independent board oversight group that will focus solely on safety and security issues. The shift is aimed at strengthening OpenAI’s safety measures as the company continues to develop more advanced AI models.

The new Safety and Security Committee will be led by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other members include Quora co-founder and CEO Adam D’Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation. The committee will oversee safety evaluations, model launches, and security processes, ensuring OpenAI’s models are safe before they’re released.


The committee will also have the authority to delay the launch of any model if safety concerns arise. OpenAI’s leadership team will regularly brief the committee on safety evaluations and model monitoring, allowing the committee to make informed decisions about the release of new AI systems. 

In addition to setting up independent governance, OpenAI is enhancing its security measures. This includes improving cybersecurity practices and increasing staffing to ensure around-the-clock security. The company is also considering creating an Information Sharing and Analysis Center (ISAC) to boost collaboration within the AI industry, allowing organizations to share information about threats and improve overall security.


Another key update is OpenAI’s commitment to being more transparent about its safety work. The company will continue publishing system cards, which detail the capabilities, risks, and safety measures for each model. 

Finally, OpenAI is collaborating with external organizations, including government labs, to strengthen AI safety research and develop industry-wide standards. By unifying its safety frameworks, OpenAI hopes to better manage the risks associated with its increasingly powerful AI models.

Ayushi Jain

