OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee made the recommendation after completing a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”
The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. OpenAI’s full board of directors will also receive “periodic briefings” on “safety and security matters.”
The members of OpenAI’s safety committee are also members of the company’s broader board of directors, so it’s unclear how independent the committee actually is or how that independence is structured. (CEO Sam Altman was previously on the committee, but isn’t anymore.) We’ve asked OpenAI for comment.
By establishing an independent safety board, OpenAI appears to be taking an approach somewhat similar to that of Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can make rulings that Meta has to follow. None of the Oversight Board’s members are on Meta’s board of directors.
The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”
Update, September 16th: Added that Sam Altman is no longer on the committee.