Why is everyone leaving OpenAI?

OpenAI, once Silicon Valley’s most celebrated AI innovator, is grappling with an unprecedented talent exodus. In February 2025, Brett Adcock, CEO of robotics firm Figure AI, ended his company’s partnership with OpenAI, declaring that Figure’s in-house AI advances had rendered the collaboration obsolete.
This follows a cascade of high-profile exits: Chief Technology Officer Mira Murati resigned in September 2024 to pursue “personal exploration,” while co-founder John Schulman joined rival Anthropic in August 2024, citing a desire to refocus on “hands-on technical work.”
These departures compound earlier resignations of safety leads Jan Leike and Ilya Sutskever, leaving just three original founders at the helm.
Mission drift and leadership turmoil: A toxic cocktail
The exodus stems from OpenAI’s pivot from its non-profit, safety-first origins to aggressive commercialisation under CEO Sam Altman. Employees report eroding trust in leadership, exacerbated by Altman’s 2023 ousting and reinstatement, which exposed deep governance fractures.
Safety teams like the Superalignment group—tasked with controlling superhuman AI—were dissolved in May 2024, with resources diverted to product launches like Sora and GPT-4o. Critics argue OpenAI’s planned shift to a for-profit public benefit corporation prioritises investor returns over ethical AI, with projected annual losses hitting $14 billion by 2026.
Ilya Sutskever – Co-founder, chief scientist
Sutskever, instrumental in developing GPT and DALL-E, resigned in May 2024 after clashing with Altman during the 2023 leadership crisis. His departure post on X read: “After almost a decade, I’ve decided to leave… I’m confident OpenAI will build AGI that is safe”. He later co-founded Safe Superintelligence Inc., focusing exclusively on AI safety.
Jan Leike – Superalignment team lead
Leike’s abrupt May 2024 resignation, announced in a tweet that read simply “I resigned”, exposed internal rifts. He later criticised OpenAI, saying its safety culture had been “taking a backseat to shiny products”. His team, which had been promised 20% of OpenAI’s computing resources, was disbanded weeks later.
John Schulman – Co-founder, alignment lead
Schulman, architect of ChatGPT’s RLHF framework, joined Anthropic in August 2024. Although he described his exit as “personal”, insiders noted frustration over OpenAI’s alignment research cuts, even as his farewell post insisted: “Company leaders have been very committed to investing in this area.”
Mira Murati – Chief technology officer
Murati, the face of OpenAI’s Sora and DALL-E launches, resigned in September 2024, stating: “I want time to do my own exploration”. Her exit followed internal dissent over resource allocation to commercial projects.
Daniel Kokotajlo – Governance researcher
Kokotajlo quit in May 2024, refusing to sign a non-disparagement agreement and saying: “I lost trust in OpenAI leadership… so I quit”. He later leaked documents alleging Altman withheld critical AGI safety data.
Helen Toner & Tasha McCauley – Former board members
Both resigned in late 2023, accusing Altman of fostering a “toxic culture of lying” and withholding ChatGPT launch details. Toner revealed the board lacked basic safety oversight, calling OpenAI’s governance “dangerously opaque.”
Elon Musk – Co-founder
Musk left OpenAI’s board in 2018, later suing Altman for “betraying humanity” via the Microsoft partnership. His 2025 lawsuit alleges OpenAI’s capped-profit model prioritises commercial interests, violating its founding charter.
Financial freefall and investor pressures
OpenAI’s financials reveal a precarious balancing act: $5 billion in losses in 2024 against $3.7 billion in revenue, driven partly by ChatGPT’s reported $700k daily running costs. Training expenses hit $3 billion annually, a figure excluded from the profit calculations shown to investors. With $44 billion in cumulative losses projected by 2028, Altman’s equity-heavy compensation model (software engineers earn a median salary of $810k) strains sustainability.
The road ahead: Profit vs. principles
As OpenAI transitions to a for-profit entity, critics warn its original mission is irrecoverable. Talent migration to rivals like Anthropic and Safe Superintelligence Inc. underscores industry scepticism. Altman’s new Safety and Security Committee, which he himself chairs, faces accusations of performative governance. With Musk’s lawsuit advancing and employee trust eroded, OpenAI’s ability to reconcile innovation with ethics remains deeply uncertain, a cautionary tale in Silicon Valley’s AI arms race.
Sagar Sharma
A software engineer who loves testing computers, sometimes to the point of crashing them. While reviving his crashed system, you can find him reading literature, reading manga, or watering plants.