Former OpenAI researcher speaks out: Here’s why he was fired
A former OpenAI researcher spoke openly about how and why he was fired.
Leopold Aschenbrenner was part of OpenAI's superalignment team.
He said HR warned him after he shared a memo about OpenAI's security with board members.
A former OpenAI researcher has spoken openly about how writing and sharing documents concerning safety within the company stirred controversy and led to his eventual dismissal.
Read along to know the whole story.
Leopold Aschenbrenner, who graduated from Columbia University at the age of 19, as stated on his LinkedIn profile, was part of OpenAI’s superalignment team before reportedly being terminated for alleged leaking in April.
He discussed his experience in a recent interview with podcaster Dwarkesh Patel, reports Business Insider.
Aschenbrenner revealed that he wrote and distributed a memo following a significant security event, which he did not elaborate on during the interview. He shared this memo with a few members of OpenAI’s board.
In the memo, he expressed concerns about the company’s security measures being “egregiously insufficient” in safeguarding against the theft of “key algorithmic secrets from foreign actors.” He had previously circulated the memo within OpenAI, receiving mostly positive feedback on its usefulness, he added.
Later, Aschenbrenner received a warning from HR regarding the memo, in which he was told it was “racist” and “unconstructive” to express concerns about Chinese Communist Party espionage. Subsequently, an OpenAI lawyer questioned him about his views on AI and AGI, and about whether he and the superalignment team were loyal to the company.
Aschenbrenner also claimed the company then went through his OpenAI digital artifacts.
Shortly after, he was fired. OpenAI alleged that he had leaked confidential information and had not cooperated during the investigation, and cited his prior warning from HR regarding the memo he had shared with board members.
Aschenbrenner said he had shared a document about preparing for AGI (artificial general intelligence) with three outside researchers for feedback. He reviewed the document for sensitive information before sharing it, a practice he said was normal at OpenAI.
An OpenAI spokesperson said that Aschenbrenner’s internal concerns and discussions with the board didn’t lead to his firing. They also disagreed with many of his claims about their work.
Ayushi Jain
Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.