Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope

Tim Fernholz
12:21 PM PDT · May 7, 2026

Elon Musk’s legal effort to dismantle OpenAI may hinge on how its for-profit subsidiary enhances or detracts from the frontier lab’s founding mission of ensuring that humanity benefits from artificial general intelligence.
On Thursday, a federal court in Oakland, California, heard a former employee and a former board member testify that the company’s push to bring AI products to market compromised its commitment to AI safety.
Rosie Campbell joined the company’s AGI readiness team in 2021 and left OpenAI in 2024 after the team was disbanded. Another safety-focused group, the Superalignment team, was shut down in the same period.
“When I joined, it was very research-focused and common for people to talk about AGI and safety issues,” she testified. “Over time it became more like a product-focused organization.”
Under cross-examination, Campbell acknowledged that significant funding was likely necessary for the lab’s goal of building AGI, but said that creating a superintelligent model without the right safety measures in place would not fit the mission of the organization she originally joined.
Campbell pointed to an incident in which Microsoft deployed a version of OpenAI’s GPT-4 model in India through its Bing search engine before the model had been evaluated by the company’s Deployment Safety Board (DSB). The model itself did not present a huge risk, she said, but the company needed “to set strong precedents as the technology gets more powerful. We want to have good safety processes in place we know are being followed reliably.”
OpenAI’s attorneys also got Campbell to admit that, in her “speculative opinion,” OpenAI’s safety approach is superior to that of xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year.
OpenAI releases evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of preparedness, was hired from Anthropic in February. Altman said the hire would let him “sleep better tonight.”
The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. The ouster came after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function.
McCauley also discussed a widely reported pattern of Altman misleading the board. Notably, Altman lied to another board member about McCauley’s intention to remove Helen Toner, a third board member who published a white paper that included some implied criticism of OpenAI’s safety policy. Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his lack of disclosure of potential conflicts of interest.
“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”
However, the decision to oust Altman came at the same time as a tender offer to the company’s employees. McCauley said that after OpenAI’s staff began siding with Altman and Microsoft worked to restore the status quo, the board reversed course, and the members who had opposed Altman stepped down.