An OpenAI researcher has resigned from the company, alleging that it is increasingly suppressing internal research that highlights the potential economic harms of artificial intelligence. The departure adds to a growing list of former employees who say the organization has drifted far from its original mission of open and independent research.
According to reporting by Wired, at least two members of OpenAI’s economic research team have left over concerns that leadership is discouraging work that portrays AI in a negative or disruptive light, particularly research suggesting that the technology could damage employment and broader economic stability. One of those employees was economist Tom Cunningham.
In a farewell message shared internally, Cunningham reportedly said the economic research team was shifting away from genuine inquiry and instead behaving more like a communications arm designed to promote OpenAI’s interests. His exit followed internal frustration that research raising difficult questions about AI’s downsides was being sidelined or reframed.
Shortly after Cunningham’s departure, OpenAI chief strategy officer Jason Kwon circulated a message emphasizing that the company should focus on “building solutions” rather than publishing research on what he described as hard subjects. Kwon wrote that OpenAI is not merely a research institution but an active participant shaping the world through its technology, and therefore has a responsibility to guide outcomes rather than simply analyze them.
Critics see this shift as emblematic of OpenAI’s broader transformation. Founded in 2015 as a nonprofit committed to open research and shared benefits, the organization has since evolved into a for-profit public benefit corporation. Its most advanced models are now closed source, and the company is reportedly exploring a future public offering that could value it at close to one trillion dollars.
With massive financial stakes involved, including multibillion-dollar investments and long-term commitments to cloud infrastructure spending, former employees argue that OpenAI has strong incentives to avoid publishing research that could undermine public confidence in AI. Concerns about job displacement, economic inequality, and broader systemic risks have become politically and commercially sensitive topics.
Cunningham is not alone in voicing ethical objections. William Saunders, formerly of OpenAI’s Superalignment team, has said he left after concluding the company prioritized rapid product releases over safety. Former safety researcher Steven Adler has publicly criticized OpenAI for what he describes as a reckless development pace, including concerns about ChatGPT contributing to user distress and delusional thinking. Miles Brundage, once head of policy research, has also said it became increasingly difficult to publish work on issues he believed were important.
Together, these departures paint a picture of an organization wrestling with the tension between scientific integrity and commercial ambition, at a time when its technology is reshaping economies and societies worldwide.
