OpenAI Staffer Quits, Alleging Company’s Economic Research Is Drifting Into AI Advocacy
OpenAI, a leading force in artificial intelligence development, is reportedly facing internal dissent regarding the objectivity of its economic research. Multiple sources suggest the company has become increasingly hesitant to publish findings that highlight the potentially negative economic impacts of AI, such as widespread job displacement, favoring instead a narrative that emphasizes the technology’s benefits and positive transformations. This perceived shift from rigorous, balanced research to a more advocacy-driven stance has reportedly led to the departure of at least two members of OpenAI’s economic research team in recent months.
One prominent departure is that of Tom Cunningham, a key employee who left the company entirely in September. Sources familiar with the situation indicate that Cunningham concluded it had become exceedingly difficult to conduct and publish high-quality, unbiased research. In a candid parting message shared internally, Cunningham reportedly articulated a growing tension within the team: the conflict between its mandate for rigorous analytical work and what he perceived as its emerging role as a de facto advocacy arm for OpenAI’s corporate interests. The message underscored a fundamental concern that the pursuit of unvarnished truth was being overshadowed by the imperative to promote a favorable image of AI and, by extension, OpenAI’s products. Cunningham declined to comment when approached by WIRED, leaving his internal message as the primary, if indirect, account of his reasons for leaving.

Following Cunningham’s departure, Jason Kwon, OpenAI’s chief strategy officer, addressed these mounting concerns in an internal memo, a copy of which was obtained by WIRED. Kwon’s message presented a nuanced perspective, arguing that OpenAI, as a responsible leader in the rapidly evolving AI sector, has a dual obligation. He contended that the company should not merely identify potential problems associated with AI but must also actively "build the solutions." On Slack, Kwon further elaborated, stating, "My POV on hard subjects is not that we shouldn’t talk about them. Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes." This statement suggests a corporate philosophy in which the company’s pioneering role in AI development inherently links its research output to its responsibility for the technology’s real-world implications, implicitly justifying a more proactive, solution-oriented, and perhaps less purely critical approach to its public-facing research.
In an official statement to WIRED, OpenAI spokesperson Rob Friedlander sought to counter the allegations, emphasizing the company’s commitment to robust economic analysis. Friedlander highlighted the hiring of Aaron Chatterji as OpenAI’s first chief economist last year and asserted that the scope of the company’s economic research has since expanded significantly. He stated, “The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves.” This statement aims to project an image of comprehensive and balanced inquiry, acknowledging both potential benefits and societal disruptions.
This alleged internal shift in research focus comes at a pivotal moment for OpenAI. The company is rapidly solidifying its position as a central player in the global economy, forging multibillion-dollar partnerships with major corporations and governments worldwide. As the technology OpenAI develops continues to advance, experts widely believe it possesses the potential to fundamentally transform the nature of work. However, substantial questions persist regarding the timeline and ultimate extent of these changes, and crucially, their impact on individuals and global markets.
Historically, OpenAI has maintained a reputation for transparency, regularly releasing research since 2016 on how its systems could reshape labor markets and sharing data with external economists. A notable example is the 2023 co-publication of "GPTs Are GPTs," a widely cited paper that meticulously investigated which economic sectors were most vulnerable to automation. Yet, over the past year, two sources claim a discernible change: the company has become increasingly hesitant to release work that underscores the economic downsides of AI, such as job displacement, and has instead shown a clear preference for publishing more positive findings. This pattern suggests a deliberate editorial direction designed to shape public perception.
An outside economist, who previously collaborated with OpenAI, echoed these sentiments, alleging that the company is increasingly publishing research that consistently casts its technology in a favorable light. This economist, who requested anonymity to speak freely, pointed to recent publications as evidence. Earlier this week, for instance, OpenAI released a report based on a survey of enterprise users. The report claimed that the company’s AI products saved users an average of 40 to 60 minutes per day and projected "significant headroom" for increased AI adoption across various industries. While these findings might be accurate, the exclusive focus on positive outcomes reinforces the perception of a biased research agenda.
Concerns about OpenAI’s publishing practices are not entirely new. Miles Brundage, a former head of policy research at OpenAI, upon his departure in October 2024, expressed similar frustrations. He noted that due to the company’s increasingly high profile, it had become "hard for me to publish on all the topics that are important to me." While acknowledging that some constraints are expected within any organization, Brundage felt that OpenAI had become excessively restrictive, hindering the exploration of critical, albeit potentially uncomfortable, subjects.
Research Politics
The political implications of sharing "gloomy statistics" about AI’s potential economic impact are significant and could complicate OpenAI’s meticulously cultivated public image. In the United States, the Trump administration has been a vocal champion of AI’s potential, with White House advisers actively pushing back against claims that the technology will lead to widespread job elimination. The issue resonates deeply with many Americans: a November survey from the Harvard Kennedy School’s Institute of Politics found that approximately 44 percent of young people in the US fear that AI will reduce job opportunities. In such a sensitive political and social climate, research highlighting job displacement could be perceived as undermining economic confidence and stoking public anxiety, outcomes that powerful technology companies might seek to avoid.
Moreover, the current landscape grants leading AI labs an unusual degree of authority to self-report the risks and capabilities of the very technology they are aggressively developing and deploying. This self-regulatory model is fiercely protected by Silicon Valley, which has mounted extensive lobbying campaigns, including efforts backed by $100 million, to resist proposed state-level AI regulations that could impose external constraints on the industry. In this context, controlling the narrative around AI’s economic impact becomes a strategic imperative, allowing companies like OpenAI to influence public and policy discourse without external oversight.
OpenAI’s allegedly cautious posture stands in stark contrast to that of its rival, Anthropic. Dario Amodei, Anthropic’s CEO, has repeatedly issued stark warnings, predicting that AI could automate up to half of entry-level white-collar jobs by 2030. Amodei explicitly frames these predictions as a necessary catalyst for public debate about the impending changes to the workforce, prioritizing transparency and proactive discussion over managing public perception. This confrontational approach, however, has drawn sharp criticism from the Trump administration. David Sacks, a White House special adviser for AI and crypto, publicly accused Anthropic of engaging in a “sophisticated regulatory capture strategy based on fear-mongering,” suggesting that their dire warnings are a tactic to influence future regulation.
Currently, OpenAI’s economic research efforts are overseen by Aaron Chatterji, who was instrumental in leading a significant September report detailing how people worldwide are using ChatGPT. Interestingly, Tom Cunningham, the departing staffer, is listed as an author on this very report, suggesting that his concerns may have developed even while contributing to published work. The timing of OpenAI’s report also raised eyebrows, as it was released months after Anthropic published a similar paper on how users engage with its chatbot, Claude, highlighting a competitive dynamic in shaping the public understanding of AI usage.
Sources further reveal that Chatterji reports directly to Chris Lehane, OpenAI’s chief global affairs officer. This reporting structure is highly significant, as it explicitly integrates the economic research team with the company’s broader political and policy strategy. Lehane’s background underscores this strategic alignment: he previously served at Airbnb, where he famously helped the company defeat Proposition F, a San Francisco ballot measure that would have severely restricted its operations. Before that, Lehane was a special assistant counsel to former President Bill Clinton, earning a reputation as the “master of disaster” for his expertise in crisis management and strategic communications.

Placing economic research under the purview of a seasoned political strategist like Lehane suggests that the team’s output is not merely an academic exercise but a critical component of OpenAI’s larger public relations and policy agenda, one aimed at navigating and shaping the complex political and economic landscape surrounding AI. This integration raises legitimate questions about the independence and ultimate purpose of the "rigorous analysis" that OpenAI claims to uphold, reinforcing the concerns articulated by departing staff members.