Scam Ads Are Flooding Social Media. These Former Meta Staffers Have a Plan
Scam advertisements are flooding social media, increasingly powered by deepfakes and AI-generated content, and the platforms running those ads offer little transparency about how bad the problem has become. Now two former Meta staffers, Rob Leathern and Rob Goldman, are launching a nonprofit, CollectiveMetrics.org, with a plan to bring measurement and much-needed clarity to the fight against online deception.
The problem is not new. In 2019, Dutch billionaire TV producer John de Mol sued Facebook, alleging that it had failed to stop scammers from using his image in fraudulent advertisements. Facebook dispatched Rob Leathern to Amsterdam. As the then leader of the company’s business integrity unit, Leathern was a prominent figure in Meta’s (then Facebook’s) battle against deceptive advertising. He acknowledged the challenge candidly, telling Reuters at the time, "The people who push these kinds of ads are persistent, they are well funded, and they are constantly evolving their deceptive tactics to get around our systems."
During his four years at Meta, Leathern was in many respects the public face of the company’s efforts to combat scam ads. His unit was tasked with preventing malicious actors from exploiting Meta’s advertising products. Beyond media engagements, he drove transparency initiatives including the Meta Ad Library, the industry’s first free, searchable repository of digital ads, and identity verification requirements for political advertisers. Few people have seen the problem, and the systemic challenges behind it, from as close a vantage point.
Since leaving Meta at the end of 2020, however, Leathern has watched scamming methods evolve rapidly. Criminal enterprises now deploy deepfakes and use artificial intelligence to craft convincing, personalized scam ads that mimic legitimate content or trusted personalities and are far harder to detect. What alarmed Leathern most was that major social media platforms did not appear to be investing in the teams and technology needed to counter these increasingly exploitative ads at the pace the threat demands.
"The technology and the progress has stagnated the last five years," Leathern stated in a recent interview, expressing profound disappointment. He added, "I also feel like we just don’t really know how bad it’s gotten or what the current state is. We don’t have objective ways of knowing." This lack of objective data underscores a critical blind spot, preventing both platforms and the public from truly grasping the scale and impact of the problem.
That concern led Leathern to join forces with Rob Goldman, Meta’s former vice president of ads, to found CollectiveMetrics.org, a nonprofit dedicated to bringing transparency to digital advertising in order to combat deceptive ads. Their primary objective is to use data and rigorous analysis to measure, among other things, the actual prevalence of online scam ads, and in doing so lift the veil on the notoriously opaque ad systems that generate hundreds of billions of dollars in revenue for tech giants like Meta. Without independent scrutiny, the true extent of the problem stays hidden, as does any financial disincentive platforms may have to combat it aggressively.
The initiative comes as financial losses to online scams have escalated worldwide. The Global Anti-Scam Alliance (GASA), which researches scam trends and counts Meta, Google, and other major platforms among its advisory board members, estimates that victims collectively lost at least $1 trillion last year. GASA’s 2025 Global State of Scams report found that 23 percent of people have fallen victim to a scam, and that many victims never report it, out of shame or because they don’t know where to turn. Among those who did report, more than a third said that "no action was taken by the platform after reporting it."
Leathern stresses that it is currently impossible to know how many scam ads are running on platforms such as Facebook and YouTube; independent researchers can’t get at that information because the companies do not open their internal data to external scrutiny. "I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud," Leathern asserted. The ultimate goal, he said, is to "move to actual measurement of the problem and help foster an understanding" that transcends corporate narratives.
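CollectiveMetrics.org has not published its methodology, but a sampling-based audit is one plausible shape such measurement could take. The sketch below is a hypothetical illustration, not the nonprofit’s actual approach: the `wilson_interval` helper and the sample counts are invented for the example. It estimates scam-ad prevalence from a hand-labeled random sample of a public ad library and reports a confidence interval rather than a single point figure.

```python
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a sample proportion."""
    if n == 0:
        raise ValueError("sample size must be positive")
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical audit: reviewers label a random sample of 2,000 ads drawn
# from a public ad library and flag 74 of them as scams. (Invented numbers.)
sampled_ads, flagged_scams = 2000, 74
low, high = wilson_interval(flagged_scams, sampled_ads)
print(f"Estimated scam-ad prevalence: {flagged_scams / sampled_ads:.1%} "
      f"(95% CI {low:.1%} to {high:.1%})")
```

Reporting an interval rather than a raw count is what would let third parties compare platforms fairly, since each audit would rest on a sample rather than the full ad inventory.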
As a first step, CollectiveMetrics.org commissioned an online survey of 1,000 American adults to gauge how well consumers think platforms are combating deepfakes and scam ads. The findings were grim: 47 percent of respondents said TikTok was doing a poor or very poor job, the worst rating of any platform polled. Facebook and Instagram followed, with 38 percent of respondents rating Facebook’s efforts poor or very poor and 33 percent saying the same of Instagram. The survey also revealed a generational gap: among respondents over 55, 61 percent rated TikTok’s performance poor or very poor, while 47 percent and 43 percent said the same of Facebook and Instagram, respectively.
Leathern reads the consistently low ratings for TikTok and Meta’s products as a sign that consumers broadly distrust the companies’ anti-scam efforts. "People seem quite more negative than I would have expected," he observed. He also suggested that "there’s been a loss of institutional knowledge at some of these companies." "I just think we’re in for a hard time, and I don’t see the mechanisms in place for much accountability yet." (Disclosure: Leathern’s wife works in product marketing at Meta.)
In response to the criticism, TikTok spokesperson Melanie Bosselait said via email that the company’s Community Guidelines explicitly prohibit "attempts to scam, trick or defraud people." She pointed to TikTok’s educational resources, including an article titled "How We Fight Scams and Fraud on TikTok," and said the company uses a combination of automated and human systems to enforce its rules, regularly reviewing and strengthening those systems.
Meta spokesperson Daniel Roberts defended the company’s record, saying Meta has significantly increased its investment in fighting scams since Leathern’s departure. "We aggressively fight scams on our platforms, and as scammers have grown in sophistication in recent years, so have our efforts," Roberts said in an emailed statement. "In fact, since this former employee left Meta a half-decade ago, we have expanded our multi-layered approach to combatting scams by launching global awareness campaigns that help people spot scams, collaborating with cross-industry partners to disrupt these networks, and rolling out facial recognition technology to detect and remove celeb-bait ads." Roberts said Meta has seen a more than 50 percent decline in user reports of scam ads since the summer of 2024 and has removed more than 134 million scam ads this year. Without independent verification, though, those figures are difficult to contextualize or assess.
Meanwhile, Meta continues to face legal and public scrutiny. Australian billionaire Andrew Forrest is suing Meta in California, alleging that the company’s automated ad systems helped investment scammers place ads impersonating him. In a court filing in that case, Meta itself disclosed that it had hosted approximately 230,000 scam ads featuring Forrest’s likeness since 2019, a figure that sits uneasily beside its claims of effective prevention. And an October report from the Tech Transparency Project found that Meta recently earned at least $49 million from scam advertisers who frequently used deepfakes of public figures such as Donald Trump, Elon Musk, and Alexandria Ocasio-Cortez.
Leathern offered a provocative theory for why scam ads remain so pervasive: companies might hesitate to police them aggressively for fear that "too much good revenue will get flushed out if they are more aggressive about getting rid of the bad." That would put platform profitability in direct tension with user protection. Roberts disagreed. "We fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t either," he said, adding, "That’s why we’re always looking for new ways to stop them and take them down."
CollectiveMetrics.org’s survey also shows that consumers believe both digital platforms and governments bear significant responsibility for preventing scam ads. Yet only 36 percent of respondents felt that digital platforms were doing a very or somewhat good job fighting deepfakes and scam ads. "Consumers in the US definitely expect both tech companies and the government to help protect them from the potential negative effects of deepfakes," Leathern said, pointing to the gap between public expectation and perceived platform performance. "And also they don’t feel like platforms are doing a great job yet in terms of preventing scams and deepfakes."
The call for government intervention is strongest among older respondents. Just under 50 percent of those aged 18 to 54 considered it "very important" for the government to enact laws against deepfake ads, compared with 65 percent of those over 55. Similarly, 67 percent of respondents 55 and older deemed it "very important for online platforms to prevent fraudulent ads," versus 55 percent of those 54 and under. "I think the older users are disproportionately getting targeted by scams and problematic offers," Leathern speculated.
Even though the survey suggests TikTok and Meta are perceived as the worst at preventing deepfake scam ads, Leathern returned to the fundamental problem: there is no reliable, accessible data with which to judge how the platforms actually perform. "Let’s have some independent third parties be able to look at whether you have more fraud and scams than YouTube does," he challenged. Leathern, who worked on privacy products at Google from 2021 to 2023, added, "Because, look, I’ve worked at both Google and at Meta, and people tell me all the time, the ads on Google ads are terrible. I’d love to have that conversation with real data."
The core challenge is that researchers, governments, and other third parties currently cannot conduct comprehensive, independent assessments of platform performance. Even the European Union’s Digital Services Act (DSA), a landmark regulation requiring major platforms to report more data, has yet to yield the granular information needed for large-scale audits of ads and advertisers. While he credits the DSA’s intentions, Leathern believes the legislation needs refinement: "I think that they aren’t necessarily requiring the right metrics to be surfaced or the right information to be provided to the public. So I think those laws need to evolve."
Leathern envisions platforms treating scam prevention as a competitive advantage, actively investing in new features and systems to protect their users. He recently proposed one concrete measure: platforms should proactively notify users who clicked on an ad that was later identified and removed for violating policies against scams and fraud. "These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action," he explained.
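Leathern did not spell out an implementation, but the mechanics are straightforward to sketch. The following is a minimal illustration under invented assumptions: the in-memory click log, the `removed_scam_ads` mapping, and the `notify_user` function are hypothetical stand-ins, not any platform’s real data or API. It joins ad clicks against ads later removed for fraud and alerts affected users inside his "window to take action."

```python
from datetime import datetime, timedelta

# Hypothetical data: ad-click logs and ads later removed for scam/fraud
# policy violations. A real platform would read these from internal systems.
click_log = [
    {"user_id": "u1", "ad_id": "ad_42", "clicked_at": datetime(2025, 11, 1)},
    {"user_id": "u2", "ad_id": "ad_42", "clicked_at": datetime(2025, 11, 2)},
    {"user_id": "u3", "ad_id": "ad_77", "clicked_at": datetime(2025, 11, 3)},
]
removed_scam_ads = {"ad_42": datetime(2025, 11, 4)}  # ad_id -> removal time

NOTIFY_WINDOW = timedelta(days=30)  # only alert on reasonably recent clicks

def notify_user(user_id: str, ad_id: str) -> None:
    # Placeholder for a real notification channel (in-app alert, email).
    print(f"Warning sent to {user_id}: ad {ad_id} was removed for fraud.")

# Alert every user whose click preceded the ad's removal within the window.
for click in click_log:
    removed_at = removed_scam_ads.get(click["ad_id"])
    if removed_at and removed_at - click["clicked_at"] <= NOTIFY_WINDOW:
        notify_user(click["user_id"], click["ad_id"])
```

The point of the design is timing: because victims typically hand over money days after the first click, a warning delivered at removal time can interrupt the scam before it completes.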
Beyond prevention and notification, Leathern advocates a more radical form of accountability: platforms should donate or otherwise disgorge any money earned from scam ads placed through their systems. Today, companies like Meta, Google, and TikTok remove fraudulent ads but keep the revenue they generated. "It certainly shouldn’t necessarily be enriching companies if there’s scammy ads being run," he argued. Disgorged funds, he suggests, could go to nonprofits that teach people how to recognize and avoid scams, or to support victims of online fraud. "There’s lots that could be done with funds that come from these bad guys," he concluded.