Scam Ads Are Flooding Social Media. These Former Meta Staffers Have a Plan

The digital landscape is increasingly marred by a relentless deluge of scam advertisements, a pervasive problem that exploits trust and siphons billions from unsuspecting victims worldwide. At the heart of this escalating crisis, two former senior Meta executives, Rob Leathern and Rob Goldman, are stepping forward with a comprehensive plan to inject much-needed transparency into the opaque world of online advertising through their new nonprofit, CollectiveMetrics.org. Their initiative comes at a critical juncture, as the sophistication of scam tactics, fueled by artificial intelligence and deepfake technology, continues to outpace the defenses of major social media platforms.

Rob Leathern, once the public face of Facebook’s (now Meta’s) fight against fraudulent advertising, vividly recalls the early battles. In 2019, when Dutch TV mogul John de Mol initiated legal action against Facebook for its alleged failure to curb scammers exploiting his image, Leathern was dispatched to Amsterdam. He confronted the media, acknowledging the formidable challenge: "The people who push these kinds of ads are persistent, they are well funded, and they are constantly evolving their deceptive tactics to get around our systems," he told Reuters at the time. During his four-year tenure, Leathern led the business integrity unit, a team dedicated to safeguarding Meta’s ad products from malicious actors. His responsibilities extended to spearheading crucial transparency efforts, including the groundbreaking Meta Ad Library—the industry’s first free and searchable repository for digital ads—and implementing identity verification for political advertisers.

However, Leathern’s perspective shifted dramatically after his departure from Meta at the close of 2020. From an external vantage point, he watched with growing alarm as criminals swiftly adapted, deploying sophisticated deepfakes and leveraging artificial intelligence to craft increasingly persuasive and insidious scam ads. He observed a concerning stagnation in the platforms’ commitment, noting a failure to invest adequately in the teams and technological advancements necessary to combat these evolving exploitative schemes. "The technology and the progress has stagnated the last five years," Leathern lamented in a recent interview. He added, "I also feel like we just don’t really know how bad it’s gotten or what the current state is. We don’t have objective ways of knowing." This lack of objective measurement became a driving force behind his next venture.

Driven by a shared conviction that the problem demands a new approach, Leathern joined forces with Rob Goldman, Meta’s former vice president of ads. Together, they launched CollectiveMetrics.org, a nonprofit organization dedicated to fostering greater transparency in digital advertising to combat deceptive practices effectively. The core mission of CollectiveMetrics is to leverage robust data and analytical insights to quantify critical aspects, such as the actual prevalence of online scam ads. By doing so, they aim to peel back the layers of secrecy surrounding the incredibly opaque ad systems that generate hundreds of billions of dollars in revenue for tech giants like Meta.

Their endeavor is particularly urgent given the staggering global impact of scams. The Global Anti-Scam Alliance (GASA), whose advisory board includes leaders from Meta, Google, and other major platforms, estimates that victims collectively lost more than one trillion dollars last year alone. GASA’s 2025 Global State of Scams report paints a grim picture, revealing that nearly a quarter (23 percent) of individuals have fallen victim to a scam and lost money in the process. Compounding the tragedy, the report highlights that many victims refrain from reporting scams out of shame or because they don’t know where to turn. Among those who did report, a disheartening statistic emerged: more than a third indicated that "no action was taken by the platform after reporting it."

Leathern stresses that the precise scale of scam ads on platforms such as Facebook and YouTube remains unknowable, primarily because these companies consistently withhold access to their internal data for independent scrutiny. "I want there to be more transparency. I want third parties, researchers, academics, nonprofits, whoever, to be able to actually assess how good of a job these platforms are doing at stopping scams and fraud," Leathern asserted. "We’d like to move to actual measurement of the problem and help foster an understanding." This lack of verifiable data makes it challenging for external bodies to hold platforms accountable for their stated efforts.

As a foundational step, CollectiveMetrics commissioned an online survey of 1,000 American adults to gauge public perception of platforms’ efforts against deepfakes and scam ads. The results underscored a profound dissatisfaction among consumers. Nearly half of respondents (47 percent) rated TikTok’s performance as "poor" or "very poor," marking it as the worst-performing platform among those surveyed. Facebook and Instagram, both Meta products, followed closely, with 38 percent and 33 percent of respondents, respectively, criticizing their efforts. The skepticism was even more pronounced among older demographics; 61 percent of individuals over 55 expressed negative views of TikTok, while 47 percent and 43 percent felt the same about Facebook and Instagram. Leathern interpreted these consistently low scores as indicative of a pervasive negative perception regarding the companies’ anti-scam initiatives. "People seem quite more negative than I would have expected," he observed, attributing this decline to a potential "loss of institutional knowledge" within these companies and a glaring absence of effective accountability mechanisms. (It should be noted that Leathern’s wife is currently employed in product marketing at Meta.)

In response to these criticisms, platforms offered their perspectives. Melanie Bosselait, a spokesperson for TikTok, reiterated via email that the company’s Community Guidelines explicitly forbid "attempts to scam, trick or defraud people." She highlighted TikTok’s commitment to user education, citing resources like "How We Fight Scams and Fraud on TikTok," and emphasized their use of a hybrid system of automated and human review processes, which are regularly enhanced.

Daniel Roberts, a spokesperson for Meta, firmly asserted that the company has significantly increased its investment in combating scams since Leathern’s departure. "We aggressively fight scams on our platforms, and as scammers have grown in sophistication in recent years, so have our efforts," Roberts stated. He detailed Meta’s expanded multi-layered approach, which now includes global awareness campaigns, strategic collaborations with cross-industry partners, and the deployment of facial recognition technology specifically designed to detect and remove "celeb-bait" ads. Roberts further claimed a notable achievement, reporting a more than 50 percent decline in user-reported scam ads since the summer of 2024 and the removal of over 134 million scam ads this year alone.

However, these claims of progress are juxtaposed against ongoing legal challenges and reports that suggest a more complex reality. Meta is currently embroiled in a lawsuit in California, brought by Australian billionaire Andrew Forrest, who alleges that Meta’s automated ad systems directly facilitated investment scammers in impersonating him. In court filings, Meta itself disclosed that it had hosted approximately 230,000 scam ads featuring Forrest’s likeness since 2019. Furthermore, an October report from the Tech Transparency Project uncovered that Meta recently garnered at least $49 million from scam advertisers who frequently utilized deepfakes of prominent public figures such as Donald Trump, Elon Musk, and Alexandria Ocasio-Cortez. These revelations fuel Leathern’s hypothesis that a potential reason for the persistence of scam ads is the platforms’ underlying concern that "too much good revenue will get flushed out if they are more aggressive about getting rid of the bad." Roberts vehemently disagreed with this assessment, asserting, "We fight fraud and scams because people on our platforms don’t want this content, legitimate advertisers don’t want it, and we don’t either. That’s why we’re always looking for new ways to stop them and take them down."

The CollectiveMetrics.org survey data clearly indicates that consumers overwhelmingly believe that both digital platforms and governments bear a significant responsibility in preventing scam ads. Yet, only 36 percent of respondents expressed confidence that digital platforms are doing a "very" or "somewhat good" job in fighting deepfakes and scam ads. "Consumers in the US definitely expect both tech companies and the government to help protect them from the potential negative effects of deepfakes," Leathern noted, underscoring the gap between expectation and perceived performance.

The public’s desire for governmental intervention is particularly strong. Just under 50 percent of respondents aged 18 to 54 considered it "very important" for the government to enact laws specifically targeting deepfake ads. This sentiment was even more pronounced among older demographics, with 65 percent of those over 55 strongly advocating for such legislative action. Similarly, 67 percent of respondents aged 55 and older deemed it "very important for online platforms to prevent fraudulent ads," compared to 55 percent of those aged 54 and under. Leathern posits that "older users are disproportionately getting targeted by scams and problematic offers," which explains their heightened concern and demand for protection.

Despite the survey’s findings highlighting public dissatisfaction with TikTok and Meta, Leathern reiterated the critical need for verifiable, independent data to truly understand the performance of these platforms. "Let’s have some independent third parties be able to look at whether you have more fraud and scams than YouTube does. Because, look, I’ve worked at both Google and at Meta, and people tell me all the time, the ads on Google ads are terrible," said Leathern, who also worked on privacy products at Google from 2021 to 2023. "I’d love to have that conversation with real data." The fundamental challenge lies in the current impossibility for researchers, governments, and other third parties to comprehensively assess platform performance. Even the Digital Services Act (DSA) in the European Union, a landmark regulation mandating increased data transparency and reporting from major platforms, has yet to yield the specific kind of data required for large-scale audits of ads and advertisers. While Leathern acknowledges the DSA’s "super well intentioned" goals, he believes that "they aren’t necessarily requiring the right metrics to be surfaced or the right information to be provided to the public. So I think those laws need to evolve."

Leathern envisions an ideal scenario where platforms recognize robust scam prevention not merely as a regulatory burden but as a significant competitive advantage. By proactively investing in cutting-edge features and systems, platforms could genuinely protect their users and build greater trust. He recently proposed a concrete step: platforms should notify users who have clicked on an ad that was subsequently removed for violating policies against scams and fraud. "These scammers aren’t getting people’s money on day one, typically. So there’s a window to take action," he explained, emphasizing the potential for timely intervention.

Beyond notification, Leathern advocates for a more radical shift in financial accountability. He argues that platforms should be required to donate or otherwise disgorge the money earned from scam ads placed through their systems. Currently, companies like Meta, Google, and TikTok typically remove fraudulent ads but retain the revenue generated from them. "It certainly shouldn’t necessarily be enriching companies if there’s scammy ads being run," he contended. He proposed that such ill-gotten gains could be repurposed, for instance, to fund nonprofit organizations dedicated to educating the public on how to identify and avoid various types of scams. "There’s lots that could be done with funds that come from these bad guys," Leathern concluded, highlighting the potential for these funds to become a force for good rather than a source of profit for platforms that inadvertently facilitate fraud. The fight against the flood of scam ads is not just about technology; it’s about transparency, accountability, and a renewed commitment to user safety.
