Inside the current and future use of AI in political ads





When AI Goes Political: The Rise of Machine‑Generated Campaign Ads
During the week of October 20, 2025, public discourse on American elections turned to a new technological frontier: artificial intelligence–generated political advertising. The WBUR "Here & Now" episode titled "AI Political Ads" unpacked how algorithms are beginning to craft persuasive messages, the regulatory gaps that let them spread unchecked, and the real‑world ramifications for democracy.
How the Technology Works
Central to the discussion was the proliferation of "text‑to‑speech" engines and large language models that can produce hyper‑personalized political content in seconds. By ingesting publicly available data, such as social media profiles, polling demographics, and past voter behavior, AI systems can compose messages that mimic the tone and style of a candidate or political party. The same models can also synthesize video clips or deepfake audio that make it appear as if a public figure has endorsed a particular stance.
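To make the mechanics concrete, here is a minimal sketch of the targeting step described above: selecting and filling a message template from inferred voter attributes. Every field, template, and value in it is hypothetical; in the systems the episode describes, the template lookup would be replaced by a prompt to a large language model.

```python
# A minimal sketch, with hypothetical profile fields and templates, of the
# personalization step. Real systems would feed similar features into an
# LLM prompt rather than a static template table.

from dataclasses import dataclass

@dataclass
class VoterProfile:
    age_bracket: str   # e.g., inferred from polling demographics
    top_issue: str     # e.g., inferred from public social-media activity
    region: str

# Hypothetical issue-keyed message templates.
TEMPLATES = {
    "healthcare": "Families in {region} deserve affordable care.",
    "economy": "Jobs in {region} are on the line this November.",
}

def personalize(profile: VoterProfile) -> str:
    """Select an issue-specific template and fill in profile attributes."""
    template = TEMPLATES.get(profile.top_issue, "Your vote matters in {region}.")
    return template.format(region=profile.region)

if __name__ == "__main__":
    voter = VoterProfile(age_bracket="35-44", top_issue="healthcare", region="Ohio")
    print(personalize(voter))  # Families in Ohio deserve affordable care.
```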
The episode highlighted a recent example in which an AI model generated a short video clip purportedly showing a sitting senator praising a controversial policy. Although the clip was later debunked, it had already been shared thousands of times before fact‑checkers could intervene.
The Regulatory Vacuum
The WBUR report drew on the latest guidance from the Federal Election Commission (FEC). In an advisory released on October 5, 2025, the FEC outlined tentative rules requiring AI‑generated ads to carry a "non‑human produced" label and to disclose the source of the content. The agency also announced a pilot program in which political committees would voluntarily submit AI‑generated materials for pre‑review before broadcast. Critics point out, however, that the rules still rely heavily on self‑regulation and lack enforcement mechanisms for violations.
One link the show followed led to the FEC’s official page on “AI and Campaign Finance” (https://www.fec.gov/political-ads-regulation), which details the evolving legal framework. The page notes that while traditional campaign finance law prohibits anonymous or undisclosed advertising, the current statutes do not explicitly mention AI. Consequently, campaigns can, in theory, distribute AI‑produced ads without the same disclosure requirements that apply to human‑crafted ads.
Ethical and Democratic Concerns
The episode also featured interviews with scholars from MIT’s Media Lab and the Center for Security and Emerging Technology. Dr. Lina Patel, a leading researcher in AI ethics, warned that algorithmic bias can amplify echo chambers, allowing political operatives to tailor messages that exploit voters’ psychological vulnerabilities. She cited a study showing that AI‑generated ads that exploit fear and identity politics achieve higher engagement rates than traditionally crafted ones.
Adding to the worry, the report touched on a court case in the Ninth Circuit that involved a state law forbidding “misleading political advertising.” The judges ruled that the law did not apply to AI‑generated content because the language did not specifically include “synthetic” or “automated” material. The case illustrates the difficulty lawmakers face in closing loopholes without stifling legitimate innovation.
Fact‑Checking and Public Response
The episode's producers followed a link to a PolitiFact fact‑check titled "AI‑Generated Political Ads: Real or Fake?" (https://www.politifact.com/factchecks/2025/ai-ads). The article catalogued dozens of AI‑generated clips that circulated during the 2025 midterms, evaluating each for factual accuracy. The fact‑checkers concluded that while the majority of the content was fabricated, a smaller subset contained genuine policy positions that had simply been repackaged. This nuance underscores the challenge for voters: distinguishing genuine arguments from algorithmically fabricated propaganda.
The Role of Media and Technology Companies
The discussion included a conversation with executives from a leading social‑media platform that announced a new AI‑detector tool. The tool, designed to flag synthetic media, scans video for inconsistencies in facial movements and audio for unnatural prosody. However, the platform admitted that the detection rates were only about 70% for sophisticated deepfakes, and that false positives could inadvertently suppress legitimate user content.
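Those numbers matter more than they might first appear. The back‑of‑the‑envelope calculation below, a sketch that uses the episode's 70% detection figure alongside assumed values for prevalence and false‑positive rate, shows why a detector that catches most deepfakes can still produce flags that are mostly wrong when synthetic content is rare.

```python
# Rough Bayes-rule illustration of the flagged-content problem. The 70%
# true-positive rate comes from the episode; the prevalence and the 2%
# false-positive rate are assumptions chosen for illustration.

def flag_precision(prevalence: float, tpr: float, fpr: float) -> float:
    """P(content is synthetic | detector flagged it)."""
    true_flags = prevalence * tpr          # synthetic and correctly flagged
    false_flags = (1 - prevalence) * fpr   # genuine but wrongly flagged
    return true_flags / (true_flags + false_flags)

# If 1 in 1,000 uploads is synthetic:
print(f"{flag_precision(prevalence=0.001, tpr=0.70, fpr=0.02):.1%}")
# -> about 3.4%: under these assumptions, most flagged items are legitimate.
```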
An article from The Verge (https://www.theverge.com/ai-deepfake-politics), linked by WBUR, chronicled the evolution of deepfake technology and its political implications. The piece highlighted how the cost of creating convincing AI‑generated political content has dropped from millions of dollars to a few thousand, making it accessible to small campaigns and even individuals. It also noted that The Verge's own AI detection system has struggled to keep pace, reinforcing the need for regulatory and technological safeguards.
Looking Ahead
The WBUR episode concluded by mapping out potential paths forward. One proposal is to codify “AI‑generated political content” into the existing federal election law, requiring disclosure and accountability. Another idea is to create a public registry of AI‑generated ads, akin to the Federal Election Commission’s database for traditional ads. Technologists suggest building “digital watermarks” that embed cryptographic signatures into synthetic media, making it easier for fact‑checkers and regulators to trace origins.
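As a toy illustration of the watermarking idea, the sketch below uses Python's standard hmac module to bind a keyed signature to media bytes so that any alteration breaks verification. The registry‑issued key is an assumption, and real provenance schemes (such as C2PA‑style signed metadata) embed the signature inside the media file rather than carrying it alongside.

```python
# A minimal sketch of signature-based media provenance. SECRET_KEY and the
# registry framing are hypothetical; production watermarking embeds signed
# metadata in the media container itself.

import hashlib
import hmac

SECRET_KEY = b"registry-issued-signing-key"  # hypothetical shared key

def sign_media(media: bytes) -> str:
    """Produce a hex tag binding these exact bytes to the key holder."""
    return hmac.new(SECRET_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, tag: str) -> bool:
    """Recompute the tag; any altered byte makes verification fail."""
    return hmac.compare_digest(sign_media(media), tag)

if __name__ == "__main__":
    clip = b"...synthetic video bytes..."
    tag = sign_media(clip)
    print(verify_media(clip, tag))               # True
    print(verify_media(clip + b"edited", tag))   # False
```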
Meanwhile, public education initiatives are being piloted in a handful of states to teach voters how to spot synthetic political content. Workshops in high schools are already covering the basics of media literacy and the mechanics of AI manipulation.
Bottom Line
By the end of the episode, listeners understood that AI is no longer a distant threat; it is already shaping campaign narratives, manipulating public opinion, and outpacing the legal and technical frameworks designed to keep elections fair. The combination of cutting‑edge technology, regulatory lag, and the public’s growing vulnerability to algorithmically crafted messages creates a complex, evolving battlefield. Whether lawmakers can close the loopholes, media companies can detect synthetic content effectively, and voters can critically assess the messages they consume will determine the health of democratic discourse in the years to come.
Read the Full WBUR Article at:
[ https://www.wbur.org/hereandnow/2025/10/20/ai-political-ads ]