AI is quietly becoming a line item in campaign finance reports as political spending expands beyond ads into software subscriptions and synthetic media.
Federal and state filings, along with recent campaign activity in Texas, show candidates and political groups are not only using artificial intelligence tools but also paying for them — in some cases in amounts small enough to signal early adoption, and in others through large-scale outside spending tied to the AI industry itself.
Receipts from a now-suspended 2026 Senate bid by Democrat Colin Allred show $191.88 spent on Otter.ai transcription services, according to a filing with the Federal Election Commission. While modest, the expense highlights how AI tools are entering routine campaign operations such as note-taking, communications, and research.
Campaigns are also experimenting with voter-facing AI. Allred’s House campaign website, for example, has featured an AI chatbot designed to answer voter questions.
At the same time, outside spending tied to AI companies is rising sharply. More than $2.8 million has flowed into Texas congressional races from AI-linked super PACs, according to campaign finance data reported earlier this month. Those groups are backed in part by technology industry figures and have supported candidates across multiple races, per reporting from the Texas Tribune.
Some of that spending is more direct. Texas Defense PAC, a pro-gambling organization backed by Las Vegas Sands, has reported $2,614 in expenditures to Anthropic, according to Transparency USA data. Anthropic is the company behind the Claude chatbot and a range of other AI products.
Outside Texas, a congressional campaign committee supporting Democrat Melissa Chaudhry in Washington reported spending $220.70 on OpenAI (ChatGPT) software, according to Federal Election Commission records.
From Software to Deepfakes: AI Now Powers Political Attack Ads
Beyond software subscriptions, AI is also reshaping campaign messaging itself.
One recent example came from Ken Paxton, who posted a campaign-style ad on April 7, 2026, criticizing John Cornyn. The ad depicted Cornyn's likeness lounging on a beach while Senate business went on without him.
NEW AD: President Trump’s agenda is on hold because John Cornyn decided to go on spring break.
It’s time to send John Cornyn on a permanent vacation. pic.twitter.com/dA4mKvvpQW
— Attorney General Ken Paxton (@KenPaxtonTX) April 7, 2026
The X account for Team Cornyn quickly responded with a similar line of attack.
Both Paxton and Cornyn have released attack ads that appear to incorporate AI elements.
Social media platforms have struggled to keep pace. When the National Republican Senatorial Committee posted an ad featuring Democratic Senate candidate James Talarico appearing to read past statements, the platform X added a community note stating: “The video is AI-generated (deepfake), not Talarico speaking. It uses his real past statements … narrated by synthetic voice/likeness with disclosure watermark.”
However, the note itself included a disclaimer that it was “Proposed by an experimental AI contributor.”
James Talarico, in his own words: pic.twitter.com/lDlUoqBbP7
— Senate Republicans (@NRSC) March 11, 2026
State Law Lags Behind AI-Powered Political Attacks
State law has struggled to keep up with the shift. Texas passed a law in 2019 restricting deceptive deepfakes in political advertising, but it applies only to state-level races, only within 30 days of an election, and requires proof of intent to deceive. Federal races, including the high-profile U.S. Senate contest, fall outside its scope, leaving a regulatory gap as AI-generated content proliferates.
Past controversies have involved subtler forms of misleading or false AI content in attack advertising — material that is less clearly parody or satire.
The Federal Communications Commission fined a political consultant $6 million in 2024 for an AI-voiced robocall that mimicked then-President Joe Biden and urged New Hampshire Democrats not to vote in the primary.
Yet some uses of AI, such as the NRSC ad targeting Talarico, are likely protected by the First Amendment, University of Houston law professor Seth Chandler told Fox 4. “I think those tweets are legal,” he said. “They are not falsifying what he said. They are using his likeness, but I do not believe that that is unlawful.”
The rise of AI in politics also echoes earlier debates about digital media literacy and satire. A 2024 report by The Dallas Express highlighted backlash after a news outlet “fact-checked” an obviously AI-generated image of then-Candidate Donald Trump as a professional football player, prompting widespread ridicule online over whether such content required formal debunking.
Now, the stakes appear higher. AI is not only shaping viral memes but also influencing campaign strategy, advertising, and spending, from multimillion-dollar super PAC investments down to sub-$200 software subscriptions.
As campaigns scale up their use of the technology, the growing presence of AI on campaign finance reports suggests it is becoming a standard and increasingly significant political expense.