AI-Generated Ads Reshape Campaigns, Raising Concerns Over Misinformation


By Leyah Jackson

As election season ramps up, AI-generated ads are emerging as a common tool in campaigns, raising concerns about misinformation and voter trust. AI has been used to enhance speech, turn politicians into cartoons, and even mimic a rival politician’s voice. In the Massachusetts gubernatorial race, the campaign of Republican primary candidate Brian Shortsleeve created an AI-generated radio ad mimicking Democratic Gov. Maura Healey’s voice to say things she had never actually said. 

The clip does not contain an explicit AI disclaimer informing listeners that Gov. Healey is not actually speaking. Instead, the ad includes a caption stating that it represents what her radio ads would sound like if she were honest, according to NBC.

Shortsleeve’s campaign has also released other AI-generated content featuring Healey, including one piece that depicts her as the Grinch, also without an explicit AI disclaimer.

“Frankly, I don’t think politicians and campaigns should use AI at all. A lot of times, they end up taking on fewer interns and staffers, putting people out of a job. Managing misinformation is difficult enough even without AI,” said fourth-year political science major Phaedra Hyche.

According to media buying and marketing company Media Culture, producing ads with the help of AI can be efficient and cost-effective. The cost can range anywhere from around $1,000 to hundreds of thousands of dollars. For candidates with smaller budgets, AI-generated ads and imagery eliminate some of the financial burden.

“Anytime generative AI is used to create messaging or imagery that is misleading, I hope we can all agree that’s a negative thing,” said Mark Jablonowski, the CEO of DSPolitical, a progressive advertising firm, in an interview with NBC.

While efficiency and low cost are clear advantages for a campaign, AI-generated content runs the risk of spreading misinformation. Videos imitating politicians can be misleading, especially if they are not supplemented with proper disclaimers. Voter trust is crucial in midterm elections. If left unchecked, AI can spread false information that cannot be taken back.

“AI is just a quick and cheap way to create lasting harm to the very communities our elected officials are supposed to protect,” Hyche continued.

AI usage in political ads is heavily regulated at the state level. Twenty-six states have laws that regulate the use of political deepfakes, or deceptively realistic video and audio. The state laws either require that the ads include a disclosure about deepfakes or prohibit their use within a certain time frame before an election, according to the National Conference of State Legislatures. 

At the federal level, AI regulation has not yet been signed into law. The REAL Political Advertisements Act, which would require AI disclosures on political ads, did not make it to a vote after being introduced in 2023.

Leyah Jackson

First-year journalism and communication major from Cleveland, Ohio. I enjoy writing about politics and entertainment. I am a writer for WHOV and a broadcaster for WHOV 88.1. 

