The FCC is moving forward on a plan to require broadcasters to identify political ads that include AI content. It’s a priority for the Democratic chairwoman; however, the senior Republican on the commission has forcefully voiced his opposition.
The FCC said Thursday that it is moving forward with a proposal to implement new AI transparency requirements for radio and TV broadcasters. Specifically, it voted along party lines to adopt a notice of proposed rulemaking. The NPRM would require broadcasters to provide an on-air announcement for all political ads that include AI-generated content.
The commission also proposes requiring licensees and regulatees to include a notice in their online files disclosing when a political ad contains AI-generated content.
Beyond these disclosure requirements, the FCC says it is not proposing to ban or otherwise restrict the use of AI-generated content in political ads.
The proposal also states that all radio and television broadcast stations that air political ads would be required to inquire whether scheduled ads contain AI-generated content.
Broadcasters would be required to use standardized language for the on-air disclosure. “For radio ads, we propose that broadcasters provide an on-air announcement orally in a voice that is clear, conspicuous and at a speed that is understandable, stating that: ‘The following message contains information generated in whole or in part by artificial intelligence,’” said the FCC.
In his dissent, Commissioner Brendan Carr, the senior Republican on the commission, said the majority’s attempt to fundamentally alter the regulation of political speech just a short time before a national election is as misguided as it is unlawful. He noted how the FCC has stated it hopes to complete the rulemaking before election day in November.
“This is a recipe for chaos,” Carr wrote in his dissent. “Even if this rulemaking were completed with unprecedented haste, any new regulations would likely take effect after early voting already started. And the FCC can only muddy the waters.”
[Related: “Carr Condemns Rosenworcel’s Effort to Regulate AI-Generated Political Ads”]
The NPRM states that the presentation of political programming has long been considered an essential element of broadcasters’ obligation to serve the public interest, and notes that artificial intelligence has now become powerful enough to mimic human voices and create lifelike images. The commission says its proposal seeks to bring uniformity to a patchwork of state laws that govern AI and deepfake technology in elections.
The proposal also notes that AI technologies could provide a number of benefits for candidates. “The use of AI-generated content could help candidates and issue advertisers tailor their messages to specific communities. For example, a campaign could use AI tools to generate messages targeted to the unique concerns of certain demographics or to produce content in the candidate’s voice in multiple languages,” the NPRM states.
The FCC acknowledges it has yet to adopt a specific definition of “artificial intelligence” and plans to seek comment on how to define the term. However, it proposes defining “AI-generated content” for purposes of this proceeding as “an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.”
“Today the FCC takes a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections. We propose that political advertisements that run on television and radio should disclose whether AI is being used,” Chairwoman Jessica Rosenworcel said in a statement. “There’s too much potential for AI to manipulate voices and images in political advertising to do nothing. If a candidate or issue campaign used AI to create an ad, the public has a right to know.”
Of particular concern to the commission is the use of AI-generated “deepfakes”: altered images, videos, or audio recordings that depict people doing or saying things they did not actually do or say, or events that did not actually occur.
Chairwoman Rosenworcel in her statement cited several cases in which AI-generated material may have been used to influence voters. “This year in the primary election in New Hampshire, thousands of voters got an AI-generated robocall impersonating President Biden that told them not to vote. This past summer, the campaign of [Florida] Governor DeSantis was flagged for circulating fake AI-altered images of former President Trump,” she said.
The FCC has tentatively concluded that the proposed on-air and political file disclosures would not violate the First Amendment rights of the candidates or other entities that sponsor political ads.
Political ad spending is expected to set a record this cycle, with expenditures projected to reach $10.7 billion, according to AdImpact.
This potentially sets up a pitched battle between competing government agencies, each seeking to regulate the use of AI in political advertising. The Federal Election Commission is also considering a petition for rulemaking that would clarify campaign law to prohibit deliberately deceptive AI-generated content in campaign ads.
“While the FEC can regulate AI use in online advertisements for federal candidates, the FCC can focus on the areas where the FEC is not able to act. The FEC does not oversee television and radio stations,” the FCC said in a statement outlining today’s proposal.
The FCC says that under the law, FEC authority over campaigns is limited to federal political candidates and does not extend to independent issue campaigns or state and local elections.
FEC Chair Sean Cooksey said earlier this year that the FCC intervening with its own proposal would “sow chaos” and “invade the FEC’s jurisdiction.”
The comment period for the Notice of Proposed Rulemaking (MB Docket No. 24-211) will open 30 days after its publication in the Federal Register. Reply comments will be due 45 days after the date of publication.
Read the NPRM at this link. Individual statements by the commissioners are at the end.