New State Legislative Efforts to Stem the Tide of AI-Generated Election Disinformation

by Voting Rights Lab

March 26, 2024

June 18 Update: Legislation aimed at regulating the use of AI and deepfakes for political purposes – including materials directly relevant to our elections – has continued to proliferate since we first published this Hot Policy Take in March. To keep up with that growth, we’ve updated our analysis below.

The rapid growth in the frequency and sophistication of AI-generated content is well-documented – as is the proliferation of AI-generated, election-related mis- and disinformation both domestically and abroad. In 2023, audio recordings featuring false AI-generated conversations rocked Slovakia just two days before its Election Day. Here in the United States, robocalls featuring a false AI-generated voice impersonating President Biden reached thousands of New Hampshire voters ahead of the state’s February primary.

As the first American presidential election year in the generative AI era, 2024 inevitably will serve as a test case for both the spread of AI-generated election disinformation and the efficacy of states’ various approaches to addressing it. This challenge is complicated by platforms’ variable willingness to assist in the detection and removal of fraudulent content, the serious difficulty in detecting the source of certain content, and of course the interplay of state laws with constitutional protections and federal laws concerning speech and online content.

But with this challenge comes an opportunity to identify best practices for mitigating a problem that will likely escalate and evolve over time. Indeed, lawmakers throughout the country have moved quickly to propose legislation to address this emerging threat. As of June 2024, our team has tracked 118 bills in 42 state legislatures containing provisions intended to regulate the potential for AI to produce election disinformation.

While some legislative efforts aim to provide transparency around AI-generated content, others seek to penalize those who intentionally use AI to mislead voters. In this month’s Hot Policy Take, we examine three states where such legislation has advanced toward ultimate passage and implementation: Wisconsin, Florida, and Arizona.

Wisconsin: Mandated Disclaimers, $1,000 Fine

Wisconsin’s often intractable legislature worked uncharacteristically quickly to pass a bill addressing the need to identify AI-generated material. A.B. 664 was introduced in November 2023 and quickly made its way through both chambers. It was signed by Governor Tony Evers on March 21.

Wisconsin’s law takes one of the simpler, least restrictive approaches to AI-generated content. The law defines “synthetic media” to include any audio or video content produced in whole or in part by generative AI. Certain political campaign-affiliated entities already regulated by Wisconsin law are required to add a disclaimer noting the use of generative AI in any covered content they release. Failure to comply with the requirement is punishable by a $1,000 fine for each violation. Of note, the law does not address the threat of AI-generated content from non-campaign-affiliated entities.

Florida: Mandated Disclaimers, Criminal Misdemeanors

Florida’s legislature took up the issue of generative AI early in the 2024 session, passing H.B. 919 prior to its March adjournment; Governor Ron DeSantis signed the bill into law in April.

Unlike Wisconsin’s bill, however, H.B. 919 underwent significant redrafting and amending as it worked its way through the legislature – highlighting the complexity of the novel questions lawmakers are currently grappling with.

Florida’s law requires specific disclaimers for AI-generated products of a certain size and/or length. Failure to include the required printed, audio, or visual disclaimer is a criminal misdemeanor punishable by up to one year of incarceration.

Arizona: Limited Mandated Disclaimers, Civil Cause of Action, Criminal Felonies, First Amendment Exceptions

Arizona’s legislature continues to work on legislation to combat AI-generated disinformation. Governor Katie Hobbs signed two bills, S.B. 1359 and H.B. 2394, into law in May.

Unlike the Wisconsin and Florida laws, the Arizona laws limit their application specifically to digital impersonation of a candidate or elected official through synthetic media (similar to the issue in Slovakia). Additionally, S.B. 1359’s disclaimer requirements only apply during the 90 days preceding an election.

Both Arizona laws provide a more extensive remedial scheme than those in Wisconsin and Florida. S.B. 1359 imposes civil penalties for failure to comply with its disclaimer requirements.

Rather than grounding its protections in a disclaimer requirement like the other laws discussed – and the vast majority of bills pending in states around the country – H.B. 2394 creates a civil cause of action that allows an aggrieved party to seek an injunction against the individual driving the creation and spread of the material, as well as monetary damages in certain circumstances.

These two Arizona laws also offer insight into how state legislation regulating AI-generated election content will interact with the First Amendment and other federal laws. The laws variously create exceptions for media outlets, satire or parody, internet providers covered by Section 230 of the federal Communications Decency Act, and those deemed “public figures” at the time of publication.

Conclusion

AI-generated election disinformation is a developing threat that’s already making its mark on the 2024 presidential election. Some states have worked quickly to get out ahead of the issue and keep up with the threat of bad actors eroding confidence in our elections with synthetic media. Wisconsin, Florida, and Arizona exemplify what these early efforts to contain AI-generated election disinformation look like.

But this is a story that’s only begun to be written. As AI technologies continue to develop, states and election offices will be forced to adapt in response to an ever-evolving threat. The 2024 federal election will be an important proving ground for how AI-generated election falsehoods take root, and how effective state policies are in fighting back.

We will be watching closely to see which state bills addressing AI-generated election disinformation are implemented and tested in the 2024 elections. As always, you can track election-related bills in all 50 states and D.C. with our State Voting Rights Tracker.
