AI Deepfake Laws by State: A 2026 Tracker for Creators and Platforms
At least 20 states have passed or introduced deepfake legislation. We track every law, what it requires, who it targets, and the penalties for violations.
Regulation & Compliance
Editorial Boundary: This article is editorial analysis, not legal, tax, financial, insurance, privacy, or platform-policy advice. Rules vary by jurisdiction, platform, account status, and business structure. Creators should confirm high-stakes decisions with a qualified professional.
The legal landscape around AI-generated non-consensual intimate imagery (NCII) has shifted dramatically in the past 18 months. As of March 2026, more than 20 states have enacted or introduced legislation specifically targeting deepfake pornography, and a federal bill is advancing through committee. For creators — both those who may be victimized by deepfakes and those who use AI tools in their content workflows — understanding this patchwork of laws is now essential.
This tracker covers enacted laws, pending legislation, and the federal landscape. We will update it as new laws pass.
Important note: Legislation moves quickly. Bill numbers, committee assignments, and enacted provisions change between sessions. Creators and legal professionals should verify current bill status through their state legislature's official website or the National Conference of State Legislatures (NCSL) deepfake legislation tracker before relying on any specific provision cited here.
The Federal Landscape
The DEFIANCE Act (S. 3696, 118th Congress), introduced by Senator Dick Durbin (D-IL) in 2024, would create a federal civil cause of action for victims of non-consensual AI-generated intimate imagery, allowing them to sue the creators and distributors of deepfake intimate images for damages. It passed the Senate unanimously in 2024 but stalled in the House.
The TAKE IT DOWN Act (S. 4569) was signed into law in 2025, making it a federal crime to publish non-consensual intimate images, including AI-generated deepfakes. The law requires platforms to remove flagged NCII content within 48 hours of receiving a valid takedown request. Violations carry penalties of up to two years in federal prison for individuals and potential FTC enforcement actions against platforms that fail to comply with removal obligations. Critically, the law covers both authentic NCII and AI-generated content, closing a gap that earlier federal revenge-porn proposals left open.
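For platform and trust-and-safety teams, the operational detail that matters most is the 48-hour clock. The sketch below is one illustrative way to track that deadline in code, assuming the platform records when a valid takedown request arrives; the field names and the Python structure are ours, not anything specified by the statute.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Removal window described above for the TAKE IT DOWN Act.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    content_id: str        # hypothetical internal identifier for the reported content
    received_at: datetime  # when the valid takedown request was received (UTC)

    @property
    def removal_deadline(self) -> datetime:
        # Deadline is simply receipt time plus the 48-hour window.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now > self.removal_deadline

# Example: a request received 50 hours ago is already past the window.
request = TakedownRequest(
    content_id="report-001",
    received_at=datetime.now(timezone.utc) - timedelta(hours=50),
)
print(request.removal_deadline.isoformat())
print(request.is_overdue())  # True
```

What counts as a "valid" request, and how the clock interacts with appeals or re-uploads, is a legal question outside the scope of this sketch.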
The NO FAKES Act (S. 2770) targets unauthorized AI replicas of a person's likeness or voice in digital media. It establishes a federal right protecting individuals against non-consensual digital replication and creates liability for platforms that host such content after receiving notice. As of March 2026, the bill has bipartisan support but has not yet reached a floor vote.
States With Enacted Deepfake Laws
California — AB 1856 and AB 2655 (2024)
California was among the first states to address deepfakes comprehensively. AB 1856 extended existing revenge porn protections (Cal. Civ. Code Section 1708.86) to cover AI-generated sexually explicit material. Victims can seek actual damages, statutory damages of up to $150,000, and attorney's fees.
AB 2655 requires large social media platforms (those with more than one million California users) to remove or label digitally altered content, including deepfakes, within 72 hours of receiving a report. Platforms that fail to comply face civil penalties of up to $100,000 per violation, enforced by the California Attorney General.
Jurisdiction note: These laws apply to content depicting California residents, regardless of where the content was created.
Texas — SB 1361 (2023) and SB 2137 (2025)
Texas was an early mover. SB 1361, enacted in 2023, made it a Class A misdemeanor (up to one year in jail and a $4,000 fine) to create non-consensual deepfake pornography. SB 2137, effective September 2025, elevated repeat offenses to a state jail felony (180 days to two years) and expanded the definition to cover AI-generated voice cloning used in sexual contexts.
Florida — HB 1 (2025)
Florida's omnibus social media and AI bill included provisions making the creation or distribution of non-consensual AI-generated intimate images a third-degree felony, punishable by up to five years in prison and a $5,000 fine. The law applies when the creator knew or should have known the depicted person did not consent.
Virginia — Section 18.2-386.2 (Amended 2024)
Virginia amended its existing revenge porn statute to explicitly include "falsely created videographic or still image" produced through AI or digital manipulation. Violations are a Class 1 misdemeanor (up to 12 months in jail, $2,500 fine), with repeat offenses elevated to a Class 6 felony (one to five years in prison).
Minnesota — SF 1394 (2023)
Minnesota's law is notable for its breadth. It covers the non-consensual dissemination of any "deep fake" depicting a real person in a sexual act, defined as media created or altered by AI to depict events that did not occur. It provides both a criminal penalty (gross misdemeanor, up to one year in jail) and a civil cause of action with statutory damages of $100,000 or actual damages, whichever is greater.
Georgia — SB 375 (2025)
Georgia enacted SB 375 in 2025, making it a felony to create or distribute AI-generated non-consensual intimate images when done with intent to harass, threaten, or defraud. Penalties include one to five years in prison. The law includes an explicit carve-out for satire and political speech, which has drawn criticism from victims' advocates who argue the carve-out is too broad.
Indiana — SB 209 (2024)
Indiana's law classifies non-consensual deepfake pornography as a Level 6 felony (six months to two and a half years in prison, up to $10,000 fine). The statute applies to any person who creates, distributes, or threatens to distribute such material.
Washington — HB 1999 (2024)
Washington's approach combines criminal penalties with platform liability. Creating non-consensual deepfake intimate images is a gross misdemeanor. Platforms that host such content and fail to remove it within 48 hours of a valid takedown request face civil liability of up to $50,000 per image, plus the victim's attorney's fees.
New York — S. 1042A (2025)
New York's deepfake law establishes both civil and criminal liability. Criminal creation or distribution of non-consensual intimate deepfakes is a Class A misdemeanor for first offenses and a Class E felony for repeat offenders. The civil provision allows victims to recover actual damages or $250,000 in statutory damages, whichever is greater, making it one of the highest statutory damage provisions in the country.
Illinois — SB 2123 (2025)
Illinois amended its existing Non-Consensual Dissemination of Private Sexual Images Act (740 ILCS 190) to include AI-generated content. The amendment preserves the existing penalty structure: civil liability with a minimum $10,000 statutory damages award, plus attorney's fees and injunctive relief.
States With Pending Legislation
As of March 2026, the following states have deepfake-related bills in active committee consideration:
New Jersey — A.B. 4387 would make non-consensual deepfake intimate imagery a third-degree crime (three to five years in prison). The bill includes platform notice-and-takedown requirements.
Pennsylvania — H.B. 2600 would classify deepfake NCII as a second-degree misdemeanor for first offenses, with felony escalation for repeat offenders.
Ohio — S.B. 287 would create both criminal and civil remedies, with a provision allowing victims to recover treble damages when the deepfake was created for commercial purposes.
Michigan — H.B. 5143 would amend existing revenge porn statutes to include AI-generated content and create a specific aggravated offense when the victim is a minor.
Arizona — S.B. 1238 would make non-consensual deepfake creation a Class 5 felony (six months to two and a half years in prison) and would require platforms to implement "reasonable" detection mechanisms.
Key Variations Creators Must Understand
Intent Requirements
Laws vary significantly on whether intent is an element of the offense. California's civil statute does not require proof of intent to harm — creating the content is sufficient. Texas requires knowledge that the depicted person did not consent. Georgia requires intent to "harass, threaten, or defraud." These distinctions matter enormously for enforcement.
Platform vs. Creator Liability
Some states (Washington, California) impose direct liability on platforms that fail to remove content after notice. Others (Texas, Indiana) focus exclusively on the person who creates or distributes the deepfake. Creators should understand that in states with platform liability, takedown mechanisms are more robust because platforms are incentivized to act quickly.
Scope of "Intimate Image"
Definitions vary. Some states limit coverage to depictions of sexual acts or exposed genitalia. Others, like Minnesota, use broader language covering any depiction that would be considered "sexual" in nature. New York's law covers "intimate parts" including any depiction that a reasonable person would consider sexual.
Civil vs. Criminal Remedies
Most recent laws provide both criminal penalties and civil causes of action. Civil remedies are often more practical for victims because they do not require a district attorney to decide to prosecute. States with strong civil provisions (California, Minnesota, New York, Illinois) give victims the most direct legal tools.
What This Means for Creators
If You Use AI Tools in Content Creation
Creators who use AI image generation, face-swapping filters, or AI editing tools should understand that these laws can apply even when the creator believes the use is harmless. Generating AI content that depicts a real, identifiable person in a sexual context without their explicit consent is now illegal in a growing number of states.
Best practice: Never use AI tools to generate intimate content depicting any real person without documented, written consent. "Real person" includes public figures, other creators, and individuals whose likeness appears in training data.
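One way to make that documentation easy to produce later is to keep a small, machine-readable consent record alongside the generated assets. The sketch below is a minimal illustration, assuming written consent has already been collected separately; every field name and value here is hypothetical and not drawn from any statute or contract template.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class ConsentRecord:
    depicted_person: str           # full name of the person depicted
    consent_document: str          # path or reference to the signed written consent
    signed_on: date                # date the consent was signed
    scope: str                     # what uses the consent covers
    expires_on: Optional[date] = None  # optional expiry, if the agreement sets one

# Hypothetical example record stored next to the generated assets.
record = ConsentRecord(
    depicted_person="Jane Example",
    consent_document="contracts/jane-example-2026-01-15.pdf",
    signed_on=date(2026, 1, 15),
    scope="Promotional images for one named project, Q1 2026",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

A record like this does not create consent; it only documents consent that was actually obtained, in a form that is easy to retrieve if a question ever arises.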
If You Are a Victim of Deepfake Content
Victims now have meaningful legal tools in most jurisdictions. Practical first steps:
Document the deepfake content with screenshots and URLs before it is removed (a minimal evidence-log sketch follows this list).
File a takedown request with the hosting platform under the TAKE IT DOWN Act; platforms must remove the content within 48 hours.
Report the content to the FBI's Internet Crime Complaint Center (IC3) if it crosses state lines.
Consult an attorney in the victim's home state to evaluate both criminal reporting and civil litigation options.
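For the documentation step specifically, a consistent log of what was captured and when can help later, whether the next step is a platform report or a lawsuit. The sketch below is one illustrative approach in Python, assuming screenshots are saved locally; the file names and fields are ours, and nothing here replaces platform reporting tools or an attorney's guidance on evidence preservation.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(url: str, screenshot_path: str,
                 log_file: str = "deepfake_evidence.jsonl") -> dict:
    """Append a record of a captured URL, its capture time, and the screenshot's hash."""
    screenshot_bytes = Path(screenshot_path).read_bytes()
    entry = {
        "url": url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "screenshot_file": screenshot_path,
        # Hashing the screenshot makes it possible to show later that the file was not altered.
        "sha256": hashlib.sha256(screenshot_bytes).hexdigest(),
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage:
# log_evidence("https://example.com/post/123", "captures/post-123.png")
```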
Statute of Limitations
Statutes of limitations vary. California's civil action has a three-year statute of limitations from discovery. Texas criminal charges must be filed within two years. New York's civil action has a one-year statute of limitations from discovery. Victims should act quickly.
The Trajectory
The legislative trend is unmistakably toward broader coverage, harsher penalties, and greater platform accountability. Creators and platforms should expect that within the next 12-18 months, the majority of US states will have some form of deepfake NCII legislation on the books. A federal standard — likely building on the TAKE IT DOWN Act framework — would create consistency but may preempt state laws with stronger protections.
We will update this tracker as new legislation is enacted. Creators who want alerts on legislation in their state can subscribe to our Policy Watch newsletter for real-time updates.