
White House AI Plan Targets Hollywood's State Protections

March 20, 2026

The White House, Public domain, via Wikimedia Commons


On March 20, 2026, the White House released a four-page document titled "A National Policy Framework for Artificial Intelligence: Legislative Recommendations." It tells Congress what the administration wants from AI law, and two of its positions land directly on issues Hollywood has been fighting over for years: whether training AI models on copyrighted material is legal, and who controls the rules protecting digital replicas of performers.

White House officials speaking at CES 2026 about AI and technology policy
Xuthoria, CC BY-SA 4.0, via Wikimedia Commons

What the Framework Says

The document lays out six policy priorities for Congress: child safety, community protection, intellectual property, free speech, AI innovation, and workforce development. On intellectual property, the administration takes a clear position: training AI on copyrighted material does not inherently violate copyright law. Rather than asking Congress to codify that stance into statute, the framework recommends leaving outstanding fair use questions to the courts, to be resolved through ongoing litigation.

On digital replicas, the document proposes a federal standard with consent requirements for using a person's voice or likeness in an AI-generated work. That standard would include explicit exceptions for parody, satire, news reporting, and other First Amendment-protected speech.

The Preemption Fight Starts Now

The section that most directly threatens what Hollywood's unions have built is the framework's call to preempt state AI laws. White House AI advisor David Sacks has said publicly that a "patchwork" of state regulations would be too burdensome for AI developers, and the document backs that position. It recommends Congress pass a single national AI standard that supersedes state-level rules. States would retain authority over child protection, consumer fraud, and traditional police powers. Everything touching AI development would move to federal jurisdiction.

California Governor speaking at a state policy press conference
CAL FIRE_Official, Public domain, via Wikimedia Commons

California is the state most directly in the crosshairs. Its AB 2602 and AB 1836, signed by Governor Gavin Newsom in September 2024 and enforceable since January 1, 2026, are the most comprehensive actor-likeness protections in the United States. AB 2602 requires explicit consent before a living performer's digital replica can be used in place of their actual services. AB 1836 extends similar rules to deceased personalities. If Congress passes the framework's proposed preemption language, both laws could be invalidated or stripped of enforcement authority.

For a full breakdown of what those California laws currently require, see our guide to California's digital replica law in 2026. The original legislation is covered in detail in California Makes Unauthorized AI Digital Replicas of Actors Illegal.

California is not alone. Tennessee's ELVIS Act, which protects musicians' voices and likenesses from AI replication without consent, and similar statutes in New York and Illinois, would all fall under the framework's preemption logic.

Copyright and Fair Use: Where the Courts Stand

The framework's decision to leave fair use to the courts rather than to Congress follows a precedent already set. In June 2025, a federal judge in San Francisco ruled that Anthropic's training on copyrighted books qualified as fair use, though the ruling attached conditions on how training data was obtained. That decision gave AI companies one favorable data point, but multiple cases remain active. Major suits brought by publishers, visual artists, and film studios against AI developers are still working through the courts.

Hollywood street sign and commercial buildings in Los Angeles
Chris Long from Los Angeles, CA, USA, CC BY 2.0, via Wikimedia Commons

The Motion Picture Association's copyright enforcement actions have been active throughout early 2026. In February, the MPA sent cease-and-desist letters to ByteDance over its Seedance 2.0 model after the tool generated unauthorized video of recognizable performers. ByteDance subsequently halted the planned global launch. The White House framework does not speak directly to that type of output case, but its position that training on copyrighted material is not inherently illegal complicates the industry's litigation strategy for the underlying training claims.

The EU is moving in the opposite direction. The European Parliament voted in March 2026 to require AI companies to disclose and fairly compensate rightsholders for copyrighted content used in AI training. That EU copyright vote puts the US and Europe on a direct collision course over AI training rules.

Documentary filmmakers face specific exposure under these shifts. A March 2026 IDA legal panel addressed how AI training cases are reshaping archival footage and historical content rights. Those findings are covered in our copyright law guide for documentary filmmakers.

What the Guilds Face

SAG-AFTRA's current strategy assumes that state laws and union contracts form a legal floor of protection. The union's February 2026 proposal to treat AI performer usage as a taxable budget event, with proceeds directed to its Pension & Health Plans, depends on that floor holding. A federal framework that removes or weakens state consent requirements would change the conditions under which the union negotiates every future production deal.

West side of the United States Capitol building in Washington D.C.
Martin Falbisoner, CC BY-SA 3.0, via Wikimedia Commons

The DGA faces a parallel problem. Christopher Nolan became DGA President in September 2025 and chairs the guild's AI and Theatrical Creative Rights Committees. With the DGA contract expiring June 30, 2026, Nolan has made AI protections a stated priority. His strategy assumed studios would negotiate under pressure from existing legal constraints. A federal framework that softens those constraints changes the starting conditions for every guild entering talks this year.

Congress Still Has to Act

The framework is a recommendation, not a law. The administration is asking Congress to translate these positions into legislation, and no timeline is guaranteed. The White House has pushed for action in 2026, citing competition with China's AI development as a reason to move quickly.

Until legislation passes, California's AB 2602 and AB 1836 remain enforceable. SAG-AFTRA's contracts stay in force. The Anthropic fair use precedent is one district court ruling, not settled law. Every active lawsuit continues on its current track.

What the framework does is signal where the executive branch stands and what it will support in Congress. For Hollywood's legal teams, union negotiators, and independent filmmakers using AI generation tools, that signal is worth reading closely.

Sources

White House | Associated Press | Los Angeles Times | Fortune | Fox News