
Deepfakes and Copyright: Protecting Your Voice and Likeness

Ecopyright Editorial · May 13, 2026 · 7 min read · 1,690 words

A voice actor named Hisako recorded twenty hours of narration for an audiobook in 2023. Eighteen months later, she discovered her voice being used in an entirely different audiobook she’d never recorded. A producer had taken samples of her work, trained an AI voice clone, and used the synthetic voice to “narrate” public-domain texts as low-cost audiobooks.

Her actual recordings weren’t being reused. The new audiobook didn’t contain her voice in any traditional sense. But it contained a voice indistinguishable from hers, performing material she’d never agreed to perform.

Was this copyright infringement? Yes and no. The legal analysis is more complicated than for traditional copying. Here’s how it actually breaks down.

Which laws apply

Deepfakes and AI voice cloning sit at the intersection of multiple legal regimes:

Copyright. Protects the underlying recordings used to train the model, and the model’s outputs in some contexts.

Right of publicity / Personality rights. Protects the commercial use of a person’s likeness and voice. A matter of state law in the US; varies by jurisdiction internationally.

Trademark. Sometimes applies if a voice or appearance has become so distinctive that it functions as a brand identifier.

Privacy law. Sometimes applies depending on the context and content.

Specific deepfake legislation. Newer laws targeting non-consensual intimate imagery and electoral deepfakes.

Working through these regimes requires understanding which one (or combination) applies to your specific situation.

What copyright covers

For deepfakes and AI voice clones, copyright addresses two specific things:

1. The training data

If the AI model was trained on your copyrighted recordings, the training itself may be infringement. The training question for AI is being actively litigated in 2026.

For voice actors and audio creators, this is the relevant copyright question. Your original recordings have copyright. Use of those recordings to train a voice cloning model without authorization may violate that copyright.

For the broader AI training question, see our AI art piece. Many of the same legal issues apply.

2. The output recordings

The synthetic voice generated by the AI is a more complex question. Some considerations:

  • If the synthetic voice exactly recreates parts of your original recordings, it’s clearly a derivative work
  • If the synthetic voice generates new content that sounds like you but doesn’t copy specific recordings, copyright may not apply directly
  • The “voice itself” isn’t typically copyrightable (you can’t copyright the sound of a voice)
  • The specific output recordings have their own copyright (typically owned by whoever created them)

This is where copyright reaches its limits. The new synthetic recordings exist as their own copyrighted works, owned by whoever produced them. Your copyright in your original recordings doesn’t automatically extend to outputs that don’t directly copy them.

What rights of publicity cover

Where copyright doesn’t reach, rights of publicity often do. The right of publicity protects commercial use of a person’s identifying characteristics:

  • Voice (your distinctive sound)
  • Likeness (your face and appearance)
  • Name (especially celebrity names)
  • Signature (sometimes)
  • Distinctive personal characteristics

When someone creates a deepfake of you and uses it commercially without authorization, you typically have a right-of-publicity claim regardless of whether copyright applies.

The complications:

State-by-state variation in the US. Each US state has its own right of publicity law. Some are strong (California, New York, Tennessee), others minimal. The applicable law usually depends on where the violation occurs or where you’re domiciled.

Posthumous rights. Some states (notably California) grant posthumous rights of publicity for many decades after death. Others extinguish the right at death. This matters for using deceased people’s likenesses.

Commercial vs non-commercial. Right of publicity typically applies to commercial use. Non-commercial uses (parody, criticism, news reporting) may have constitutional protection that overrides right of publicity.

International variation. Outside the US, “image rights” or “personality rights” exist in many countries (notably France, Germany, UK) with varying scope.

For voice actors and performers, right of publicity is often a stronger claim than copyright when deepfakes appear.

Recent deepfake legislation

Several laws and bills, passed or advancing through 2024-2026, specifically target deepfakes:

ELVIS Act (Tennessee, 2024). Specifically prohibits unauthorized AI cloning of voices, with enhanced penalties for commercial use. The most explicit anti-AI-voice-cloning law in the US.

No FAKES Act (federal, proposed). Bipartisan US bill that would create federal protection against unauthorized digital replicas. Status varies as of mid-2026.

EU AI Act provisions (2024-2025). Includes specific requirements for AI-generated content disclosure.

California AB 2602 (2024). Requires consent and explicit terms for AI digital replicas in entertainment contracts.

Various state bills. Multiple US states have passed or are considering deepfake-specific legislation, particularly around non-consensual intimate imagery and electoral deepfakes.

The legal landscape is shifting rapidly. What’s possible today may be more constrained next year as legislation matures.

What to do if you’ve been deepfaked

For someone whose voice or likeness appears in unauthorized AI-generated content:

Step 1: Document everything

  • Screenshots of the deepfake content
  • URLs and platform information
  • Date of discovery
  • Description of how you became aware
  • Comparison to your authentic work

Step 2: Identify your legal angles

Based on the specific situation, ask:

  • Is your original copyrighted work being used (training or sampling)?
  • Is the use commercial?
  • Does your jurisdiction have right of publicity protection?
  • Is there a specific deepfake law applicable (electoral, intimate imagery, etc.)?
  • Is the use deceptive (passing the content off as authentically you)?

Different angles support different responses. Often multiple apply.

Step 3: Platform takedowns

Major platforms have policies against unauthorized deepfakes:

  • YouTube has specific deepfake policies and takedown processes
  • Meta platforms have AI-generated content policies
  • TikTok has synthetic media labeling requirements
  • Twitter/X has manipulated media policies

File reports through each platform’s specific tools, identifying the unauthorized use and providing your evidence of authentic identity.

For the platform-specific takedown approach, see our DMCA guide. Deepfake takedowns often combine copyright and policy-violation claims.

Step 4: Formal legal action

For substantial commercial deepfake use, the options include:

  • Cease and desist letter to the deepfake creator (if identifiable)
  • Action under right of publicity laws
  • Copyright infringement claims if training data is implicated
  • Specific deepfake legislation if applicable in your jurisdiction

This is more complex than typical copyright enforcement. Consult with an attorney who handles both copyright and right of publicity matters.

Step 5: Industry-specific action

For voice actors and performers, industry organizations have been working on collective enforcement:

  • SAG-AFTRA (US film/TV/voice union) has specific AI provisions in recent contracts
  • Voice actor organizations have developed model rider language for contracts
  • The Independent Audiobook Publishers Association has guidance for industry standards

Joining or coordinating with industry organizations can amplify individual enforcement.

Pre-deepfake protection

The hard truth: once your voice or likeness is in AI training data, prevention is largely impossible. Protection is more about positioning for response than preventing use.

What you can do proactively:

1. Document your authentic work

Register your original recordings, photographs, and video work with appropriate copyright services. This establishes the baseline of authentic content.

For the audio-specific registration approach, see our audio guide. The same approach applies to actors, models, and others with image/voice exposure.

2. Contractual protections

For commercial recordings and performances:

  • Include explicit AI/digital replica clauses in contracts
  • Specify that voice/likeness use is limited to the specific work
  • Address training data use explicitly (prohibit, permit with conditions, etc.)
  • Include compensation for AI use if granted

Standard performer contracts now increasingly address these issues. Make sure yours do.

3. Watermarking and provenance

Some tools embed invisible watermarks in audio and video that survive AI processing. These can help establish source attribution even after content has been processed.

Content provenance systems (Project Origin, C2PA standards) are developing to track content authenticity through processing chains. Adoption is uneven but growing.
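The core idea behind provenance chains can be sketched in a few lines. This is a simplified illustration, not the actual C2PA manifest format (which uses signed claims and a defined binary structure): each processing step records the content’s hash plus the hash of the previous step, so any break in the chain is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone


def add_provenance_step(manifest: list, data: bytes, action: str) -> list:
    """Append a provenance step that binds the current content state
    to the previous step's hash, forming a tamper-evident chain."""
    prev = manifest[-1]["step_hash"] if manifest else None
    step = {
        "action": action,
        "content_sha256": hashlib.sha256(data).hexdigest(),
        "previous_step": prev,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the step itself so the next step can chain to it.
    step["step_hash"] = hashlib.sha256(
        json.dumps(step, sort_keys=True).encode()
    ).hexdigest()
    return manifest + [step]
```

Real provenance standards add cryptographic signatures on top of this chaining, so a manifest can be attributed to a specific signer rather than just shown to be internally consistent.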

4. Public position

Some performers publicly state their position on AI cloning, making it explicit that they haven’t authorized any AI use. This doesn’t legally prevent unauthorized cloning but establishes the record for later enforcement.

5. Monitor and respond

Run periodic searches for your name, voice, or likeness in AI-generated content; detection tools for synthetic media are improving. Respond quickly when unauthorized use appears.

Hisako’s resolution

Hisako, from the opening, eventually got the deepfake audiobook removed through a combination of:

  • DMCA notices targeting the training data use
  • Right of publicity claims under California law (where the producer was based)
  • Platform reports identifying the synthetic content
  • An eventual cease and desist that resolved without litigation

Total elapsed time: 11 weeks. Total cost: about $2,500 in legal fees. The producer settled rather than fight.

Her experience illustrates the realistic path: a combination of legal angles, persistent enforcement, and willingness to engage formally when platform reports aren’t enough. None of the responses individually solved the problem; together they did.

What’s coming

Three things to watch in 2026 and beyond:

Federal legislation. Multiple US bills addressing unauthorized AI replicas are pending. Federal action would create more consistent protection than the current state-by-state patchwork.

Industry standards. Performer unions, voice actor organizations, and platforms are developing standards for consent, attribution, and enforcement.

Detection technology. AI-detection tools for synthetic media are improving. The arms race continues but detection is becoming feasible at scale.

For working performers and creators with voice/image exposure:

  • Stay informed about legal changes
  • Update contracts to address AI/replica use
  • Document authentic work consistently
  • Build relationships with industry organizations that handle these issues

The legal infrastructure for deepfake protection is still being built. The fundamentals (copyright in original works, rights of publicity for likeness, contractual protections) provide foundation. The specific deepfake provisions are catching up.

For the related question of who owns AI-written content more broadly, see our companion piece.

The honest assessment

Deepfakes will continue to be a complicated legal area. The technology evolves faster than law. The mismatch between the technical capability (generate convincing synthetic media easily) and the legal framework (designed for traditional copying) creates persistent gaps.

What you can do:

  1. Register your authentic work consistently. Provides the foundation for any enforcement.

  2. Use rights-of-publicity protections. When copyright doesn’t reach, this often does.

  3. Update contracts. Explicit AI/replica provisions are now standard for serious work.

  4. Engage with platforms quickly. They have policies; using them is what makes them effective.

  5. Watch the legal landscape. What’s not possible today may be possible next year as legislation catches up.

The performers and creators who handle this well aren’t the ones who avoid AI exposure entirely (basically impossible). They’re the ones who systematically document, register, and respond when issues arise. The legal toolkit isn’t complete, but it’s enough to handle most cases that matter.

Ready to copyright your work?

5 free tokens on signup. $1 per certificate after that. No credit card needed to start.