AI & Law

Is AI Voice Cloning Legal in 2026? A Plain-English Guide for Creators

📅 April 26, 2026 · 11 min read · By Rai

Short answer: yes, in most places, when you have permission. The interesting question is everything that follows.

Voice cloning sat in a legal grey zone for years. The technology moved faster than legislators, and much of what was technically possible was also practically un-prosecutable. That changed quickly in 2024 and 2025, and as of April 2026 the landscape looks completely different. Forty-six US states now have at least one law touching AI-generated voice. The federal Voice Cloning Protection Act is moving through committee. The EU AI Act becomes fully applicable on August 2, 2026. China updated its synthetic-media disclosure rules in early 2026.

I write this blog as the developer of a free voice-cloning tool, not a lawyer. What follows is the practical reading — the rules creators, podcasters, indie game devs and small studios actually need to keep in their head. If your project is high-risk (you're cloning a celebrity, you're a political campaign, you're advertising commercially) you should talk to an actual entertainment lawyer in your jurisdiction. This piece is to keep you out of trouble, not to replace counsel.

The simple rule that solves most cases

Cloning a voice you have permission to clone, for a purpose the person you cloned would agree to, is legal almost everywhere. Cloning a voice without consent — especially a public figure, especially for content that could be mistaken for them, especially where money changes hands — is increasingly illegal even in jurisdictions that don't have a specific voice-cloning statute, because older laws (right of publicity, defamation, fraud) already cover most of it.

Hold that one rule and you'll be fine for 95% of use cases.

Tennessee's ELVIS Act — the first "voice is property" law

Tennessee passed the ELVIS Act (Ensuring Likeness, Voice and Image Security Act) in March 2024. It was the first US state law to explicitly treat a person's voice as a protected likeness. As of 2026 it has been the model for similar bills in 14 other states.

What it actually says: producing or distributing an AI-generated audio recording that simulates a specific identifiable person's voice without their consent is a Class A misdemeanour, with civil penalties on top. Critically, it covers the generation too, not just commercial use — meaning you can be liable even if you didn't sell the resulting clip.

What it doesn't cover: news reporting, parody, satire, public-interest commentary, and (importantly) generating your own voice. The law is aimed at protecting Tennessee's enormous music industry from AI fakes of country and gospel artists. If you live anywhere with an active music scene — Nashville, Austin, LA, Atlanta — assume something similar applies or is on its way.

California AB853 — disclosure and watermarks

California took a different angle. AB853 (effective January 2026) doesn't ban cloning; it requires disclosure. Any synthetic media distributed in California that depicts a real person must be marked as AI-generated, and platforms hosting such content must surface that marking to viewers. The law also requires the original generation tool to embed a watermark or provenance signal.

Practically, this is the law most creators will brush up against. If you're publishing a podcast on Spotify with a cloned voice and your audience includes Californians, you need a clear disclosure. The good news is that "this episode contains AI-generated voice" in the show notes plus a brief verbal mention at the start usually satisfies it.

The federal Voice Cloning Protection Act

Introduced in 2024, advanced through committee in late 2025, and currently waiting on a floor vote. The headline provisions:

  • Voice cloning a person without their explicit, informed, written consent for distribution becomes a federal civil violation.
  • Damages cap at $50,000 per unauthorised work, or actual damages if higher.
  • Carve-outs for news, satire, education and personal use (your own voice, your own family with permission, internal demos).
  • Section 230 protection for platforms is preserved if they remove infringing content within 48 hours of notice.

It's not law yet, but it's worth knowing about because once it passes (most observers expect 2026 or 2027) it harmonises the patchwork. Right now if your podcast crosses state lines, you have to think about Tennessee, California, New York, Florida and Illinois separately.

The EU AI Act

Fully applicable August 2, 2026. Voice cloning falls under the "deepfake" disclosure obligations: any AI-generated content that appears authentic must be labelled as such, in a way that's perceptible to the audience. Tools that generate synthetic media — including the open-source ones — are expected to embed machine-readable provenance markers.

The Act is risk-tiered. A creator making a satire podcast is in the lowest tier and basically just needs to disclose. A vendor selling voice cloning to political campaigns or banks is in the highest tier and has substantial compliance obligations.

Important point for non-EU creators: the Act applies to anything distributed into the EU, not just produced there. If your audience includes Europeans, the disclosure rule applies to you.

What about cloning yourself?

Universally legal. Your voice is yours. Use a free tool like RBS Voice Cloner V2, record a 30-second sample of yourself, generate as much as you want. The disclosure rules above (California, EU) still apply if the resulting audio is presented as not AI-generated — but if you say "this is me, with my voice cloned by AI", you're fine.

This is by far the most common legitimate use case. Audiobook authors who want their own voice on their book without recording every word. Bloggers who want articles read aloud in their voice. Language learners practising. Accessibility — turning long emails into audio you can listen to while doing the dishes. None of it is legally fraught.

What about cloning historical figures or public-domain voices?

Murky, and getting murkier. The traditional rule was that personality rights expire on death (with a runway — California gives 70 years, Tennessee gives 50, Indiana gives 100). So cloning Mark Twain's voice for an audiobook would be fine; cloning Kurt Cobain's voice for the same purpose would not.

The 2024 wave of state laws muddied this. Tennessee's ELVIS Act, for example, explicitly extends posthumous protection. As of April 2026 several other states are considering doing the same.

Practical advice: if the person died before 1955, you're almost certainly fine. If they died after 1980, get rights or pick someone else. Anything in between, talk to a lawyer.
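The runway arithmetic above is mechanical enough to sketch in code. This is a rough illustration only, using the three state figures quoted earlier; real statutes attach many conditions this ignores, and it is emphatically not legal advice:

```python
# Sketch: has a posthumous right-of-publicity runway expired?
# Runway figures are the ones quoted above (California 70 years,
# Tennessee 50, Indiana 100). Everything else is simplified away.

RUNWAY_YEARS = {"california": 70, "tennessee": 50, "indiana": 100}

def rights_expired(death_year: int, state: str, current_year: int = 2026) -> bool:
    """True if the quoted runway for `state` has fully elapsed."""
    runway = RUNWAY_YEARS[state.lower()]
    return current_year > death_year + runway

# Mark Twain died in 1910: 1910 + 70 = 1980, long past in California.
# Kurt Cobain died in 1994: 1994 + 70 = 2064, still protected.
```

This is also why the 1955/1980 rule of thumb works: 1955 + 70 barely clears 2026 even under the longest common runway short of Indiana's, while anything after 1980 is protected under all three.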

What about parody and satire?

Constitutional protection in the US, less clear in Europe. The classic test is whether a reasonable person would understand the work as commentary, not as an authentic statement by the subject. SNL doing a cloned-voice impression of a politician for a sketch is firmly satire. A YouTube channel posting "leaked audio" of the same politician, even with a tiny disclaimer, is firmly not.

The line is intent and presentation. Genuine satire announces itself; deception depends on not being noticed. If you have to add a disclaimer to make it legal, you're probably already in trouble.

The voice-clone scam problem

None of this matters to a scammer who uses a free tool to clone someone's voice and call their grandmother asking for emergency money. ScamWatch HQ reported in early 2026 that approximately one in ten Americans has now received a voice-clone scam call. The technology is now in everyone's hands, and a small percentage of "everyone" is malicious.

What this means for you as a creator: assume your audience knows AI voice cloning exists, assume they're alert to fakes, and disclose. Trust is the asset.

What this means for normal people: agree on a verbal "safe word" with your family. If anyone ever calls claiming to be you in distress, they should be able to say the safe word. Banks have started doing this for high-value transfers — it's worth doing in your personal life too.

Quick checklist before you publish

  • Are you cloning your own voice, or someone who has explicitly agreed in writing? If yes to either, you're almost certainly fine.
  • If you're using a public-domain or historical voice, is the person dead long enough that posthumous rights have expired in every jurisdiction your audience lives in?
  • Will your content reach California, the EU, or any of the 14 ELVIS-style states? Add a clear AI-disclosure line.
  • Is the content satire? Make that obvious in the framing, not in a fine-print disclaimer.
  • Are you using the cloned voice in any context where a listener might mistake it for real (impersonation, fake news, fundraising)? Don't.
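The checklist above can be sketched as a tiny self-audit function. The type and field names are mine, chosen to mirror the bullets, and purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Project:
    # Hypothetical fields mirroring the checklist above.
    own_voice_or_written_consent: bool
    could_be_mistaken_for_real: bool   # impersonation, fake news, fundraising
    is_satire: bool
    satire_obvious_in_framing: bool
    has_ai_disclosure: bool            # for California, the EU, ELVIS-style states

def publish_check(p: Project) -> list[str]:
    """Return blockers; an empty list means 'probably fine' — still not legal advice."""
    issues = []
    if not p.own_voice_or_written_consent:
        issues.append("get written consent, or use your own voice")
    if p.could_be_mistaken_for_real:
        issues.append("don't publish content a listener could mistake for real")
    if p.is_satire and not p.satire_obvious_in_framing:
        issues.append("make the satire obvious in the framing, not in fine print")
    if not p.has_ai_disclosure:
        issues.append("add a clear AI-disclosure line")
    return issues
```

If the list comes back non-empty, fix the items before publishing rather than papering over them with a disclaimer.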

A note on what RBS Voice Cloner does and doesn't do

The tool I make doesn't enforce any of this — it's a model running on your machine; it doesn't know whose voice you're feeding it. That's a deliberate design choice (privacy, no telemetry, no internet calls). It's also a responsibility transfer: if you build something with the tool, you're the one accountable for what you make.

I'd ask one thing. Don't use it to deceive people. The legal landscape will keep tightening in 2026 and 2027, but the ethical line was always clear and it hasn't moved.

For background on the underlying tech, see the 5-minute voice cloning tutorial or the V2 launch post.

This article is general information, not legal advice. Laws change; mine is a snapshot from April 2026. If you're making content at any meaningful scale, consult a lawyer in the jurisdiction where you publish.