There’s no question that the NO FAKES Act is trying to address a real problem. AI has advanced to the point where voices and faces can be cloned with uncanny accuracy, and the consequences aren’t theoretical anymore. We’ve already seen deepfakes damage reputations, mislead the public, and pass off entirely fabricated audio and video as the real thing. Regulation is overdue. The NO FAKES Act is one of the first major efforts to tackle this head-on.
Introduced by Senators Chris Coons and Marsha Blackburn, the bill would make it illegal to create or distribute AI-generated replicas of someone’s voice or likeness without their permission. It’s a bold attempt to establish legal protections where currently there aren’t many. Backing it are some of the biggest names in entertainment: SAG-AFTRA, the RIAA, the Motion Picture Association, and now YouTube, which publicly endorsed the legislation and praised its balance of personal rights and platform responsibility.
But that’s where the conversation gets complicated.
YouTube, for all its talk about creator protection, doesn’t have the cleanest track record. For years, its Content ID system has favored big rights holders while leaving smaller creators in the dust. And in this new AI era, that same dynamic is playing out again. Recently, major studios started claiming ad revenue from AI-generated fake movie trailers on YouTube. These weren’t leaks or stolen material; they were speculative edits and fan-made mashups, original videos assembled with AI tools and stock footage. Instead of removing the videos, the studios simply claimed the money. YouTube went along with it.
Legally, it’s allowed. Ethically, it’s murky. And creatively, it raises some serious questions. If a fan creates a fake trailer using AI to explore what a sequel to an old film might look like, is that theft? Or is it a digital collage, an idea brought to life with new tools? That’s where this conversation needs more nuance.

Because while the NO FAKES Act is trying to protect people from abuse, there’s a risk that it ends up being another tool of control, one that, like the DMCA before it, could be used to suppress creativity under the banner of enforcement. The DMCA’s safe harbor provisions gave platforms like YouTube protection from liability, provided they removed infringing content when notified. The result has been years of copyright claims, many of them questionable, where creators often back down, not because they’re wrong but because the fight isn’t worth it.
The NO FAKES Act proposes a similar system for AI-generated likenesses: notice and takedown, with platforms getting immunity if they remove the offending material quickly. In theory, that sounds fine. In practice, it could lead to over-policing, especially because the bill encourages platforms to prevent re-uploads, which would likely require constant automated scanning and filtering. If this turns into a “notice and staydown” system, we could see legitimate creative work disappear without due process.
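To make that over-policing concern concrete, here is a deliberately naive sketch of what an automated “staydown” filter can look like. Everything in it is hypothetical (the StaydownFilter class, the chunk size, the 30% threshold), and real systems such as Content ID rely on perceptual fingerprints rather than exact hashes. The structural problem it illustrates is the same, though: once a work is on the blocklist, a numeric threshold, not a human weighing fair use, decides whether anything resembling it ever goes live again.

```python
import hashlib

CHUNK_SIZE = 4096          # bytes per fingerprint chunk (illustrative value)
MATCH_THRESHOLD = 0.30     # flag uploads sharing >= 30% of chunks (illustrative value)

def fingerprint(media: bytes) -> set[str]:
    """Hash fixed-size chunks of a media file into a set of fingerprints."""
    return {
        hashlib.sha256(media[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(media), CHUNK_SIZE)
    }

class StaydownFilter:
    """Hypothetical filter: remembers taken-down material and blocks look-alikes."""

    def __init__(self) -> None:
        self.blocked: set[str] = set()

    def register_takedown(self, media: bytes) -> None:
        # Every takedown notice permanently adds fingerprints to the blocklist.
        self.blocked |= fingerprint(media)

    def should_block(self, media: bytes) -> bool:
        chunks = fingerprint(media)
        if not chunks:
            return False
        overlap = len(chunks & self.blocked) / len(chunks)
        # A low threshold catches straight re-uploads, but it also sweeps up
        # remixes, parodies, and commentary that reuse only part of the material.
        return overlap >= MATCH_THRESHOLD
```

In this sketch, a parody that reuses less than a third of a claimed clip sails through, while one that quotes a bit more is blocked automatically, and nothing in the pipeline ever asks whether the reuse was transformative. That is the gap the bill’s critics are pointing at.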
And here’s the kicker: your likeness, your face, your voice? You don’t actually own them the way you own a copyright. There’s no federal copyright protection for being you. What you have is the “right of publicity,” a legal concept that lets you control how your identity is used commercially. But it varies by state and doesn’t always carry the same protections that copyrighted material does. The NO FAKES Act tries to fix that by creating a consistent federal standard. That sounds like progress, but it also opens the door to aggressive claims, especially if platforms or public figures start treating every impersonation or parody as a violation.
Imagine a satirical video that uses an AI-generated voice of a celebrity to make a point about fame. Or a documentary that re-creates missing footage using AI narration to fill in the gaps. Is that illegal now? Or still protected by fair use? The bill doesn’t make that clear. And the fear is that in the absence of clarity, platforms will play it safe and side with whoever has the most legal firepower.
During the 2023 SAG-AFTRA strike, AI was a lightning rod. Union leaders warned that studios were scanning actors to build digital replacements, and that those scans could be reused forever without compensation. The public response was strong, and rightfully so. But some of those claims were based on misunderstandings or, at the very least, stretched interpretations of standard VFX practices. No retractions were issued. And not long after the strike ended, SAG-AFTRA signed agreements with AI voice companies that let performers license their voices for synthetic use. That’s not necessarily a contradiction, but it does show how quickly the conversation can shift once the deals are on the table.
This is why the NO FAKES Act needs scrutiny. We do need rules to stop abuse. But rules without nuance can easily become weapons. We’ve seen how the DMCA, intended to protect copyright, has been used to silence criticism, stifle fair use, and punish creativity. If the NO FAKES Act follows that same path, we’ll be back here in a few years talking about the unintended consequences.
Regulation is important, especially now. But we can’t lose sight of what makes creative expression valuable in the first place. AI might be the new frontier, but the same old questions apply: Who controls the tools? Who benefits from the rules? And who gets to decide what’s allowed?
The answers aren’t simple. But the debate is necessary. And we owe it to creators, artists, and everyday people to make sure the future of media doesn’t become just another fight for control.
