It took AI-generated porn to finally get Congress to agree on something.
Not climate change, not healthcare, not gun violence—deepfake smut. And now they’ve rushed out a law so broad and twitchy, it could end up nuking more memes than it stops creeps.
They’re calling it the TAKE IT DOWN Act, but let’s be real—it’s more like the “Take Down Everything Even Slightly Risky and Hope No One Sues Us” Act.
Signed into law by Donald Trump on May 19, 2025, this shiny new piece of legislation is being paraded around like a moral victory. A federal law cracking down on AI-generated revenge porn, deepfake nudes, and non-consensual sexual imagery? Sounds like a win, right? Something both sides of the aisle can actually high-five over without bursting into flames. And for victims—especially women who’ve had their faces stitched onto explicit content without consent—it absolutely is a step toward justice. That part shouldn’t be controversial.
But here’s the thing: laws written in panic rarely age well, and this one feels like it was stitched together by a Congressional committee that just discovered what AI is two weeks ago and is still convinced “deepfake” is a new porn category on Pornhub.
Meanwhile, just to remind you we live in America: as of May 20th, there have already been seven school shootings this year resulting in injuries or deaths. That’s seven incidents in five months where kids were actually shot in classrooms or on school buses. In 2024 there were 39 of these. In 2023? Thirty-eight. And yet somehow, that still doesn’t warrant a legislative consensus. But slap Melania Trump’s “Be Best” branding on an AI porn panic, and suddenly we’ve got federal criminal penalties on the books faster than you can say “content moderation.”
Let’s walk through the mess. The law makes it a federal crime to knowingly post non-consensual intimate imagery—real or AI-generated. The penalties?
- Up to 2 years in prison and/or fines if the victim is an adult.
- Up to 3 years if the victim is a minor.
Also, if a platform doesn’t remove reported content within 48 hours of a victim’s complaint, it could face FTC enforcement, civil liability, and a PR nightmare. And let’s be honest: platforms don’t do nuance when the clock is ticking. They do mass deletions.
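To see why, run the incentive math from the platform’s side for a second. A minimal sketch, in Python, with every number invented for illustration (the statute specifies none of this): once the expected cost of blowing the 48-hour deadline dwarfs the cost of wrongly deleting something, “remove” wins the comparison basically every time.

```python
from dataclasses import dataclass

# All figures below are hypothetical; the statute specifies none of them.
# The point is the shape of the incentive, not the exact numbers.

@dataclass
class Complaint:
    content_id: str
    hours_until_deadline: float     # time left on the 48-hour clock
    prob_actually_violating: float  # moderator's honest guess, 0.0 to 1.0

AVG_HUMAN_REVIEW_HOURS = 72.0       # assumed review backlog: longer than the clock
COST_OF_WRONGFUL_REMOVAL = 50.0     # one annoyed user, maybe a bad tweet thread
COST_OF_MISSED_DEADLINE = 50_000.0  # FTC enforcement, lawyers, PR fire

def handle(complaint: Complaint) -> str:
    """What a risk-averse platform does with a takedown complaint."""
    # If humans can't review it before the deadline, compare expected costs.
    if complaint.hours_until_deadline < AVG_HUMAN_REVIEW_HOURS:
        cost_of_keeping = complaint.prob_actually_violating * COST_OF_MISSED_DEADLINE
        cost_of_removing = (1 - complaint.prob_actually_violating) * COST_OF_WRONGFUL_REMOVAL
        # Even at a 1% chance the complaint is legit, removal wins:
        # 0.01 * 50_000 = 500 vs. 0.99 * 50 = 49.5
        return "remove" if cost_of_keeping > cost_of_removing else "keep"
    return "queue_for_human_review"

# A parody video someone reported out of spite: 1% chance it actually qualifies.
print(handle(Complaint("parody_psa", hours_until_deadline=40, prob_actually_violating=0.01)))
# -> remove
```

That inequality is the whole ballgame: as long as the penalty for keeping dwarfs the cost of deleting, nuance is economically irrational, and no general counsel on earth is going to argue for it.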

Now, don’t get it twisted: nobody with a shred of humanity is defending the sick freaks behind deepfake porn sites. But this law doesn’t just go after the creeps—it opens a legal Pandora’s box that might swallow up way more than just the predators.
Because let’s say you’re a filmmaker. A satirist. A TikTok comedian. Hell, a bored kid with access to ElevenLabs and a good Trump impression. You throw together a fake PSA with Melania AI-rambling about “deep morals” while Kanye and Biden do a duet in the background. It’s obvious parody. It’s meant to be dumb. And yeah, it uses AI. Now, the criminal provisions are aimed at intimate imagery, not dumb jokes. But the takedown pipeline doesn’t come with a judge attached; it comes with a deadline. So you better hope your video doesn’t piss someone off enough to file a complaint, because the platform gets 48 hours to decide whether your joke is protected speech or a federal liability. Spoiler: they’re probably just going to delete it and move on.
That’s the real problem here. Not the intention of the law, but its inevitable overreach. The second legal pressure is on the table, platforms stop playing defense and go full nuke mode. The goal is risk avoidance, not fairness. That funny-but-gross AI sketch? Gone. A deepfake art film about public identity? Gone. Satirical commentary using real faces to criticize political hypocrisy? Bye-bye.
The law’s wording is vague enough to be terrifying. “Non-consensual” is a slippery term in digital art. And when you pair it with AI, which is already murky legal territory, suddenly a huge chunk of internet culture gets caught in the blast radius. Not to mention that platforms are going to lean on shitty automated moderation tools that already suck at nuance. Think of YouTube’s demonetization bot, but now it’s got the power of federal law behind it.
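Worth making that concrete, because the automation angle is where the blast radius comes from. Another invented sketch (no real classifier or platform policy implied): the filter doesn’t get smarter about nuance when federal liability shows up, it just gets a lower trigger threshold.

```python
# Hypothetical scores from a hypothetical NCII classifier; everything invented.
QUEUE = [
    (0.97, "actual deepfake revenge porn"),
    (0.55, "deepfake art film about public identity"),
    (0.40, "satirical sketch using a politician's face"),
    (0.25, "AI parody PSA, fully clothed, obviously a joke"),
]

ORDINARY_THRESHOLD = 0.90   # pre-law: remove only near-certain violations
LIABILITY_THRESHOLD = 0.20  # post-law: remove anything the bot side-eyes

def removed_at(threshold: float) -> list[str]:
    """Everything an automated filter takes down at a given trigger threshold."""
    return [desc for score, desc in QUEUE if score >= threshold]

print(removed_at(ORDINARY_THRESHOLD))   # just the actual revenge porn
print(removed_at(LIABILITY_THRESHOLD))  # all four, joke included
```

Same bot, same lousy judgment; the only thing the law changed is how little confidence it needs before it pulls the trigger.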
And don’t even get me started on the slippery slope. Today it’s porn. Tomorrow it’s parody. Then it’s criticism. And eventually, we end up in a place where anything synthetic, strange, or offensive gets pulled offline before anyone can say “fair use.”
The irony? This is happening under the same political banner that’s been screaming about censorship, free speech, and “cancel culture” for the better part of a decade. But the second AI art started targeting them—suddenly, regulation is patriotic. Suddenly it’s about morality. Protecting children. Restoring decency. And sure, there’s truth to that. But don’t act like this isn’t also about controlling the conversation.
Because make no mistake: the internet is about to get a whole lot dumber if laws like this become the template. Not safer. Not more ethical. Just… more boring. More sanitized. Less human.
Yes, deepfake porn is a crisis. But laws like this don’t just kill the cancer—they take out a lung and your sense of humor while they’re at it. If we don’t fight for nuance now, we’ll wake up in five years wondering why no one’s making weird, fearless shit anymore. Why every AI video looks like it came out of a marketing agency focus group. Why satire feels toothless. Why the internet feels dead.
It’s not just about protecting people. It’s also about protecting the messy, unpredictable, chaotic edge of creativity that makes culture worth a damn in the first place.
