Content Moderation Laws and Generative AI: Platform Duties and Safe Harbors
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

9 Comments

  1. Dmitriy Fedoseff
    February 28, 2026 at 8:36 AM

    Let’s be real - we’re not just dealing with deepfakes anymore. We’re dealing with a world where truth is optional and perception is currency. The EU and Canada aren’t overreaching - they’re recognizing that AI doesn’t care about consent, dignity, or democracy. If a platform lets a fake video of a politician inciting violence slide because ‘it’s just art,’ then they’re not a platform - they’re an accomplice. The watermarking systems? A start. But if we’re still relying on users to report abuse, we’ve already lost.

    And let’s not pretend Section 230 is sacred. It was written when the biggest threat was a guy spamming forums with ‘Lose weight fast!’ ads. Now it’s a child’s face on a pornographic body, generated in seconds. That’s not user content. That’s a manufactured crime. Platforms have to be held accountable - not because they’re evil, but because they’re powerful enough to stop it and choose not to.

    Transparency reports? Good. But they’re useless if no one audits them. Who’s checking if the AI that flagged a Muslim woman’s hijab as ‘suspicious’ was trained on data from 90% white, male, American moderators? Not enough. We need independent oversight, not corporate PR.

    And yes - I’m angry. Because while tech bros debate ‘free speech,’ real people are being doxxed, threatened, and traumatized by AI that costs $0.02 to generate. This isn’t philosophy. It’s survival.

  2. Meghan O'Connor
    March 1, 2026 at 5:25 PM

    Ugh. Another 3000-word essay on ‘AI moderation.’ Can we just agree that no one actually reads this stuff? I scrolled past the whole thing. All I got was ‘label it’ and ‘remove it’ and ‘oh but wait, what if it’s art?’

    Look. If I post a photo of my cat wearing sunglasses made with Midjourney? Cool. Label it. Done.

    If someone uses AI to make a fake video of my neighbor yelling racist slurs? Delete it. Sue them. End of story.

    Why does every article need 17 subheadings to say ‘don’t be a jerk’? Just say it. Simple. Done. I’m tired of being talked down to like I’m 12.

  3. Morgan ODonnell
    March 2, 2026 at 7:31 PM

    Honestly? I get both sides.

    On one hand, yeah - AI deepfakes are terrifying. I saw one last month of a guy I know pretending to confess to a crime he didn’t do. It looked 100% real. My stomach dropped.

    On the other hand, I’ve seen AI-generated art that’s beautiful. Poetry, music, even protest art that helped people feel seen. If we ban all of it because some people abuse it, we’re throwing out the baby with the bathwater.

    Maybe the answer isn’t more rules. Maybe it’s better education. Teach people how to spot fakes. Teach them to ask: ‘Who made this? Why? What’s the cost?’

    Platforms should help with that. Not just delete. Help us understand.

  4. Liam Hesmondhalgh
    March 4, 2026 at 12:42 AM

    Canada and the EU are acting like they’re running a socialist utopia. Meanwhile, real people in the real world - the ones who actually work - are being crushed under bureaucracy. Who cares if an AI-generated video of a politician says something dumb? It’s just words. Let people decide what’s true. That’s what freedom is.

    And don’t get me started on ‘watermarks.’ You think some guy in Manila is going to care if his AI-generated meme has a tiny invisible tag? Nah. They’re gonna repost it anyway. This is all performative. A distraction from real problems - like inflation, crime, and bad healthcare.

    Stop pretending tech regulation = safety. It’s just control dressed up as protection.

  5. Patrick Tiernan
    March 5, 2026 at 4:53 PM

    So like… we’re supposed to label every single AI thing now? What if I make a pic of me as a dragon? Do I gotta put ‘AI-generated dragon’ in the caption? Jesus. This is why I quit social media.

    Also Section 230 is fine. If you can’t handle a fake video of your ex crying, maybe don’t post it in the first place? I’m not babysitting your emotional life.

    Also who wrote this article? It’s like they got paid by the word. I counted 12 ‘therefores.’

  6. Patrick Bass
    March 5, 2026 at 10:52 PM

    Just wanted to say - the part about bias in AI moderation is critical. I’ve had posts flagged because I use non-standard punctuation - like em dashes - and the system thought it was ‘suspicious.’ It’s not about the content. It’s about the form. And that’s dangerous.

    Also, the C2PA standard? Good idea. But if only big platforms use it, then small creators get left behind. We need open-source, interoperable tools. Not corporate walled gardens pretending to be public goods.

  7. Tyler Springall
    March 7, 2026 at 7:21 PM

    Let’s cut the pretense: no one actually wants ‘balance.’ What they want is control. The EU wants to own your digital identity. The U.S. wants to outsource its moral panic to Silicon Valley. And China? They just want silence.

    The real issue isn’t AI. It’s that we’ve outsourced truth to algorithms written by engineers who’ve never met someone from a rural village in Ireland or Saskatchewan. We’re not moderating content - we’re moderating culture.

    And we’re doing it badly. Because if your AI flags a protest chant in Gaelic as ‘hate speech’ because it doesn’t recognize the cadence - that’s not safety. That’s colonialism with a neural network.

    Stop pretending this is about law. It’s about power. And the people who wrote this article? They’re part of the machine.

  8. Colby Havard
    March 8, 2026 at 8:52 AM

    It is, of course, necessary to recognize that Section 230 was never intended to be a perpetual shield against liability for algorithmically amplified content - particularly when the platform’s own models are generating, curating, or optimizing for virality. The jurisprudential evolution is not merely plausible - it is inevitable. Courts are beginning to distinguish between passive hosting and active curation - and AI-generated content, by its very nature, is curated by design. Therefore, the legal fiction that platforms are ‘neutral intermediaries’ is untenable in the context of generative AI systems that are trained, fine-tuned, and deployed by corporate entities with profit motives. This is not an overreach - it is a correction of a fundamental misalignment between 1996 statutory language and 2026 technological reality.

  9. Amy P
    March 9, 2026 at 11:10 AM

    Okay I just had to reply to that last comment because I’m literally shaking - THIS IS SO IMPORTANT. I’ve been researching this for months and you’re 100% right. It’s not about ‘free speech’ - it’s about accountability. I work with survivors of deepfake abuse. They don’t need more ‘transparency reports.’ They need justice. They need platforms to be legally liable when their AI systems allow a video of a 14-year-old to be turned into porn and go viral for 72 hours before anyone notices.

    And yes - Section 230 is dead. It’s not a question of ‘if’ - it’s ‘when.’ And the people who say ‘but innovation!’ - where were you when the first child was traumatized? Where were you when the first election was manipulated by a fake audio clip? We’re not asking for perfection. We’re asking for responsibility.

    And if you’re still arguing about ‘censorship’ - go read the stories from Nigeria, India, and Brazil. AI isn’t just making lies - it’s making violence. And we’re not going to sit quietly while tech companies hide behind 28-year-old laws.
