Content Moderation Laws and Generative AI: Platform Duties and Safe Harbors
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

1 Comment

  1. Dmitriy Fedoseff
    February 28, 2026 at 08:36 AM

    Let’s be real - we’re not just dealing with deepfakes anymore. We’re dealing with a world where truth is optional and perception is currency. The EU and Canada aren’t overreaching - they’re recognizing that AI doesn’t care about consent, dignity, or democracy. If a platform lets a fake video of a politician inciting violence slide because “it’s just art,” then it’s not a platform - it’s an accomplice. The watermarking systems? A start. But if we’re still relying on users to report abuse, we’ve already lost.

    And let’s not pretend Section 230 is sacred. It was written when the biggest threat was a guy spamming forums with ‘Lose weight fast!’ ads. Now it’s a child’s face on a pornographic body, generated in seconds. That’s not user content. That’s a manufactured crime. Platforms have to be held accountable - not because they’re evil, but because they’re powerful enough to stop it and choose not to.

    Transparency reports? Good. But they’re useless if no one audits them. Who’s checking whether the AI that flagged a Muslim woman’s hijab as “suspicious” was trained on data labeled by 90% white, male, American moderators? Nobody, right now. We need independent oversight, not corporate PR.

    And yes - I’m angry. Because while tech bros debate ‘free speech,’ real people are being doxxed, threatened, and traumatized by AI that costs $0.02 to generate. This isn’t philosophy. It’s survival.
