Avoiding Proxy Discrimination in LLM-Powered Decision Systems
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

5 Comments

  1. Shivani Vaidya
    March 25, 2026 AT 02:49 AM

    Proxy discrimination is one of those invisible harms that only becomes visible when someone gets crushed by it. The example with the women’s shelter volunteer is chilling, not because it’s malicious, but because it’s so logically consistent. The system didn’t need to know Maria was a woman; it just needed to know what patterns her words triggered. This isn’t a bug. It’s a feature of how LLMs optimize for correlation, not ethics.

    We need to stop treating AI fairness as a technical problem. It’s a moral one. If we can’t explain why a decision was made, we shouldn’t make it. Period.

    And yes, removing protected attributes is like locking the front door while leaving the window wide open. The model doesn’t care about our intentions. It only cares about what it can infer.
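    The inference point above can be made concrete with a toy simulation. This is a minimal sketch with invented numbers (the group sizes, proxy rates, and feature are all hypothetical): a decision rule that never sees the protected attribute can still recover it from a correlated proxy feature.

    ```python
    import random

    random.seed(0)

    # Hypothetical synthetic data: the protected attribute `group` is
    # never shown to the model, but a proxy feature (say, a line on a
    # resume) correlates strongly with it. Rates below are made up.
    rows = []
    for _ in range(1000):
        group = random.random() < 0.5              # hidden protected attribute
        # the proxy fires for 80% of one group and 10% of the other
        proxy = random.random() < (0.8 if group else 0.1)
        rows.append((group, proxy))

    # A "blind" rule that only sees the proxy still reconstructs the
    # protected attribute most of the time: guess group = proxy.
    accuracy = sum(p == g for g, p in rows) / len(rows)
    print(f"protected attribute recovered with {accuracy:.0%} accuracy")
    ```

    With these made-up rates, the blind rule recovers the hidden attribute far better than chance, which is the window-left-open problem in one loop.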

  2. Rubina Jadhav
    March 25, 2026 AT 04:26 AM

    This is real. I saw a friend get denied a small business loan because her resume said she worked at a nonprofit. The system flagged it as ‘unstable income.’ She was a teacher. No one told the system that nonprofit jobs aren’t unstable.

  3. sumraa hussain
    March 25, 2026 AT 04:59 AM

    Okay, so let me get this straight: AI is basically reading between the lines of our lives like it’s some kind of psychic detective, and then making life-or-death calls based on… punctuation? University names? Volunteering at a women’s shelter?

    Bro. We built a machine that’s more invasive than your ex’s therapist. And we’re surprised it’s biased?

    I’m not mad. I’m just… deeply disappointed. Like, we could’ve built a system that helps people. Instead, we gave it a PhD in human prejudice and called it ‘innovation.’

  4. Raji viji
    March 25, 2026 AT 12:29 PM

    Oh wow, another ‘AI is racist’ thinkpiece. Let me guess: you also think cats are secretly plotting world domination? This whole proxy discrimination thing is just statisticians crying because their regression models got too good.

    Here’s the truth: if you’re getting denied a loan because you used too many exclamation points or went to a ‘gender-skewed’ university, maybe you’re just not a good candidate. The model isn’t racist; it’s *realistic*. The data doesn’t lie. You do.

    And don’t even get me started on ‘counterfactual fairness.’ That’s just virtue signaling wrapped in math. If you want fairness, stop applying for loans in the first place. Or better yet, get a job that doesn’t require AI approval.

  5. Rajashree Iyer
    March 26, 2026 AT 12:34 PM

    Think of proxy discrimination as the ghost in the machine’s soul. It’s not the data that’s evil; it’s the silence between the numbers. The unspoken histories. The redlined streets, the gendered universities, the forgotten labor of women who cleaned homes while men built empires.

    The LLM didn’t learn bias. It remembered it. It inherited it. Like a child raised in a house where love was conditional, it learned to read the air-not the words.

    We don’t need more audits. We need a reckoning. Not with code, but with the world that coded us.

    And if you think fairness is a technical problem? You’re still sleeping. The revolution won’t be algorithmic. It will be existential.
