Financial Services Rules for Generative AI: Model Risk Management and Fair Lending
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

7 Comments

  1. Tasha Hernandez
    March 24, 2026 AT 11:54 AM

    So let me get this straight - we’re spending millions to make AI behave like a tired loan officer who’s been staring at spreadsheets since 2003? And we call this innovation? The real scandal isn’t the AI hallucinating - it’s that we’re still using 1970s regulatory frameworks to police 2026 tech. They didn’t fix the system. They just gave it a fancy coat of paint and called it ‘compliance-grade.’

    Meanwhile, the guy in Des Moines got denied because his ZIP code had ‘high risk’ in a 2018 dataset. No one asked why. No one cared. Now we log everything. Great. So now we have 7 years of digital evidence that we’re all just automating discrimination. Bravo.

    And don’t get me started on the ‘human-in-the-loop’ charade. The human hasn’t touched a loan application since 2019. They just click ‘approve’ while scrolling TikTok. The AI does the work. The human does the paperwork. The bank does the PR. The customer? They get a form letter that says ‘insufficient data.’

    They say ‘guardrails.’ I say ‘gilded cage.’

    At least back in the day, the loan officer was a jerk to your face. Now? The AI is a jerk to your face… and then writes a 12-page audit trail about how it did it.

  2. Anuj Kumar
    March 24, 2026 AT 1:21 PM

    They say AI is biased but they never say who trained it. Who owns the data? Who wrote the rules? I bet it’s the same people who told us subprime loans were safe. Same banks. Same regulators. Same lies.

    They banned zip codes but what about phone prefixes? Or email domains? Or the time you applied? All of it points to the same thing. They don’t want to fix bias. They want to hide it better.

    And now they want logs for 7 years? That’s not compliance. That’s surveillance. They’re building a digital prison for customers and calling it ‘transparency.’

    One day they’ll admit it - AI didn’t make the system unfair. The system made AI unfair.

  3. Christina Morgan
    March 25, 2026 AT 03:26 AM

    I love how this post breaks down the real issues without the usual tech-bro fluff. So many people think AI is magic, but it’s just math with a fancy interface. And math doesn’t care - it just repeats what it’s seen.

    The VALID framework is actually brilliant. It’s not sexy, but it’s necessary. Validation, avoidance, limitation, transparency, documentation - if every company followed these five steps, we’d be in a much better place.

    Also, props to that credit union that caught the bias early. That’s the kind of humility we need more of. Not ‘we’re too big to fail’ - but ‘we’re small enough to fix.’

    And yes, the human review bottleneck is real. But if you’re not willing to slow down to do it right, you’re not ready to serve people. Period.

    AI won’t save banking. But careful, thoughtful, human-led AI? That might.

  4. Nathan Pena
    March 26, 2026 AT 12:50 AM

    The entire premise is fundamentally flawed. You cannot achieve determinism with generative models - that’s like demanding that a jazz musician play the same solo every time. It’s a category error. LLMs are probabilistic by design. To force them into deterministic boxes is not compliance - it’s intellectual surrender.

    The ‘compliance-grade AI’ movement is a regulatory arms race disguised as innovation. It’s not about safety. It’s about liability shielding. Every requirement - traceability, logging, human validation - exists not to protect consumers, but to protect executives from criminal liability.

    The $2.3 million cost? That’s the price of moral hazard. The real cost is innovation stagnation. Startups can’t afford this. Only megabanks with legal teams the size of small countries can play. This isn’t regulation. It’s market consolidation by bureaucratic fiat.

    And don’t get me started on the ‘human-in-the-loop.’ A compliance officer who spends 160 hours training to approve an email? That’s not oversight. That’s performance art.

    Regulators aren’t preventing harm. They’re preventing accountability - by creating a system where no one can be blamed because everyone is responsible.

  5. Mike Marciniak
    March 27, 2026 AT 12:29 AM

    They’re lying. They say they’re monitoring bias but they’re just monitoring for audits. The real bias is in who gets to define ‘fair.’ The same people who wrote the rules in 1977. The same people who still live in gated communities. The same people who never got denied a loan.

    Every log, every validation, every ‘compliance-grade’ system - it’s all smoke. They don’t care if the AI is fair. They care if it looks fair on paper.

    And that $12.7M fine? That’s the cost of getting caught. The ones who aren’t caught? They’re still making money.

    They’re not fixing AI. They’re just making sure it doesn’t get caught.

  6. Virender Kaul
    March 28, 2026 AT 08:10 AM

    It is imperative to note that the regulatory architecture underpinning AI deployment in financial services remains woefully inadequate despite the introduction of frameworks such as VALID. The requirement for seven-year retention of logs under SEC Rule 17a-4 is not merely burdensome - it is functionally obsolete in an era of exponential data growth.

    Furthermore, the insistence upon determinism in probabilistic systems represents a fundamental misunderstanding of machine learning theory. One cannot mandate predictability in systems designed for stochastic output without rendering them functionally inert.

    The human validation requirement, while legally defensible, introduces systemic latency that undermines competitive viability. The 22% increase in response times cited is not an anomaly - it is the inevitable consequence of misaligned incentives between regulatory compliance and customer service.

    It is therefore my professional opinion that the current paradigm is unsustainable. Either the regulatory framework must evolve to reflect computational reality - or financial institutions will migrate operations offshore to jurisdictions with proportionate oversight.

  7. Mbuyiselwa Cindi
    March 29, 2026 AT 10:12 AM

    I work in a small credit union in Cape Town and we just rolled out a basic AI tool for loan application triage - nothing fancy, just filters for income, employment, and debt-to-income ratio. We stripped out every possible identifier - no zip code, no name, no address - just numbers.

    We trained it on our own data, not some public model. And we made sure every decision had a human double-check. Took us 4 months. Cost us $15k. But now? Our approval time dropped from 7 days to 48 hours. And our denial rate for qualified applicants? Down 30%.

    It’s not perfect. But it’s better than before. And we didn’t need a $2M compliance team to do it.

    Don’t let them make you think this has to be complicated. Sometimes, the best AI is just a simple tool + good people.
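    For readers curious what a setup like this might look like, here is a minimal sketch of identifier-free triage with a mandatory human double-check. The field names and thresholds are illustrative assumptions, not the credit union’s actual rules:

    ```python
    # Minimal sketch: identifier-free loan triage with human sign-off on every path.
    # Field names and thresholds are hypothetical, chosen for illustration only.

    ALLOWED_FIELDS = {"monthly_income", "months_employed", "debt_to_income"}

    def triage(application: dict) -> str:
        """Order the review queue as 'fast_track', 'review', or 'decline_review'.

        The tool never approves or denies anything on its own; a person
        double-checks every application regardless of the bucket.
        """
        # Strip everything that is not an allowed numeric field (no name,
        # no address, no ZIP code, no phone or email) before scoring.
        data = {k: v for k, v in application.items() if k in ALLOWED_FIELDS}

        dti = data.get("debt_to_income", 1.0)
        income = data.get("monthly_income", 0)
        tenure = data.get("months_employed", 0)

        if dti <= 0.35 and income >= 3000 and tenure >= 12:
            return "fast_track"      # human still signs off, just sooner
        if dti >= 0.55:
            return "decline_review"  # flagged, but a person makes the call
        return "review"

    app = {
        "name": "J. Dlamini",   # stripped before scoring
        "zip_code": "8001",     # stripped before scoring
        "monthly_income": 4200,
        "months_employed": 30,
        "debt_to_income": 0.28,
    }
    print(triage(app))  # fast_track
    ```

    The point of the sketch is the filter at the top: bias can’t leak in through fields the model never sees, and the function only reorders the queue rather than replacing the human decision.
    
    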
