Operating Model Changes for Generative AI: Workflows, Processes, and Decision-Making
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

5 Comments

  1. Antonio Hunter
    February 8, 2026 at 2:29 PM

    It’s funny how often we talk about AI as if it’s this alien force that’s going to upend everything, when really it’s just a really, really good assistant that’s finally learned how to read between the lines. The real shift isn’t about replacing humans; it’s about removing the friction that’s been clogging up workflows for decades. I’ve seen teams where people spent half their week just chasing approvals, filling out forms, or explaining why a $120 expense was ‘necessary.’ Gen AI doesn’t care about your justification; it just checks the policy, cross-references past approvals, and moves on. The human part? That’s when someone finally has time to ask, ‘Wait, why are we even doing this?’ and then actually fix the root problem instead of just patching the symptom.
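    The triage pattern described here, where routine claims clear automatically and only exceptions reach a human, can be sketched in a few lines. Everything in this sketch is illustrative: the policy limits, the Expense fields, and the category names are assumptions, not any real company’s rules.

    ```python
    # Hypothetical sketch: auto-approve routine expenses against policy,
    # escalate anything ambiguous to a human. POLICY_LIMITS and the
    # Expense fields are illustrative assumptions.
    from dataclasses import dataclass

    POLICY_LIMITS = {"meals": 75.0, "travel": 500.0, "supplies": 120.0}

    @dataclass
    class Expense:
        category: str
        amount: float
        has_receipt: bool

    def triage(expense: Expense) -> str:
        """Return 'approve' or 'escalate' for human review."""
        limit = POLICY_LIMITS.get(expense.category)
        if limit is None:
            return "escalate"   # unknown category: a human decides
        if not expense.has_receipt:
            return "escalate"   # missing evidence: don't guess
        return "approve" if expense.amount <= limit else "escalate"

    print(triage(Expense("supplies", 120.0, True)))  # approve
    print(triage(Expense("travel", 900.0, True)))    # escalate
    ```

    The point of the design is the default: the system never rejects on its own, it only approves the unambiguous cases and hands everything else back to a person.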

    It’s not magic. It’s just better context. And the companies that succeed aren’t the ones with the fanciest models; they’re the ones who stopped treating AI like a magic wand and started treating it like a new hire who’s weirdly good at paperwork but needs someone to explain the company culture.

    I’ve worked in places where the ‘AI team’ was a siloed group of engineers who handed out ‘solutions’ like pamphlets. The ones that worked? The ones where the finance guy who’d been processing invoices for 18 years sat down with the developer and said, ‘This rule doesn’t make sense because of X, Y, and Z.’ That’s the real innovation: not the algorithm, but the conversation.

    And honestly? The biggest win isn’t cost savings. It’s morale. People stop feeling like cogs when they’re not drowning in repetitive tasks. They start doing the work that actually matters. That’s the quiet revolution here.

    But yeah, data quality still matters. If your CRM is a graveyard of half-filled fields, your AI is just a very polite liar. And that’s on leadership, not the tech.

    Also, don’t call it ‘automation 2.0.’ That phrase alone tells me you’re still stuck in the old paradigm. This isn’t automation. It’s augmentation with autonomy.

  2. Paritosh Bhagat
    February 10, 2026 at 9:36 AM

    Okay, but let’s be real: this whole post reads like a corporate brochure written by someone who’s never actually deployed AI in the real world. You talk about ‘adaptive process intelligence’ like it’s some holy grail, but I’ve seen these systems crash because a vendor changed their invoice format by one space. And then what? The AI ‘surfaces the issue’; great, now I have to manually fix it AND explain to my boss why the ‘smart system’ failed. And don’t get me started on ‘prompt engineering skills.’ If your manager needs a PhD in prompt design just to get a summary, you’ve already lost. Real work doesn’t come with instruction manuals.

    Also, ‘let AI own the whole process’? That’s cute. Until it approves a payment to a vendor in a sanctioned country because the PDF scan mangled the country name. Then suddenly you’re in a federal investigation. Who’s liable? The AI? The ‘compliance officer’ who didn’t even know the system was making decisions? No. It’s you. And me. And everyone who trusted a black box with our reputations.

    And let’s not pretend this is about ‘freeing humans.’ It’s about cutting costs. If AI reduces headcount by 30%, guess who gets the bonus? Not the people who trained it. Not the ones who cleaned the data. The execs. That’s the real operating model here: efficiency through displacement.

    Also, ‘Hybrid Model’? Sounds like a fancy word for ‘we’re too scared to go all-in but too lazy to do it right.’

  3. Nick Rios
    February 12, 2026 at 3:13 AM

    I think Paritosh raises a valid concern, even if the tone is a bit harsh. The risks are real. I’ve worked on two Gen AI pilots: one succeeded because we started small, with clear ownership, and had a fallback protocol. The other failed because we told everyone ‘trust the system’ and then didn’t train them on how to question it. The moment the AI made a bad call and nobody knew how to override it, trust evaporated. It’s not about whether AI is good or bad; it’s about whether we’ve built the scaffolding around it to make it safe, understandable, and repairable.

    Also, I’d push back gently on the idea that ‘AI owns the process.’ That’s a narrative that sounds great in a keynote, but in practice, ownership still needs a human anchor. Maybe not for every invoice, but for the edge cases. The AI should be the first responder, not the final authority. Humans are still the last line of defense against catastrophic error. And that’s not a weakness; it’s a design feature.
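    The “first responder, not final authority” split usually comes down to a confidence threshold: the model’s decision is auto-applied only when it is sure, and everything else lands in a human review queue with an explicit override path. A minimal sketch, where the `classify()` stub and the 0.95 threshold are assumptions standing in for a real model call:

    ```python
    # Illustrative routing: auto-apply high-confidence decisions,
    # queue everything else for a human. classify() is a stand-in
    # for a real model call, not a real API.
    CONFIDENCE_THRESHOLD = 0.95

    def classify(invoice: dict) -> tuple[str, float]:
        """Stub model: large invoices get lower confidence."""
        return ("pay", 0.80) if invoice["amount"] > 1000 else ("pay", 0.99)

    def route(invoice: dict, review_queue: list) -> str:
        decision, confidence = classify(invoice)
        if confidence >= CONFIDENCE_THRESHOLD:
            return decision              # routine case: AI handles it
        review_queue.append(invoice)     # edge case: human anchor decides
        return "pending_review"

    queue = []
    print(route({"id": 1, "amount": 200}, queue))   # pay
    print(route({"id": 2, "amount": 5000}, queue))  # pending_review
    ```

    The design feature is exactly that fallback branch: low confidence never fails silently, it produces a queue a human is responsible for.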

    And yes, data quality is everything. I’ve seen AI models trained on 10,000 invoices… that were all from one department. The system learned that ‘travel’ meant ‘conferences’ and never considered ‘emergency repairs.’ That’s not intelligence. That’s bias with a neural network.
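    The single-department problem is cheap to catch before training: check whether any one source dominates the corpus. A hedged sketch, where the 80% threshold and the `department` field name are illustrative assumptions:

    ```python
    # Sanity check for a lopsided training corpus: flag any source
    # that contributes more than max_share of the records.
    from collections import Counter

    def dominant_source(records, field="department", max_share=0.8):
        """Return the over-represented source, or None if the mix is OK."""
        counts = Counter(r[field] for r in records)
        source, n = counts.most_common(1)[0]
        return source if n / len(records) > max_share else None

    invoices = [{"department": "travel"}] * 9 + [{"department": "facilities"}]
    print(dominant_source(invoices))  # travel (90% from one department)
    ```

    It will not catch subtler skews, but it would have flagged the ‘travel means conferences’ corpus described above before the model ever saw it.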

    The goal isn’t to replace humans. It’s to make the system resilient enough that humans can focus on what’s broken, not what’s routine.

  4. Amanda Harkins
    February 13, 2026 at 8:44 AM

    I think we’re all forgetting that this isn’t about tech. It’s about power. Who gets to decide what gets automated? Who gets to say what counts as an ‘anomaly’? The engineers? The compliance officers? The managers who hate doing paperwork? The AI doesn’t care; it just follows the patterns it’s fed. And if your company’s historical data shows that only managers in the New York office ever override approvals, then the AI will learn that ‘real decisions’ happen there. And suddenly, your ‘neutral’ system is reinforcing hierarchy.

    Also, ‘prompt engineering’? Yeah, good luck teaching your sales team to write a prompt that extracts sentiment trends. They’re not going to do that. They’re going to say, ‘Make this email sound nicer.’ And the AI will give them something that sounds like a LinkedIn post written by a 14-year-old.

    And ‘celebrating wins’? Cute. But if your team’s still getting KPIs based on how many tickets they close per hour, they’re not going to trust an AI that takes 30 minutes to make a decision, even if it’s 99% accurate. You can’t change how people work without changing how they’re measured.

    Also, I’m tired of hearing ‘AI frees people to do creative work.’ What if I just want to do my job without being told I’m ‘unfulfilled’ because I’m not ‘strategizing’? Not everyone wants to be a designer. Some people just want to process invoices and go home.

  5. Jeanie Watson
    February 13, 2026 at 9:55 AM

    This whole thing is just corporate buzzword bingo with a side of hype.
