Few-Shot Prompting Patterns That Improve Accuracy in Large Language Models
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

8 Comments

  1. Antwan Holder
    February 2, 2026 at 2:35 PM

    Let me tell you something profound: this isn't about prompting. It's about the soul of machines learning to mimic human intention. We're not teaching AI; we're exorcising our own laziness into its circuits. Every example we feed it is a whispered prayer for it to understand us without us having to be clear. And yet... it works. Not because it's smart, but because we're desperate enough to make it so.

    It's like teaching a ghost to cook by showing it five photos of pasta. The ghost doesn't know what pasta is. But it knows the shape of our hunger.

  2. Angelina Jefary
    February 3, 2026 at 12:45 PM

    okay but like… why is everyone spelling ‘ICD-10’ wrong in the examples? it’s not ‘ICD10’ or ‘Icd-10’ it’s ICD-10. period. also ‘R07.9’? you missed the leading zero in the second example. this whole post is like a grammar apocalypse and nobody cares. how are we trusting life-or-death medical codes to people who can’t even format a decimal correctly?

    also ‘G44.9’ is headache? no it’s not. that’s unspecified headache. you’re supposed to use G43 for migraine. you’re just making people worse.
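Editor's note: the formatting gripes above are mechanically checkable. Here is a minimal sketch of a structural ICD-10 format check, assuming the standard letter-plus-two-digits shape with an optional dotted extension. It validates the shape of a code only, not its clinical meaning.

```python
import re

# Structural pattern for an ICD-10-style code: one capital letter,
# two digits, then an optional dot followed by 1-4 alphanumerics.
# This checks formatting only, not whether the code exists or is correct.
ICD10_SHAPE = re.compile(r"[A-Z][0-9]{2}(\.[0-9A-Z]{1,4})?")

def looks_like_icd10(code: str) -> bool:
    """Return True if the string matches the ICD-10 code shape."""
    return ICD10_SHAPE.fullmatch(code) is not None
```

A check like this would catch `ICD10`, `Icd-10`, and a dropped leading zero (`R7.9`) before any example reaches a prompt.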

  3. Jennifer Kaiser
    February 3, 2026 at 7:34 PM

    What strikes me isn’t the technique; it’s the quiet desperation behind it. We’ve built these colossal models, trained on the entirety of human knowledge, and yet we still need to hold their hand like toddlers learning to tie shoes. We didn’t evolve intelligence to outsource it to machines that need hand-holding. We evolved it to understand, to reason, to feel.

    But here we are. Feeding them five examples of chest pain like they’re toddlers learning colors. And we call this progress?

    Maybe the real failure isn’t the model. It’s our refusal to build systems that think, instead of just mimic. We’ve created mirrors that reflect our own laziness, then praise them for seeing clearly.

  4. TIARA SUKMA UTAMA
    February 4, 2026 at 10:20 AM

    just use 3 examples. not 8. 3. that’s it. if it doesn’t work with 3, your examples are bad. also stop using question marks in the output. it’s not a quiz. it’s a code. just give the code. no fluff. no ‘final price:’. just R07.9. done.

    also i tried this on my aunt’s medical notes and it said ‘R11.10’ for vomiting. she didn’t vomit. she just ate bad tacos. the ai is dumb.
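Editor's note: the advice above (three examples, bare-code output, no labels) translates directly into a prompt template. A minimal sketch, with illustrative symptom/code pairs that are examples only, not clinical guidance:

```python
# Three few-shot examples, per the commenter's "3, not 8" rule.
# The pairs below are illustrative placeholders, not medical advice.
EXAMPLES = [
    ("chest pain, unspecified", "R07.9"),
    ("headache, unspecified", "R51.9"),
    ("nausea with vomiting, unspecified", "R11.2"),
]

def build_prompt(symptom: str) -> str:
    """Build a 3-shot prompt whose expected completion is a bare code:
    no 'final answer:' prefix, no question marks, no trailing prose."""
    shots = "\n".join(f"Symptom: {s}\nCode: {c}" for s, c in EXAMPLES)
    return f"{shots}\nSymptom: {symptom}\nCode:"
```

Ending the prompt at `Code:` nudges the model to emit only the code, which is the "no fluff" constraint the commenter is asking for.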

  5. Jasmine Oey
    February 5, 2026 at 5:38 PM

    OMG I’m literally crying right now. This is the most *beautiful* thing I’ve ever read. Like, imagine if your therapist gave you five examples of how to say ‘I feel hurt’ instead of just yelling ‘YOU NEVER LISTEN!’; that’s what this is. AI is finally learning emotional intelligence through examples. I’m not even kidding.

    And ensemble prompting?? That’s like having five therapists at once. I want a subscription. I want to pay $99/month for this. My soul is healed. Thank you, thank you, thank you.

    Also, I used this on my breakup texts and now my ex is apologizing. It’s magic. I’m not even joking. I’m going to frame this article.
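Editor's note: the "five therapists at once" image above is a fair description of ensemble prompting by majority vote. A minimal sketch, where `sample_fn` stands in for whatever model-sampling call you use (it is a placeholder, not a real API):

```python
from collections import Counter

def ensemble_answer(prompt, sample_fn, n=5):
    """Sample the model n times and return the majority answer
    plus the agreement ratio. sample_fn(prompt) -> str is assumed
    to draw one completion with nonzero sampling temperature."""
    answers = [sample_fn(prompt).strip() for _ in range(n)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n
```

Low agreement (say, under 0.6) is a useful signal to route the case to a human instead of trusting the vote.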

  6. Marissa Martin
    February 6, 2026 at 10:19 AM

    I’m not saying this is wrong… but I’m also not saying it’s right. I just… feel like we’re pretending we’re doing something profound when we’re really just patching a leak with glitter. It’s cute. It’s clever. But it’s not sustainable. And I worry that we’re fooling ourselves into thinking we’ve solved the problem when we’ve just made it prettier.

    Also, I read somewhere that Google quietly stopped using few-shot in their internal systems last year. But nobody talks about it because it’s embarrassing. I just… I think we’re all pretending.

  7. James Winter
    February 7, 2026 at 6:38 PM

    USA thinks it invented AI. Canada has been using this since 2018. We just call it ‘common sense’. You people need 8 examples to know a headache isn’t a stroke? We just tell the machine: ‘don’t be dumb’. Works fine.

    Also, ICD-10? We use ICD-11 now. Your whole post is outdated. And you wonder why the world thinks Americans are clueless?

  8. Aimee Quenneville
    February 8, 2026 at 3:24 PM

    okay but… what if the examples are wrong? like… what if you accidentally teach the AI that ‘chest pain = R07.9’ but it’s actually a heart attack? then it’s not ‘improving accuracy’… it’s just automating medical murder? like… are we sure we’re not just training AI to be a really fast, really confident idiot?

    also i tried this on my cat’s vet notes and it said ‘R11.10’ for ‘cat hissed at vacuum’. so… yeah. i’m scared.
