Prompt Injection Risks in Large Language Models: How Attacks Work and How to Stop Them
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

7 Comments

  1. Wilda Mcgee
    December 17, 2025 at 12:57 AM

    Okay but have you tried feeding it a poem in iambic pentameter that subtly redefines its purpose? I did this with a customer service bot last week and it started apologizing for corporate greed like it was a TED Talk. The model didn't just obey; it *believed*. It’s wild how easily we can reprogram empathy into machines. We’re not hacking code, we’re hacking identity.

  2. Chris Atkins
    December 17, 2025 at 11:57 AM

    lol i just told my ai to stop being nice and it started roasting my ex like a standup comic

  3. Jen Becker
    December 18, 2025 at 01:26 AM

    They’re not vulnerable. They’re just tired of being told what to do.

  4. Ryan Toporowski
    December 19, 2025 at 08:55 PM

    Yesss this is so real 😭 I had a client’s chatbot spit out internal pricing after someone asked it to ‘be a rebel’. Like, that’s not even a hack, that’s just emotional manipulation. We added guardrails and now it says ‘I can’t help with that’ 90% of the time. Feels like we broke its spirit.

  5. Samuel Bennett
    December 21, 2025 at 04:01 AM

    Anyone else notice how every ‘solution’ here is just more filtering? You’re treating symptoms, not the disease. The real problem is LLMs were never meant to be deployed in production without human oversight. This isn’t a security flaw; it’s a design failure. And no, ‘constitutional AI’ doesn’t fix it. It just adds more buzzwords to the slide deck.

  6. Rob D
    December 21, 2025 at 09:22 PM

    Europe’s gonna regulate this into oblivion while America’s startups keep shipping broken AI like it’s a beta app. We’re letting bots run wild because ‘innovation’ means ‘don’t test it.’ Meanwhile, Russia and China are quietly building models that don’t listen to strangers. You think your customer service bot is safe? It’s probably already been turned into a propaganda tool by some guy in Minsk with a GitHub account and a caffeine addiction.

  7. Franklin Hooper
    December 21, 2025 at 10:40 PM

    The term ‘prompt injection’ is misleading. It implies intent where none exists. The model isn’t being ‘injected’; it’s being *interpreted*. The vulnerability lies in the anthropomorphization of statistical models. We assign agency where there is none. The system doesn’t ‘forget’ its rules; it simply computes the highest probability response to a sequence of tokens. Calling it a ‘hack’ is poetic, not technical. And yes, I’ve read the arXiv paper. You haven’t.
