Security Risks in LLM Agents: Injection, Escalation, and Isolation
Susannah Greenwood
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

1 Comments

  1. Bridget Kutsche Bridget Kutsche
    February 7, 2026 AT 10:58 AM

    Really glad someone laid this out so clearly. I've been telling my team for months that treating LLMs like APIs is a recipe for disaster. The moment you let them trigger workflows without output validation, you're basically handing attackers a remote shell. We implemented sandboxed execution last quarter and saw our incident rate drop by 80%. Not magic, just basic hygiene.

    Also, stop using regex to filter prompts. It's 2025. Use semantic intent classifiers. Guardrails AI is free, open-source, and actually works.

    And yes - adversarial testing weekly. Not monthly. Weekly.

Write a comment