Building AI Chatbots and Assistants with Vibe Coding and Retrieval Systems
Susannah Greenwood

I'm a technical writer and AI content strategist based in Asheville, where I translate complex machine learning research into clear, useful stories for product teams and curious readers. I also consult on responsible AI guidelines and produce a weekly newsletter on practical AI workflows.

8 Comments

  1. Nick Rios
    February 23, 2026 at 15:40

    Vibe coding feels like being handed a toaster and told to bake a soufflé. You get something warm and kinda edible, but you have no idea if it’s safe to eat or if the wiring’s gonna catch fire. I tried building a simple FAQ bot for my side project. It worked for two days, then it started hallucinating answers about our refund policy. Turned out it was pulling from a deprecated doc. No one told me that. Now I’m just stuck staring at a blank screen wondering if I should’ve just written the damn thing myself.
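    The failure mode above (retrieval quietly serving a deprecated doc) is often fixable at indexing time. A minimal sketch, assuming each document carries a `deprecated` metadata flag; the corpus shape and field names here are illustrative, not any specific framework's API:

    ```python
    # Sketch: drop deprecated docs before they ever reach the retrieval index.
    # The "deprecated" flag is an assumed metadata field, not a standard one.

    docs = [
        {"id": "refund-policy-v2", "text": "Refunds within 30 days.", "deprecated": False},
        {"id": "refund-policy-v1", "text": "Refunds within 90 days.", "deprecated": True},
    ]

    def live_corpus(documents):
        """Keep only documents not flagged as deprecated."""
        return [d for d in documents if not d.get("deprecated", False)]

    index_ready = live_corpus(docs)
    print([d["id"] for d in index_ready])  # ['refund-policy-v2']
    ```

    The point is that the filter runs before indexing, so a stale doc can never be retrieved, rather than hoping the model ignores it at answer time.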

  2. Amanda Harkins
    February 23, 2026 at 21:09

    It’s wild how we’re pretending this isn’t just AI glue. You type ‘make a chatbot’ and boom, you get a Frankenstein of APIs, endpoints, and half-baked logic. It’s not coding. It’s summoning. And like any good summoning, there’s a cost. The AI doesn’t care if your customer data leaks. It just wants you to say ‘more’ so it can keep generating. I’m not scared of the tech. I’m scared of how fast we’re normalizing blind trust in it.

  3. Jeanie Watson
    February 24, 2026 at 21:35

    So I tried Bolt. Made a bot in 20 minutes. It answered ‘Where’s my order?’ with ‘I don’t know, ask your mom.’ Then it crashed. Took me an hour to realize it tried to connect to port 8080 but our server runs on 3000. No warning. No explanation. Just ‘error: connection refused.’ I gave up. Maybe I’m just lazy. Or maybe this isn’t for me.
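    A bare ‘connection refused’ like the one described above is cheap to diagnose before blaming the bot. A minimal sketch of a port sanity check, using the 8080/3000 numbers from the comment; nothing here is specific to Bolt:

    ```python
    import socket

    def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if something is accepting connections on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # The generated bot assumed 8080; the real server listens on 3000.
    for port in (8080, 3000):
        status = "open" if check_port("localhost", port) else "refused"
        print(f"localhost:{port} -> {status}")
    ```

    Running something like this first turns a mystery error into a one-line config fix.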

  4. Tom Mikota
    February 26, 2026 at 13:58

    Oh wow, so now we’re just typing into a black box and calling it ‘productivity’? That’s not vibe coding. That’s outsourcing your brain to a chatbot that doesn’t even know what a semicolon is. And don’t get me started on the security. You think IBM’s report is scary? Try explaining to your CISO that your ‘customer support bot’ is sending raw support tickets to Anthropic’s servers because the AI ‘assumed’ it was okay. This isn’t innovation. It’s negligence with a UI.

  5. Mark Tipton
    February 27, 2026 at 21:52

    Let’s be brutally honest: vibe coding is a Ponzi scheme disguised as a productivity tool. The early adopters? They’re not building apps; they’re building hype. The tools work for 10% of use cases, and the other 90%? They’re ticking time bombs. The fact that 73% of Fortune 500 CTOs refuse to deploy these systems isn’t a bug, it’s a feature. It’s the market screaming ‘this isn’t production-grade.’ And yet, we’re all pretending it’s the future. The future is a firewall. The future is code reviews. The future is not letting a language model write your authentication middleware. This isn’t progress. It’s a Trojan horse with a React frontend.

  6. Adithya M
    February 28, 2026 at 22:06

    I built a bot for our internal inventory system using Windsurf. It worked great until it started returning negative stock numbers. Turns out the AI confused ‘in transit’ with ‘out of stock.’ I had to manually rewrite the logic layer. Took me 3 hours. But I saved 10 days of coding from scratch. So yeah, it’s messy. But it’s still faster than starting from zero. Use it as a scaffold, not a finished house.
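    The negative-stock bug above typically comes from subtracting in-transit units as if they were gone. A minimal sketch of the corrected shape; the field names (`on_hand`, `in_transit`, `reserved`) are hypothetical, not Windsurf output:

    ```python
    # Sketch: keep "in transit" tracked separately instead of subtracting it,
    # which is the confusion that produced negative stock numbers.

    def available_stock(on_hand: int, in_transit: int, reserved: int) -> int:
        """Units sellable right now: on-hand minus reserved.
        In-transit units are incoming, so they never reduce availability."""
        available = on_hand - reserved
        return max(available, 0)  # clamp so the UI never shows negative stock

    print(available_stock(on_hand=5, in_transit=12, reserved=3))  # 2
    ```

    The clamp is a display safeguard; if `reserved` ever exceeds `on_hand`, that is worth logging separately rather than showing a negative count.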

  7. Jessica McGirt
    March 2, 2026 at 04:48

    Just a quick note: if you're building anything that touches customer data, please, please, please audit the generated code. I’ve seen bots that auto-forward emails to external domains because the AI ‘assumed’ that was part of the workflow. No one caught it until a client sent a complaint. Vibe coding is powerful, but power without responsibility is dangerous. Always review. Always verify. Always assume the AI made a mistake.
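    The ‘always audit’ advice above can be partly automated as a cheap first pass. A minimal sketch that flags generated Python files touching email or external endpoints; the patterns are illustrative and deliberately incomplete, not a substitute for human review:

    ```python
    import re
    from pathlib import Path

    # Illustrative red-flag patterns for generated code; extend per your stack.
    SUSPECT_PATTERNS = [
        r"smtplib|sendmail|send_message",            # outbound email
        r"requests\.(post|get)|urlopen",             # raw HTTP calls
        r"https?://(?!localhost|127\.0\.0\.1)\S+",   # hard-coded external URLs
    ]

    def audit(path: str) -> list[tuple[str, int, str]]:
        """Return (file, line number, line text) for every suspicious line."""
        hits = []
        for file in Path(path).rglob("*.py"):
            lines = file.read_text(errors="ignore").splitlines()
            for lineno, line in enumerate(lines, 1):
                for pat in SUSPECT_PATTERNS:
                    if re.search(pat, line):
                        hits.append((str(file), lineno, line.strip()))
                        break
        return hits
    ```

    A scan like this would have surfaced the auto-forwarding behavior described above before a client ever saw it; it catches the obvious cases so reviewers can spend their attention on the subtle ones.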

  8. Donald Sullivan
    March 2, 2026 at 21:44

    Stop acting like this is revolutionary. It’s just automation with a buzzword. You still need someone who knows what an API is to fix it when it breaks. You still need a dev to patch the security holes. You still need a lawyer to sign off on the compliance risks. So why not just hire one? Stop pretending you’re a developer because you told an AI to ‘make a chatbot.’ You’re not. You’re a prompt engineer with a credit card bill.
