What We Learned About LLM Hallucinations

Published on Nov 20, 2025 | 4 min read
Customer Support
AI & Automations

By Antanas Bakšys, CEO & Co-Founder of Ace Waves

Many still view LLM hallucinations as a strange glitch in modern models. In reality, language models hallucinate because hallucination is part of their design.

We learned this the hard way. Back in 2023, during our first attempts to build AI customer support agents, the model kept “freestyling” whenever it lacked context. It invented policies, skipped steps, improvised actions, and picked the wrong tools, not because something was broken, but because that’s simply how these models operate. They predict plausible text; they have no understanding of rules, risks, or business procedures.

People expect LLMs to behave like deterministic systems, but the underlying architecture was never built for correctness: if you give a model a workflow to follow, it will follow the pattern, not the procedure.

Our turning point at Ace Waves came when we stopped trying to force the model to behave and instead started engineering the system so it simply couldn’t turn a hallucination into a real action. This is the part most companies skip, and the reason so many early attempts failed.

Our AI agents follow each client’s existing support procedures, using their systems and real business data, not model guesses. At the core is Agent Procedures, our multi-agent orchestration engine that tells each AI agent exactly what to do, when to do it, and which tools it may use. Our guardrails layer then validates every action and blocks anything outside the defined workflow, including prompt injection attempts. Finally, a supervisor layer oversees the entire process, and if confidence drops, AI agents escalate to a human instead of guessing.
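To make the idea concrete, here is a minimal sketch of the guardrail pattern described above: each workflow step carries an explicit tool allowlist, and any action outside it is blocked, while low confidence triggers escalation to a human. The names (`Step`, `Guardrail`, the tool strings) are hypothetical illustrations, not Ace Waves’ actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: a workflow step with an explicit tool allowlist.
@dataclass
class Step:
    name: str
    allowed_tools: set

@dataclass
class Guardrail:
    workflow: list            # ordered Steps the agent must follow
    min_confidence: float = 0.8

    def validate(self, step_index: int, tool: str, confidence: float) -> str:
        # Escalate to a human instead of guessing when confidence is low.
        if confidence < self.min_confidence:
            return "escalate_to_human"
        # Block any tool call outside the current step's allowlist.
        if tool not in self.workflow[step_index].allowed_tools:
            return "blocked"
        return "allowed"

guard = Guardrail(workflow=[
    Step("verify_identity", {"lookup_customer"}),
    Step("process_refund", {"refund_api"}),
])

print(guard.validate(0, "lookup_customer", 0.95))  # allowed
print(guard.validate(1, "delete_account", 0.95))   # blocked
print(guard.validate(1, "refund_api", 0.40))       # escalate_to_human
```

The key design choice is that the model never gets the final say: the validation runs outside the model, so even a confidently hallucinated action cannot reach a real system.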

The moment this clicked was earlier this year. One of our agents went live with a customer and quietly resolved thousands of cases without drifting or improvising. It was the same model family as the versions that previously broke, yet the behavior was completely different, because the system around it was finally right.

This only became possible recently. In 2023, the models, the tool-use capabilities, and the multi-agent frameworks simply weren’t mature enough. You could have the right ideas, but the ecosystem wasn’t ready to support them.

AI hallucinations won’t go away. The real work is making sure they never escape into the real world. That’s the architecture we’ve been building, and it’s the reason our AI agents can now operate with the kind of reliability that used to feel impossible.

Why does this matter for customer-facing companies?

Customer support is extremely unforgiving. A human agent who makes up a policy gets fired. An AI system that makes up policies destroys trust overnight. Support has real consequences: refunds, cancellations, fraud risk, lost orders, and sensitive data. You can’t deploy a probabilistic system into this world unless you’ve engineered every path it can take and every path it must never take.

The companies that move fast without this understanding will face high-profile failures. The companies that get it right will end up with something far more powerful than automation: a support operation that is immediate, accurate, multilingual, infinitely scalable, and economically impossible to compete with using humans alone.

That’s why all of this matters. Not as an academic argument about hallucinations, but as the difference between AI being a liability and AI becoming the core engine of your customer operations for the next decade and beyond.

© AceWaves, all rights reserved