
If you spend any time around AI right now, you will hear a lot about models: Autonomous reasoning. Multi-step planning. Tool use. Agent frameworks. RAGs. MCPs. It’s all fascinating. And to be fair, a lot of it is genuinely impressive. But there is a gap in the conversation that I keep coming back to: The gap between what AI can do, and what it actually needs to do when it goes live.
That moment where stuff gets real
On stage at the Chatbot Europe Summit in March 2026, Laura Ball (Zoom), and I talked about this idea of “theory vs production”. Because in theory, agentic AI is expansive. It can reason, plan, adapt, and orchestrate. But in production, the conversation changes very quickly. Now you’re dealing with service levels, governance, auditability, risk, escalation. Real people trying to get real things done safely and securely.
As we discussed in the keynote session, agentic AI in production needs to be more than just intelligent. It needs to be constrained, accountable and observable. And that’s where things start to get interesting...
Why things slow down
One of the most common patterns we’ve seen over recent years is that AI rollouts don’t necessarily fail… They just slow down.
A proof of concept gets built, it works well enough, and everyone gets excited. Then someone asks how it connects to the CRM. Or what happens if it gives the wrong answer. Or how decisions are logged. Or who actually owns it once it's gone live. And suddenly, progress stalls: Integration is harder than expected. Data is spread across too many systems. Guardrails are not clearly defined. Governance kicks in. Nobody quite owns the outcome.
None of this is necessarily surprising. It's just the reality of trying to take something new and make it work inside an existing organisation. The reality is that AI capability is moving incredibly fast, but most organisations are not. And that maturity gap is the challenge.
A simpler way to think about it
The way we have started to think about this is quite simple: If you want AI to work in production, you need to manage it like you manage people. This concept tends to land quite well, because most leaders understand what that looks like.
When you bring someone into a team, you don't just give them full autonomy and hope for the best. You define their role. You give them access to the right systems. You train them. You set expectations. You monitor how they are performing. You coach them. You improve them over time.
Why would AI be any different?
In production environments, agentic AI needs boundaries. It needs to know what it is allowed to do. What it is not allowed to do. When it should escalate. How it should behave when it is unsure. It also needs to be visible. You need to be able to see what it is doing, understand why it made a decision, and improve it over time. That’s what makes it usable.
1. Create the right working conditions
Once AI starts doing real work, you cannot get away with a loose setup. You need to know where it is getting its data from. You need to control what systems it can interact with. You need to apply guardrails consistently. You need to be able to test it and observe it.
Without that, you’re not really managing AI. You are just experimenting with it.
Because when you’re using different tools for testing, different places for guardrails, different dashboards for analytics, and different environments for deployment, you introduce friction and risk. It slows things down, makes governance harder, and reduces confidence.
Bringing all of that together into a single control plane makes a huge difference. It means you can manage data, tools, guardrails, testing, and deployment in one place. It sounds simple, but it changes how quickly teams can move. This is where solutions like Zoom Virtual Agent are evolving quickly. Not just in terms of capability, but in how manageable they are becoming.
2. Define what and how you’ll measure
Another area that often gets oversimplified is measurement. For example, containment rate gets a lot of attention. And it’s useful as a starting point, but measuring a chatbot based solely on containment rate is like measuring a member of staff based on their attendance. It tells you something, but not enough.
If we go back to the idea of managing AI like people, we should be asking better questions:
Is it actually resolving things properly?
Is it reducing the workload on the team?
Is it escalating at the right time?
Is it making journeys smoother?
Is it generating useful insight?
That’s where the real value of Agentic AI lies.
So, what’s changed?
There’s a reason this conversation is changing now. The language models are good enough. The APIs are there. Governance is starting to mature. The platforms are improving. Organisations are beginning to trust AI, but in a more controlled way. And that is what creates the inflection point.
Because now, the limiting factor is not the technology. It’s how well you deploy it.
The optimal AI deployment model
The key is not trying to do everything at once:
Narrow the scope hard. One or two use cases maximum. Something high volume, well understood, and measurable.
Connect it to the systems it needs and define what it is allowed to do. This is where a lot of the real work sits.
Test it properly. Not just happy paths, but edge cases. Things you know will cause problems. You actively try to break it (often referred to as red teaming, or adversarial testing).
Go live in a controlled way. With people watching it, supporting it, and improving it.
Measure, learn, invest, develop, and iterate. One or two use cases at a time.
When you step back, it looks a lot like onboarding a new member of the team, right? Because that’s exactly what you're doing!
And this is the shift that matters the most. Most organisations are still treating AI like a magic feature. Something you switch on. Something that should just work (because that’s what the demo suggested). But that’s not how it plays out in reality. Agentic AI is much closer to a new member of the team than it is to a piece of software:
It needs structure.
It needs boundaries.
It needs oversight.
It needs investment.
It needs leadership.
And when you treat it that way, something interesting happens… It starts to deliver! Not just in demos. Not just in POCs. But in real environments, with real users, driving real outcomes.
That’s the difference between theory and production. That’s where most of the value sits, and - for most - it’s still waiting to be unlocked.
Work with us
If you’re starting to think about how agentic AI fits into your organisation, the next step isn’t another proof of concept. It’s understanding how to take it into production, safely and at pace.
That’s where Acceleraate and Zoom come in.
Zoom provides the platform: A modern, AI-first environment that brings together data, tools, guardrails, and orchestration into a single, manageable layer, enabling deployment across all of your business touchpoints.
Acceleraate focuses on the hard part: Designing, deploying, and operating AI in real environments, with the governance, integration, and iteration needed to make it stick.
If you want to move beyond theory and start delivering real outcomes, we’d be happy to share what that looks like in practice.
Move from AI theory to production
About the author
News & insights.
Part of Founded Group Limited




