In the race to integrate Generative AI into business solutions, one truth has become clear to us at Certus3: context and orchestration are king.
Over the past 12 months, we’ve been evolving our AssuranceAmp platform to incorporate GenAI and large language models (LLMs). It’s been a hands-on, deeply technical, and often humbling experience. While the hype around AI has been loud, the real work begins when you try to embed it into a vertical business application that delivers useful, reliable outcomes.
What We Thought Would Be Hard… Wasn’t
Like many, we began with the assumption that incorporating Retrieval Augmented Generation (RAG) and designing the perfect prompt would be the hard part. That’s important work, no doubt—but what turned out to be much harder (and more important) was getting the right data into the model in a form it could understand, and doing that reliably and securely.
That’s where context and orchestration come in.
The Context Challenge: Business Data Is Not “Chat-Ready”
Real business data doesn’t come neatly packaged for AI consumption. It’s scattered across:
- PDFs, Word documents, spreadsheets, PowerPoint presentations
- APIs from line-of-business systems
- Internal databases
- Legacy tools and internal notes
- And let’s not forget: tribal knowledge in people’s heads
And even when you extract it all, it’s not formatted the way a web app or chatbot needs it. A web interface might expect data pre-filtered, visualised, or structured. A GenAI system, on the other hand, needs semantically meaningful, text-based inputs that provide background, nuance, and relevance.
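To make that concrete, here is a minimal sketch of the kind of preparation step involved: turning raw extracted text into small, traceable chunks that a retrieval layer can work with. The ContextChunk type, chunk sizes, and field names are illustrative assumptions for this post, not the AssuranceAmp implementation.

```python
from dataclasses import dataclass

@dataclass
class ContextChunk:
    text: str                    # a semantically coherent piece of source text
    source: str                  # where it came from: file name, API, database table
    section: str | None = None   # optional heading or field name, kept for traceability

def chunk_document(text: str, source: str,
                   max_chars: int = 1200, overlap: int = 200) -> list[ContextChunk]:
    """Split raw extracted text into overlapping chunks small enough for retrieval."""
    chunks: list[ContextChunk] = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(ContextChunk(text=text[start:end].strip(), source=source))
        if end == len(text):
            break
        start = end - overlap  # overlap so sentences aren't chopped in half at chunk boundaries
    return chunks
```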
Getting this right is crucial. LLMs know nothing about your business, and they retain nothing between calls: they're stateless. Every time they generate a response, you have to supply the right context, and it has to be your relevant business context. Otherwise they'll hallucinate or, worse, mislead the user with confident-sounding nonsense.
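Statelessness has a very practical consequence: every single request has to carry its own context. Building on the ContextChunk sketch above, this is roughly what assembling one call looks like; the prompt wording and message format are assumptions for illustration, not our production prompts.

```python
def build_messages(question: str, retrieved_chunks: list[ContextChunk],
                   history: list[dict]) -> list[dict]:
    """Assemble the complete message list the model needs for this one call."""
    context_block = "\n\n".join(f"[{c.source}] {c.text}" for c in retrieved_chunks)
    system_prompt = (
        "You are a project assurance assistant. Answer only from the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context_block}"
    )
    # The model sees exactly this: the system prompt, whatever prior turns we chose
    # to keep, and the new question. There is no hidden memory between calls.
    return [{"role": "system", "content": system_prompt}, *history,
            {"role": "user", "content": question}]
```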
The Orchestration Challenge: Making the AI Useful (and Trustworthy)
Once you have the data, the next hurdle is orchestration: how to pull the right context at the right time, and guide the LLM through a task or conversation in a structured way.
In AssuranceAmp, that meant:
- Embedding business logic to ensure responses are aligned with project assurance methodologies
- Building a retrieval-augmented generation (RAG) framework to dynamically fetch relevant context from diverse sources
- Maintaining state across a chat session—even though the underlying model doesn't
- Ensuring data privacy and access controls were respected at all times
- Orchestrating LLM-agent behaviour for multi-step tasks (e.g., assess, summarise, recommend), as sketched below
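To give a flavour of what that orchestration looks like, here is a simplified loop that walks the model through a multi-step task while keeping session state and access control on our side. It builds on the sketches above; retrieve, is_permitted, and call_llm are hypothetical stand-ins for vector search, the platform's permission checks, and the model API, and none of this is the AssuranceAmp code itself.

```python
from typing import Callable

STEPS = ["assess", "summarise", "recommend"]   # the multi-step task from the list above

def run_assurance_task(question: str, user_id: str, session_history: list[dict],
                       retrieve: Callable[[str], list[ContextChunk]],
                       is_permitted: Callable[[str, str], bool],
                       call_llm: Callable[[list[dict]], str]) -> dict[str, str]:
    """Guide the model through assess -> summarise -> recommend, keeping state on our side."""
    results: dict[str, str] = {}
    for step in STEPS:
        # 1. Pull relevant context, then drop anything this user is not allowed to see.
        chunks = [c for c in retrieve(question) if is_permitted(user_id, c.source)]
        # 2. Rebuild the full prompt for this step: instructions + context + conversation so far.
        messages = build_messages(f"{step.capitalize()}: {question}", chunks, session_history)
        answer = call_llm(messages)
        # 3. The model forgets everything between calls, so session state lives here, not in it.
        session_history.append({"role": "assistant", "content": answer})
        results[step] = answer
    return results
```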
This isn’t just a tech problem—it’s a design and architecture challenge. It’s about how AI fits into your workflows, not the other way around.
The Real Takeaway
If you’re building GenAI into a real business application—not just a chatbot demo—our advice is this:
Start with context. Master orchestration. Then bring in the AI.
It’s not the model that makes the solution smart; it’s the way you prepare your data and feed it to the model: the right data, at the right time, in the right way.
That’s the real work behind AssuranceAmp. And it’s the kind of work that turns AI from a novelty into something that actually moves the needle in critical business processes like project assurance, governance, and risk management.
To help us master this new discipline of data preparation, orchestration, and LLM-agent interaction, we’ve built an internal development and testing platform we call AgentAmp. It allows us to rapidly design and refine data flows, prototype agent behaviours, and, critically, test everything in a structured and repeatable way. Testing, as it turns out, is just as important as orchestration—and that’s the focus of our next blog post.
In upcoming posts, we’ll share more of what we’ve learned—about document understanding, secure retrieval, agent frameworks, and how to make AI feel like part of the team (without making things weird).
If you’re on a similar path, we’d love to connect and swap notes.