Becoming a Frontier Firm: How We’re Re-Engineering Cloud Direct for the Agentic Era

By Paul Sells, CTO, Cloud Direct

Artificial Intelligence is already reshaping how organisations operate, and the shift towards the agentic era is only accelerating that change. But the real transformation isn’t just technical; it’s human: new skills, new ways of working, and the confidence to collaborate with agents day-to-day.

I felt that pressure early on. Teams were already running experiments, customers were seeking guidance we hadn’t yet standardised, and Microsoft was advancing rapidly into agent-led transformation. We needed a unified engine that could build capability across our people, govern risk, and convert experimentation into repeatable value.

The solution was to build our AI Centre of Excellence: a structure designed to help us operate as a Frontier Firm before we ever advised customers to do the same.

Our AI CoE focuses on four core priorities:

  • Governance and Safety: Making sure every AI initiative is responsible, secure, and compliant.
  • Value Identification: Prioritising AI investments based on measurable business impact – not hype.
  • Agentic Operating Model: Shifting the company towards human-led, agent-operated workflows.
  • Blueprints and Reusability: Turning successful internal patterns into customer-ready accelerators.

Foundations that make experimentation safe (and easy)

Governance only works if it accelerates delivery rather than slowing it down. Alongside policy, we established practical “landing zones” for AI work: approved tooling and environments, identity and access patterns, data classification and handling rules, logging and audit expectations, and a lightweight route from prototype to production. The goal was simple: make the right thing the easy thing, so teams can test ideas quickly without compromising security, privacy, or trust.

Our agent-ready transformation

Early momentum matters in any transformation, and we were intentional about the projects we selected to kickstart ours. We knew we needed to focus on initiatives that were real, visible, and achievable. These early projects were intended to teach us quickly what genuinely worked across different teams.

Agent project selection

Our first projects were selected based on defined attributes:

  • High visibility – to build confidence across the organisation
  • Diverse tooling – to accelerate our understanding of what works
  • Cross-functional – to encourage collaboration over traditional business silos
  • Genuine business and customer needs – to ensure relevance and impact.

To keep momentum grounded in outcomes that matter, each use case had a simple measurement plan from day one: cycle time reduction, fewer handoffs, improved first-time resolution, and ultimately colleague and customer experience. Even where benefits were initially directional, agreeing how we’d track impact helped us prioritise what to scale.

These use cases included a Sales Administration Agent, a Customer Service Chatbot, an Alert Triage Agent, and several focused Copilot‑based agents. Each one addressed an actual operational need and helped us learn what “good” really looked like.

No moonshots. Just real, impactful projects.

AI project highlight: Inside the Alert Triage Agent

Business problem

We built this agent to tackle a challenge most operational teams know all too well: alert fatigue. Engineers were spending too much time gathering context, correlating logs, and determining next steps. Meanwhile, queues continued to grow at pace, and incidents bounced between teams, delaying their resolution. The investigation work was important but repetitive, and it slowed down meaningful response.

Approach

Our solution was an agent that automatically enriched each incoming alert with the telemetry and history responders needed. It analysed the likely cause, suggested recommended actions, and delivered a concise summary directly to our response platform.
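
The enrichment flow described above can be sketched as a small pipeline. This is an illustrative sketch, not our production implementation: the function names (`fetch_telemetry`, `fetch_history`, `analyse`) are hypothetical placeholders injected as callables, so the triage logic stays independent of any particular monitoring platform.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    source: str
    severity: str
    message: str


@dataclass
class TriageSummary:
    alert: Alert
    likely_cause: str
    recommended_actions: list


def enrich_and_summarise(alert, fetch_telemetry, fetch_history, analyse):
    """Gather context for an incoming alert and produce a concise summary.

    The three callables are injected dependencies: one pulls current
    telemetry for the alert's source, one pulls incident history, and one
    performs the analysis (e.g. an LLM call or a rules engine).
    """
    telemetry = fetch_telemetry(alert.source)
    history = fetch_history(alert.source)
    likely_cause, actions = analyse(alert, telemetry, history)
    # The summary object is what gets posted to the response platform.
    return TriageSummary(
        alert=alert,
        likely_cause=likely_cause,
        recommended_actions=actions,
    )
```

Keeping the analysis step behind a plain callable made it easy for us to swap approaches as tooling matured, which mattered given how quickly the ecosystem was moving.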

Impact

The early signal was strong: triage became more consistent, incidents required fewer handoffs, and responders started investigations with clearer context. While we’re still maturing the metrics, the directional benefit is obvious: engineers spending less time hunting for information and more time solving problems.

What we learnt

One of the biggest hurdles we faced early on was data consistency. Alert payloads varied widely, from source to severity. Structured normalisation and resilient parsing became essential, so that messy data could be handled accurately and returned usefully. Once engineers saw the quality and trustworthiness of the agent’s summaries, adoption increased and the team leant into the workflow change.
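
To make the normalisation point concrete, here is a minimal sketch of resilient payload handling. The field names and severity labels are assumptions for illustration; the key idea is that unknown or missing fields fall back to explicit defaults rather than raising, so one malformed source cannot stall the triage queue.

```python
# Map the many severity spellings seen across sources onto one vocabulary.
# These mappings are illustrative, not an exhaustive production list.
SEVERITY_MAP = {
    "crit": "critical", "critical": "critical", "p1": "critical",
    "warn": "warning", "warning": "warning", "p2": "warning",
    "info": "informational", "informational": "informational",
}


def normalise_alert(payload: dict) -> dict:
    """Project an arbitrary alert payload onto a minimal common schema.

    Every lookup has a fallback, so a payload missing fields (or using a
    different field name) still yields a well-formed record.
    """
    raw_severity = str(payload.get("severity") or payload.get("sev") or "").lower()
    return {
        "source": payload.get("source") or payload.get("monitor") or "unknown",
        "severity": SEVERITY_MAP.get(raw_severity, "unclassified"),
        "message": payload.get("message") or payload.get("description") or "",
    }
```

Records that land in the `"unclassified"` or `"unknown"` buckets can then be routed for review, which is itself a useful signal about which sources need attention.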

This learning now shows up directly in how we guide customers: we start with data foundations and operating rhythms before we start building agents. In practice that means helping teams normalise the signals they already have, define what “good” looks like, and put guardrails around access and actions, so agent outputs are trusted, auditable, and adoptable.

Unexpected moments of the agent transformation

Roadblocks

Like any ambitious programme, we hit friction points, including:

  1. Data Foundations slowed down early agent experiments. We overcame this by tightening governance, data classification, and access boundaries.
  2. Cross-team coordination was difficult until the CoE created a clear operating model for roles, ownership, and escalation.
  3. Tooling maturity shifted weekly as we gained more experience. We leaned into customer zero iteration, expecting churn rather than resisting it.

Exceeded expectations

We also encountered some welcome surprises throughout the transformation:

  1. The appetite for agentic work across the business was much greater than expected. As people saw agents successfully perform tasks, teams started proactively identifying opportunities.
  2. The value of small prototypes surprised us: some of our biggest wins came from small, low‑fidelity demos, which unlocked executive alignment far more effectively than lengthy documentation ever could.

Enabling people to manage agents

One of the most important realisations in our journey was that becoming a Frontier Firm is primarily an exercise in behavioural change and not a technical one. People needed to learn how to manage, refine, and collaborate with agents. To support this, we championed micro‑learning, practical experimentation, and a storytelling‑led approach that helped demystify agents for the entire organisation, not just technical teams.

Deploying Microsoft 365 Copilot sped up this cultural shift, moving employees from passive users to active designers of their own agents. Throughout all this, Responsible AI remained central. Embedding judgement, ethics, and data protection from the beginning ensured that innovation never compromised trust. Together, these efforts created a sense of psychological safety where teams felt empowered to experiment.

Governing adoption

As experiments increased, we needed to stay focused without adding extra bureaucracy. To achieve this, we created an adoption guardrail framework aligned with our responsible AI use policy. This framework included:

  • Clear use case criteria
  • Data and security checks
  • Measurable success metrics
  • Governance checkpoints
  • A cadence for reviewing value and adjusting scope
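
The guardrails above amount to a gate that every proposed use case passes through. A minimal sketch of that gate follows; the field names are hypothetical, and a real framework would carry richer evidence than booleans, but the shape of the check is the same: approve only when every guardrail is met, and name the gaps when it isn't.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    """A proposed AI use case, with one flag per guardrail (illustrative)."""
    has_clear_criteria: bool
    passed_data_security_check: bool
    success_metrics_defined: bool
    governance_checkpoint_booked: bool
    review_cadence_weeks: int  # 0 means no review cadence has been agreed


def ready_to_proceed(uc: UseCase):
    """Return (approved, unmet_guardrails) for a proposed use case."""
    unmet = []
    if not uc.has_clear_criteria:
        unmet.append("use case criteria")
    if not uc.passed_data_security_check:
        unmet.append("data and security checks")
    if not uc.success_metrics_defined:
        unmet.append("success metrics")
    if not uc.governance_checkpoint_booked:
        unmet.append("governance checkpoint")
    if uc.review_cadence_weeks <= 0:
        unmet.append("review cadence")
    return (not unmet, unmet)
```

Returning the list of unmet guardrails, rather than a bare yes/no, keeps the gate constructive: teams see exactly what to fix before resubmitting.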

This keeps innovation high while keeping risk tightly controlled.

Advice for fellow CTOs

For fellow CTOs beginning their own journey, I’d distil the approach into a few reusable principles:

  • Align on the operating model, not the tools. Agentic AI changes how work gets done; without executive buy-in and shared ownership, progress will slow.
  • Build an empowered AI CoE. Treat it as a change catalyst that raises capability, sets guardrails, and turns prototypes into repeatable patterns.
  • Design for the path to production. Avoid “pilot purgatory” by taking one use case end-to-end, using it as the proving ground for governance, telemetry, and adoption.

The future will be shaped by organisations that reorganise themselves around AI, turning agentic capability into competitive advantage and converting experimentation into business value. Our AI CoE, our early hero projects, and the cultural shift we’re driving are preparing Cloud Direct for this new era, and the patterns we’ve proven internally are exactly what we now take to customers as practical, adoptable blueprints.

If you’d like to join us, explore our Frontier Impact Studio – a two-day design-led workshop that helps organisations build AI right by identifying high‑value use cases, educating and aligning leaders, and turning AI ambition into an actionable roadmap.

Talk to our experts

Get a call back from one of our team to talk about your business.