In the final part of this three-part blog series on Agentic AI, we outline six key considerations before you get started. If you missed them, part one provided an introduction to Agentic AI, while part two looked at how Azure AI Foundry Agent Service can help you get started.

In one respect, getting started with Agentic AI couldn’t be easier. Create an Azure AI Foundry project in your Azure subscription, and off you go.

But we see this time and time again with new technologies: a lot of time and effort goes into exploration and experimentation, and much of it is wasted. That’s because the experimental uses aren’t aligned to driving business value – and since the experiments don’t deliver business value, things don’t go much further.

So, before diving head-first into Agentic AI, it’s wise to take a step back and consider some of the bigger issues. Agentic AI is a fundamental shift in how decisions are made, tasks are executed, and systems evolve.

These six key areas of consideration will help you to embark on your use of Agentic AI responsibly and effectively.

1: Strategy first

Define clear goals: Understand your organisation’s business challenges – whether it’s automating customer service, optimising logistics, or improving sales processes. Most departmental heads will share their workflow and process pains with you, and Agentic AI works best with a well-scoped mission.

Prioritise pragmatically: Start with smaller problems that deliver tangible, measurable outcomes – larger, more ambitious ones can follow. Early wins will also help you build trust.

Autonomy versus control: How does your organisation (more specifically, its leadership) feel about agents making decisions? How much oversight will they want? Azure AI Foundry Agent Service allows for both, but it’s important to define requirements and boundaries before you start work.

2: Technical readiness

Microsoft Azure: To use Azure AI Foundry Agent Service, you’ll need an Azure subscription with the right roles set up to create projects and agents. These will include Azure AI Account Owner, Contributor, and User.

Check infrastructure compatibility: Ensure that your systems can support agentic workflows. Consider which APIs, databases, or systems your agents will need to interact with, and check that they can. Azure AI Foundry Agent Service supports over 1,400 Logic Apps connectors and has native integration with SharePoint, Fabric, and Azure Storage, so this may not be an issue.

Evaluate data quality: AI agents need clean, structured, and relevant data to reason effectively – garbage in, garbage out. In the short term, data quality may determine the type of workflow automations you can pursue; it may also shape future data quality initiatives.

Model selection: Agents rely on an underlying large language model, and there’s a wide choice available within Azure AI Foundry Agent Service. Your choices should be based on task complexity, latency tolerance, and cost profile – you can vary model choice by project to suit these requirements.
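
To make that concrete, here’s a minimal sketch of creating an agent with a specific model using the Python SDK for Azure AI Foundry. It assumes the preview azure-ai-projects package and a project connection string held in an environment variable; package, class, and method names may differ between SDK versions, so treat it as illustrative rather than definitive.

```python
# Illustrative sketch only – assumes the preview azure-ai-projects SDK;
# names and signatures may differ in the version you install.
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Connect to an existing Azure AI Foundry project
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],  # hypothetical environment variable
)

# The model is just a parameter – vary it per project to balance
# task complexity, latency tolerance, and cost profile.
agent = project_client.agents.create_agent(
    model="gpt-4o-mini",   # e.g. a cheaper, lower-latency model for simple tasks
    name="triage-agent",   # hypothetical agent name
    instructions="Classify inbound support tickets and route them to the right queue.",
)
print(f"Created agent {agent.id}")
```

Because the model is just a parameter, switching a project from a premium model to a smaller, cheaper one is a one-line change.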

3: Secure by design

Shift-left security: Integrate security from the word go. Consider the reputation of your provider and their security posture, how well the service integrates with your security model, and how you will manage data access and tool permissions.

Set boundaries: Be clear which data agents can access, share, and retain. Build in constraints to prevent bias, misuse, or unintended consequences. There are many frameworks, like KPMG’s Trusted AI model, which can help.

4: The human touch

Design for collaboration: Your AI agents should complement, not replace, human judgment, so build interfaces that allow human guidance and intervention.

Training: As with any new technology, good training is important. Make sure that stakeholders and users understand how agents work, what they can and can’t do, and how to interact with them responsibly.

5: Governance

Set AI use policies: As well as setting boundaries, you should agree acceptable use cases, a review process (since these will likely change over time), and escalation protocols. This should involve legal and operations teams as well as IT.

Accountability: Although Agentic AI can act, humans should remain accountable for outcomes, so decide who’s responsible for agent decisions.

Auditability and transparency: Ensure that agents’ actions and decisions are logged and that people know how to monitor, debug, and, if needed, intervene.

6: Skills development

Skills: Before you start experimenting with Agentic AI, make sure that your dev team has the right skills. Although Azure AI Foundry Agent Service makes experimentation relatively easy, they’ll need skills in Python, C#, TypeScript, Java, or REST.

Prompt engineering: To work well, it is critical that agents have clear instructions and well-defined tools. Your dev team may benefit from specific training in this area.

Getting started: Azure AI Foundry Agent Service provides a range of agent samples and quickstarts to accelerate development, as well as an Agent Playground that allows instructions, tools, and workflows to be tested.

Planning for success

Successful use of Agentic AI requires a considered approach. While some of these considerations are straightforward checks and practical steps, others are more demanding ‘blank sheet of paper’ exercises that involve asking questions like ‘Can this be fast-tracked?’, ‘What are others doing, and why?’ and ‘Are there ready-made good – or even best – practices I can follow?’ Only then will you be setting off on the right foot and heading towards Agentic AI success.

That wraps up our three-part blog series. Feel free to read back over blogs one and two, or if you’re ready to explore Agentic AI in more depth, you can request an introductory call with one of our subject matter experts to see how Cloud Direct can help you successfully benefit from it.

This is the second of a three-part series on Agentic AI. Here, we look at where it’s starting to appear, the power of multi-agent workflows, and how the Azure AI Foundry Agent Service can help. Check out part one if you haven’t already.

Agentic AI isn’t just assistive in nature. Instead, it’s able to use adaptive decision-making to complete a goal or reach a set target. Although it’s still early days with much greater sophistication to follow, we are already seeing Agentic AI offer many exciting possibilities and teams are beginning to be truly empowered in a number of ways.

Built-in capabilities

Agentic AI is starting to appear within existing systems and applications, and that shows no signs of slowing down. In IT, we’re seeing the beginnings of automated incident management (eg ServiceNow AI Agents), where an AI agent monitors infrastructure and, when it detects anomalies, takes action such as restarting services or allocating more resources. There are obvious benefits in terms of reduced mean time to repair (MTTR), fewer outages, and improved SLA compliance.

But Agentic AI is also appearing in other systems, some of which may have departmental ownership that sits outside IT’s direct control yet creates governance issues for IT to address.

Marketing automation platforms (such as HubSpot) are automating aspects of lead prospecting and qualification. By scanning both internal and external data sources, like LinkedIn, these platforms can identify high-value prospects, target them with personalised communications, and prioritise them for conversion. Meanwhile, in HR and recruitment, Agentic AI capabilities are being introduced to source, screen, and rank candidates.

Endless possibilities

Some of the most powerful uses of Agentic AI occur where it focuses on your business’ processes and challenges.

The Japanese IT provider Fujitsu has used Microsoft’s Azure AI Foundry Agent Service to automate the creation of sales proposals, using multiple specialist agents to interpret customer needs, access dispersed knowledge, apply reasoning, and generate a tailored proposal that’s contextually accurate and strategically aligned. It helps relatively new staff by surfacing insights and guidance and helps all sales staff by speeding up the process, with Fujitsu reporting a 67 per cent increase in productivity.

It’s where multiple agents are orchestrated to work together that the outcomes are really striking.

Microsoft has described how a financial services firm can automate its customer onboarding process – from document collection, through identity verification and compliance checks, to account provisioning. Here, a Document Intake Agent receives a form or scanned document and, using File Search and Azure AI Search, separates the information into its component parts. This enables the agent to perform initial validation by checking required fields for completeness. A Review Agent then compares customer data with regulatory parameters and Know Your Customer requirements, flagging any anomalies, and, after a green light, a Setup Agent invokes provisioning tools to create customer accounts and send welcome messages.
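
As a purely conceptual sketch (not Microsoft’s actual implementation), the hand-off between those three agents might be orchestrated along the lines below; the agent functions and data fields are hypothetical placeholders.

```python
# Conceptual sketch of the onboarding hand-off described above. The three agent
# functions and the data fields are hypothetical placeholders, not the real
# Azure AI Foundry implementation.
from dataclasses import dataclass, field


@dataclass
class OnboardingCase:
    raw_document: bytes
    extracted_fields: dict = field(default_factory=dict)
    validation_errors: list = field(default_factory=list)
    approved: bool = False


def document_intake_agent(case: OnboardingCase) -> OnboardingCase:
    # In the scenario above: use File Search / Azure AI Search to split the
    # form into its component parts and check required fields for completeness.
    case.extracted_fields = {"name": "Jane Example", "address": "1 High St", "id_number": "X123"}
    case.validation_errors = [k for k, v in case.extracted_fields.items() if not v]
    return case


def review_agent(case: OnboardingCase) -> OnboardingCase:
    # In the scenario above: compare customer data with regulatory and KYC rules.
    case.approved = not case.validation_errors
    return case


def setup_agent(case: OnboardingCase) -> None:
    # In the scenario above: invoke provisioning tools and send a welcome message.
    print("Provisioning account for", case.extracted_fields["name"])


def run_onboarding(case: OnboardingCase) -> None:
    case = document_intake_agent(case)
    case = review_agent(case)
    if case.approved:
        setup_agent(case)
    else:
        print("Escalating to a human reviewer:", case.validation_errors)


run_onboarding(OnboardingCase(raw_document=b"scanned-form-bytes"))
```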

The possibilities are almost endless, and you may already be thinking how such capabilities could be applied within your own organisation. That’s where Microsoft’s Azure AI Foundry Agent Service could help you.

Azure AI Foundry Agent Service: What you need to know

It’s been described as ‘an assembly line for intelligent agents’. The Azure AI Foundry Agent Service provides a platform and a framework for building intelligent agents that can reason and act.

  • What you get: The components to create AI agents that achieve specific goals, and to orchestrate them together to execute complete workflows.
  • Choice: Agents are composed of a Model, Instructions, and Tools. You can choose which model you use from a growing catalogue which includes GPT-4o, GPT-4, GPT-3.5 (Azure OpenAI), Llama and others.
  • Customisable: You get that same flexibility at every stage – defining the agent’s goals, behaviour, and constraints, choosing the tools it uses, and orchestrating agents together (via Connected Agents). See the sketch below this list.
  • Availability: Azure AI Foundry Agent Service progressed from Preview to General Availability in May 2025, so it’s now fully supported for production workloads.
  • Licensing: Licensing follows Azure’s standard consumption-based pricing, with PAYG (ie runtime), provisioned throughput, and enterprise agreement pricing options.
  • Trustworthy: Microsoft has built in the security, governance, and compliance features necessary to meet enterprise requirements.
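
As promised above, here’s a hedged sketch of how the three building blocks – Model, Instructions, and Tools – come together in code, assuming the preview azure-ai-projects Python SDK; class and method names may have changed in later releases.

```python
# Illustrative only – assumes the preview azure-ai-projects SDK;
# package and class names may differ in current releases.
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import CodeInterpreterTool

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],  # hypothetical environment variable
)

# Tool: gives the agent the ability to run code against data you attach
code_interpreter = CodeInterpreterTool()

agent = project_client.agents.create_agent(
    model="gpt-4o",                         # Model
    name="sales-analysis-agent",            # hypothetical name
    instructions=(                          # Instructions
        "You analyse monthly sales data and produce a short summary. "
        "Never share raw customer records."
    ),
    tools=code_interpreter.definitions,     # Tools
    tool_resources=code_interpreter.resources,
)
```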

Microsoft’s Agent Service isn’t the only game in town, but it does offer an extensive capability and enormous flexibility combined with superb integration with your existing infrastructure, security and governance models. For most organisations, this makes it a strong contender.

Next steps

Before you start exploring Azure AI Foundry Agent Service, there are many ways in which Agentic AI could trip you up. In our third and final blog of this series, we look at how you can get started with six key considerations. But if you’re keen to learn more now, you can request an introductory call with one of our subject matter experts to find out how Cloud Direct can help you successfully benefit from Agentic AI.

Data quality is fundamental, with the cost of bad data running into millions for UK businesses. Here, we look at how and why poor data is costing your organisation dearly, and the practical steps you can take to improve data quality. 

Bad data is costly. Research by analysts Gartner puts the average cost to organisations at a staggering $12.9 million a year – around £10 million. Whether that figure seems high or low to you will depend on a variety of organisation-specific factors, but the key point is that poor data is costly.  

As we become more AI-reliant, that cost is likely to increase. And for most, it’s completely hidden.   

Six ways data deficiencies could be costing your organisation

Poor data typically impacts organisations through operational waste, revenue leakage, and strategic shortcomings.   

  1. Lost revenue  
    Data inaccuracy can result in poor decisions, causing lost sales, poor targeting, and underachieving campaigns. Old or incomplete customer data can mean missed upsell or cross-sell opportunities.  
  2. Wasted time  
    Employees waste hours finding, verifying, or correcting information.  
  3. Increased risk  
    Data errors cause GDPR and regulatory breaches, possibly incurring fines and certainly requiring time-consuming corrective action. Risk models that use poor data will fail to flag preventable issues. 
  4. Negative customer experience 
    Incorrect contact details, preferences, or histories lead to irrelevant messaging or service errors, which undermine customer confidence, damage brand reputation, and add to customer churn.  
  5. Operational inefficiencies 
    Errors in billing, shipping, or inventory management cause rework, returns, and delays – all of which carry a cost.  
  6. Strategic failings 
    Decisions based on flawed insights can misallocate investment, undervalue valid opportunities, ignore key risks, and cause growth strategies to fail. 

Why data matters more than ever 

As John Doublard, CTO at Oak Group and one of the industry contributors to ‘The Data & AI Readiness Playbook’, notes: “The future is data-driven.” 

“AI can only drive genuine business value when it addresses real business issues AND is fuelled by valid data,” explains Cloud Direct’s Data & AI Practice Lead Dan Knott. 

Understanding the limitations of your data, which parts need improving, and how to make those improvements is key to this.  

Six practical steps for data quality improvement

Improving data quality isn’t just the responsibility of IT – although there are technical actions that will help – but something for the whole organisation.   

‘When you’re about to invest heavily in becoming a data-driven business, you need to make sure that the data you’re working with is going to provide for you and not cost you more’.

  1. Define what ‘good’ means 
    If you’re working with bad data, you’ll get bad outcomes, so establish clear data quality dimensions. The Data & AI Readiness Playbook outlines the importance of accuracy, completeness, consistency, validity, uniqueness and timeliness. Add to this any business-specific requirements, such as capturing an accurate address or company registration number for credit checking.  
  2. Perform a data audit 
    Identify your key data assets, such as CRM/ERP and finance data, and assess current data quality. Prioritise high-impact areas and make use of tools to help you highlight errors, duplicates, gaps, or inconsistencies (see the sketch after this list).
  3. Fix existing issues  
    Involve data owners (see below) with data cleansing. The objective is to de-duplicate records, fill in missing fields, standardise formats, and identify old and obsolete data for archiving and removal. Tools like Microsoft Purview and Power Query can helpfully automate some of the work. 
  4. Embed good data governance practices 
    Appoint data owners and data stewards who will be responsible for data quality in key areas. Involve them in creating policies for data entry, usage, retention, and updates, and make use of Microsoft Purview, or similar, to enforce policies and manage access, lineage, and classification.
  5. Educate and engage users 
    Data quality needs to be seen as a shared responsibility, and not IT’s problem alone. Run awareness campaigns or training to help users understand their role in maintaining good data. Within this, celebrate success by showing how clean, reliable data is enabling better decisions and results, and continue running campaigns until data quality is embedded in the organisation’s culture. 
  6. Improve data entry at the source 
    You don’t want to be continually drawn into corrective action, so employ measures to improve data entry at the source. Add validation rules, drop-downs, and formatting controls where data is input, and ensure that departmental training covers the importance of entering accurate data. Wherever possible, automate data capture utilising integrations, forms with logic, and barcode scanners.

Improving data quality requires both technical and cultural change. Start with what matters most, fix the root causes, and embed quality into everyday workflows. Over time, better data leads to faster decisions, happier customers, and lower costs.
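
As a minimal illustration of the kind of tooling the data audit in step 2 calls for, the sketch below uses pandas to surface gaps, duplicates, and inconsistencies; the customers.csv file and its column names are hypothetical.

```python
# Minimal data audit sketch – the file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical export from your CRM/ERP

# Completeness: how many values are missing in each column?
print("Missing values per column:")
print(df.isna().sum())

# Uniqueness: duplicate records, e.g. on email address
print("Duplicate rows by email:", df.duplicated(subset=["email"]).sum())

# Validity/consistency: simple format check on postcodes (illustrative regex)
invalid_postcodes = ~df["postcode"].astype(str).str.match(r"^[A-Z]{1,2}\d", na=False)
print("Rows with suspicious postcodes:", invalid_postcodes.sum())
```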

Next steps

Download a copy of The Data & AI Readiness Playbook to learn more. You’ll discover how others are preparing for and using AI, and a seven-step process to unlock the value of your business data.

In this first of a three-part series on Agentic AI, we take a look at what it is, where it’s being used, and why it’s different to its technological predecessors. In parts two and three, we’ll go on to look at use cases, Microsoft’s Azure AI Foundry Agent Service, and six key considerations to get started.

If you’re on slightly shaky ground with Agentic AI, then you’re in the right place. Although Agentic AI is a rapidly emerging area of artificial intelligence, it’s still not widely understood outside those specialising in autonomous systems.

What is Agentic AI?

Rather than just responding to prompts like a chatbot, Agentic AI proactively pursues objectives – with autonomy, goal-directed behaviour, and adaptive decision-making.

In psychology, ‘agency’ is the capacity to act and produce results, so we’re talking about artificial intelligence that demonstrates that ability. Agentic AI’s aim is to achieve specific goals without constant human oversight.

Why Agentic AI is different

While much of the AI you may be familiar with follows predefined rules or produces prompt-based content, Agentic AI…

  • Exhibits autonomy | This means it can initiate actions and not just react to instructions
  • Solves multi-step problems | It can deal with complex, sequential workflows
  • Learns and adapts | Like a real-life coworker, it will improve over time through feedback loops and real-world interactions
  • Coordinates tasks | This enables an agent to handle specific tasks and work in tandem with other agents towards a shared goal

And it’s the element of autonomy that makes Agentic AI so different. While assistants such as Copilot support people, Agents complete goals.

How Agentic AI works

Before we look at the underlying technology, let’s first understand the process.

Agentic AI typically follows a four-step process (a simple code sketch follows the list below).

  • Perceive | First, it gathers data from sources such as sensors, databases, or user interactions
  • Reason | Then, using a large language model, it understands the task and generates strategies
  • Act | Utilising APIs, software tools, or other systems, it executes the task
  • Learn | Finally, using feedback and outcomes from previous actions, it refines its performance
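
In code, that loop might look something like the sketch below. It’s purely conceptual: every function body is a trivial stand-in for the capability named in its comment, not a real library call.

```python
# Conceptual perceive–reason–act–learn loop. Every function body here is a
# trivial stand-in for the real capability named in the comment.
def perceive() -> dict:
    # In practice: gather data from sensors, databases, or user interactions
    return {"unprocessed_tickets": 3}


def reason(goal: str, observations: dict, memory: list) -> str:
    # In practice: a large language model turns the goal and context into a plan
    return f"handle {observations['unprocessed_tickets']} tickets to progress '{goal}'"


def act(plan: str) -> dict:
    # In practice: call APIs, software tools, or other systems
    print("Executing:", plan)
    return {"goal_achieved": True}


def learn(plan: str, outcome: dict) -> dict:
    # In practice: feed outcomes back to refine future decisions
    return {"plan": plan, "success": outcome["goal_achieved"]}


def run_agent(goal: str, max_iterations: int = 10) -> None:
    memory = []
    for _ in range(max_iterations):
        observations = perceive()
        plan = reason(goal, observations, memory)
        outcome = act(plan)
        memory.append(learn(plan, outcome))
        if outcome["goal_achieved"]:
            break


run_agent("clear the support queue")
```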

From a technical perspective, each Agent has three core components:

  • A Large Language Model which powers reasoning, language understanding, and planning
  • Instructions to define the Agent’s goals, behaviour, and constraints
  • Tools which allow the agent to retrieve knowledge or take actions

In reality, this relies on a whole host of underlying technology: OpenAI GPT, Claude, Mistral, or Gemini for the Large Language Model (LLM); the likes of LangChain, AutoGPT, MetaGPT, or CrewAI for the autonomous agent framework that enables multi-step task execution and decision-making; and integration tools and APIs – for example Zapier, REST APIs, and browser automation – that allow Agents to interact with external systems.

Then there are vector databases like Pinecone or Weaviate, so Agents can retain information across tasks and sessions; Reinforcement Learning libraries (such as Ray RLlib and OpenAI Gym), which train Agents to make better decisions; and Orchestration Platforms such as Microsoft Azure AI Studio, HuggingFace Transformers, and IBM Watson Orchestrate to coordinate multiple Agents and workflows.

At first glance that probably sounds rather daunting, but that’s where technologies like Microsoft’s Azure AI Foundry Agent Service come in. Essentially, it’s a fully managed platform for building, deploying, and scaling Agentic AI systems, which we look at in further detail in the second blog of this series.

Where might you use Agentic AI?

In customer service, Agentic AI is already being used to handle refunds, schedule appointments, and resolve issues proactively. In finance, we’re seeing Agentic AI used to assess creditworthiness, automate mortgage or loan approvals, and manage aspects of compliance. In healthcare, it can extend accessibility by providing after-hours appointment booking and triaging patient queries, while the UK Government’s Department for Science, Innovation and Technology is exploring how Agentic AI can help people access and register for a range of public services.

With capabilities like these, it follows that you can build personal productivity agents for your knowledge workers (or yourself!) that manage calendar, emails, routine tasks, and to-do lists.

What must you be careful of?

There are also a series of potential ‘gotchas’ with Agentic AI: questions that need careful consideration.

What should or shouldn’t you use Agentic AI for? What boundaries do you want to set? Does it affect IT security? How do you avoid bias and ensure fairness? What governance measures will be required? And what of compliance?   

Most importantly of all, how do you ensure that, as a result of all this work, Agentic AI delivers genuine value to your organisation? We’ll look at this in detail in our third and final blog of the series, when we set out six key considerations for getting started with Agentic AI.

Next steps

Be sure to follow up this blog with the following two parts, but if you’re keen to learn more directly from an expert, then we’re here to help. Request a call with a member of our team today and find out how Cloud Direct can help you successfully benefit from the use of Agentic AI.

Microsoft Azure offers an incredibly rich ecosystem for building, deploying, and managing applications. However, without a strategic approach to resource management, inefficiencies that lead to bill shock or under-utilisation can creep in. Maximising your Azure investment means actively identifying and resolving these inefficiencies. 

Here are five key steps to help you spot inefficiencies and drive continuous optimisation within your Azure environment.

Step 1: Gain comprehensive visibility with Azure monitoring and cost management 

You can’t optimise what you can’t see or understand. The foundational step to tackling Azure inefficiencies is establishing a clear, holistic view of your entire Azure footprint, encompassing performance, health, and cost allocation. 

Action: Leverage Azure Monitor to collect and analyse metrics and logs from all your Azure resources (VMs, App Services, databases). Utilise Log Analytics Workspaces for centralised log collection and analysis and, for application-level insights, deploy Application Insights. Critically, use Azure Cost Management + Billing for detailed cost analysis, understanding spending patterns by resource group, tag, and service. 

Optimisation Focus: Proactive identification of idle resources, performance bottlenecks indicated by high latency or low throughput, and unexpected cost spikes. Azure Cost Management + Billing will highlight where your money is going and identify anomalous spending. 
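
As one example of the kind of check this visibility enables, the hedged sketch below uses the azure-monitor-query Python package to pull average CPU utilisation from a Log Analytics workspace and flag VMs that look underused; the workspace ID is a placeholder, and the KQL assumes your VMs send performance counters to the Perf table.

```python
# Hedged sketch: flag potentially underused VMs from Log Analytics data.
# Assumes VMs send the Perf table to the workspace; adjust the query to your setup.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your-log-analytics-workspace-id>"  # placeholder

query = """
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by Computer
| where AvgCpu < 10
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=7))

for table in response.tables:
    for computer, avg_cpu in table.rows:
        print(f"{computer}: {avg_cpu:.1f}% average CPU over 7 days – candidate for right-sizing")
```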

Step 2: Right-size and rationalise your Azure resources 

Over-provisioning is a primary driver of unnecessary costs in Azure. Many resources are initially deployed with more capacity than required, leading to consistent under-utilisation. 

Action: Regularly review the “Cost” and “Performance” recommendations within Azure Advisor, which provides personalised, actionable advice to right-size VMs, Azure SQL Databases, and other resources based on actual usage patterns. Downsize VM SKUs, adjust Azure SQL Database service tiers (from General Purpose to Basic/Standard if appropriate, for example), and utilise Blob Storage tiers (Hot, Cool, Archive) to match data access frequency. Identify and decommission unused resources like orphaned disks, unattached public IP addresses, and idle ExpressRoute circuits, and use Azure Resource Graph queries to find “zombie” resources. 

Optimisation Focus: Directly reduce infrastructure costs by matching resource allocation precisely to demand, eliminating waste from over-provisioning and idle assets. 
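
For the “zombie” resource hunt, a hedged sketch using the azure-mgmt-resourcegraph Python package might look like the following; the unattached-disk query is a common example, and the subscription ID is a placeholder.

```python
# Hedged sketch: find unattached managed disks ("zombie" resources) via Azure Resource Graph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

query = """
Resources
| where type =~ 'microsoft.compute/disks'
| where properties.diskState == 'Unattached'
| project name, resourceGroup, location
"""

client = ResourceGraphClient(DefaultAzureCredential())
result = client.resources(QueryRequest(subscriptions=[SUBSCRIPTION_ID], query=query))

# Depending on the SDK's default result format, each entry is typically a dict
# of the projected columns.
for disk in result.data:
    print(disk)
```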

Step 3: Implement strong Azure governance and cost policies 

Without robust governance, “Azure sprawl” can quickly lead to an uncontrolled explosion in costs. Establishing clear policies and processes for resource provisioning, tagging, and budget management is critical to prevent inefficiencies before they take hold. 

Action: Define a comprehensive Azure Tagging strategy (for cost centres, environments, owners) and enforce it using Azure Policy to ensure resources are consistently tagged for granular cost reporting in Azure Cost Management. Set up budgets and spending alerts in Azure Cost Management + Billing at the subscription or resource group level. Implementing Azure Policy will also enable you to enforce compliance, such as restricting regions, disallowing specific resource types, or automatically shutting down VMs in Dev/Test subscriptions after hours, while Azure DevTest Labs, which offers built-in auto-shutdown features, can be used for development environments. 

Optimisation Focus: Gaining control over spending, improving accountability, preventing shadow IT, and ensuring that Azure resources are provisioned and used according to organisational guidelines. 

Step 4: Leverage Azure automation and Infrastructure-as-Code for efficiency 

Manual processes in Azure are not only time-consuming but also prone to errors and inconsistency, which only hinder efficiency. Automation is key to streamlining operations and ensuring consistent, cost-effective deployments. 

Action: Automate routine tasks using Azure Automation Runbooks (PowerShell, Python) for things like scheduled VM shutdowns, patch management, and backup operations. Implement Infrastructure-as-Code (IaC) using ARM Templates or Bicep to define and deploy your Azure infrastructure in a consistent, repeatable, and version-controlled manner. Use Azure DevOps or GitHub Actions for CI/CD pipelines to automate deployments. For dynamic workloads, configure Azure Auto-scaling for Virtual Machine Scale Sets, App Services, and Azure Kubernetes Service (AKS). 

Optimisation Focus: Reducing operational overhead, minimising human error, ensuring consistent and optimised configurations, and enabling your Azure infrastructure to dynamically adapt to demand, thereby using resources more efficiently. 
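
To illustrate the scheduled-shutdown idea – which could run as an Azure Automation Python runbook – here’s a hedged sketch using the azure-mgmt-compute package; the resource group name is a placeholder and the pattern assumes a recent SDK version with begin_deallocate.

```python
# Hedged sketch: deallocate every VM in a Dev/Test resource group out of hours.
# Could be scheduled as an Azure Automation Python runbook or a cron job.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-devtest"                # hypothetical Dev/Test resource group

compute_client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in compute_client.virtual_machines.list(RESOURCE_GROUP):
    # begin_deallocate stops the VM *and* releases the compute, so you stop paying for it
    poller = compute_client.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name)
    poller.result()  # wait for completion; drop this line to fire-and-forget
    print(f"Deallocated {vm.name}")
```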

Step 5: Foster a culture of continuous Azure optimisation 

Cloud optimisation in Azure isn’t a one-time project; it’s an ongoing commitment. The dynamic nature of the cloud and evolving business needs require a continuous loop of review, refinement, and improvement. 

Action: Embrace FinOps principles, fostering collaboration between finance, operations, and development teams to drive cost accountability and efficiency. Schedule regular reviews of Azure spend and performance metrics using Azure Cost Management dashboards. Continuously monitor Azure Advisor for new recommendations. Stay informed about new Azure services, features, and pricing models (such as Azure Reservations for significant savings on consistent workloads, Azure Hybrid Benefit for existing Windows Server/SQL Server licences). Actively engage with Azure’s Well-Architected Framework, particularly its Cost Optimisation pillar. 

Optimisation Focus: Embedding cost awareness and efficiency into your organisational culture, ensuring that optimisation becomes a routine part of your Azure operations, leading to sustained cost savings and improved performance over time. 

The Result

By systematically taking these five actions, you can effectively identify and eliminate inefficiencies and create a more cost-effective, performant, and resilient Azure environment. This proactive approach not only saves money, but also frees up resources to drive innovation and support your strategic goals on the Microsoft Azure platform. 

If you want to learn more about identifying and resolving inefficiencies in your Azure environment, then an Innovation Workshop will provide a platform for us to collaboratively examine your business context and identify opportunities for maximising the return on your Azure spend.