From data to decisions: How AI is reshaping financial services


You’ve probably heard it said that ‘there are two types of businesses: those that get ahead and those that get left behind.’ It’s a prediction that’s looking increasingly accurate, particularly in financial services. Here we consider how and where AI is being used in the industry, its broader cultural implications, and what you should do to prepare.

“I think there are three main drivers in our industry right now: one is scale, one is complexity, and one is the risk of becoming irrelevant,” explains Kim Sgarlata, CEO of fiduciary services provider Oak Group.

AI has the potential to assist financial services firms with all three of those challenges.

When we talk about AI in financial services, we’re referring to two overlapping revolutions:

  • The increasingly established use of data-driven AI, such as data analytics, machine learning, and task automation
  • The emerging use of ‘agentic AI’ to autonomously and intelligently execute workflows.

However, don’t think of AI as just a productivity tool. It is increasingly central to competitiveness, compliance, and customer trust within a highly regulated, reputation-sensitive environment. To understand why and how, let’s take a deeper dive into these two areas.

Data-driven AI

Good data is fundamental to the success of data-driven AI initiatives, and for many its existence is far from a given.

“While storing data has become easier, ensuring it’s clean, governed, and consistent across the organisation is the real challenge today,” explains Jonathan Ball, Data Architect at 7IM.

But, underpinned by reliable data, AI can deliver improvements in:

  • Operational resilience and efficiency
    Using AI to pre-emptively identify and fix IT vulnerabilities before they cause service outages, for real-time anomaly detection and predictive analytics in financial crime prevention, and for automating manual processes.
  • Customer insight and experience
    Using a customer’s profile, goals, and risk appetite to provide personalised financial advice, or sentiment and behavioural analytics to understand customer needs and risk behaviours from transaction and communication data.
  • Decision intelligence and risk modelling
    Identifying emerging trends, liquidity risks, and portfolio optimisation opportunities; improving credit risk modelling and underwriting; and automating regulatory compliance, such as ESG reporting and the early detection of conduct breaches.
  • Innovation and product development
    Personalising insurance and investment products to an individual customer’s needs or drawing new insights from customer data to create new services or partnerships.

The role of Agentic AI

Rather than merely generating an output, agentic AI can plan, reason, and act autonomously to achieve a goal – theoretically without human oversight. However, the governance requirements of financial services mean many firms are cautious about adoption. So, it’s likely that we’ll see a form of ‘bounded’ autonomy, in which systems act within well-defined compliance and ethical frameworks.

Industry watchers are already talking of:

  • Compliance monitoring, with AI that identifies potential issues and initiates remediation
  • Autonomous trading agents that operate within tightly governed limits
  • Customer service agents that manage whole case journeys (eg from claim to resolution), albeit with human oversight
  • Portfolio management copilots that propose and rebalance strategies in line with ethical and regulatory constraints.

Over time, it’s likely that many more use cases will emerge.

Cultural considerations

In financial services, perhaps more than anywhere else, cultural and ethical considerations are as important as the technology itself. Trust, regulation, and human judgement are integral to how the sector operates.

Here are five key areas for consideration.

  • Trust and transparency
    Your organisation’s credibility will be eroded if either customers or regulators mistrust your use of AI. The technology must be understood, and teams must be able to explain why a model makes a decision, such as a loan approval or a risk flag.
  • Human accountability
    AI shouldn’t be seen as the ultimate decision maker, but rather as something that makes decisions humans remain accountable for – just like a well-trained, more junior member of staff. This means adopting a human-oversight mindset, establishing clear ownership, and encouraging people to challenge AI output.
  • Data ethics and fairness
    You’ll want your use of AI to reflect your organisation’s stance on bias, discrimination, and data misuse. Certainly, those involved in developing your AI uses must be acutely aware that the end product will reflect any biases in its training data – but you should go further than this. Embed data ethics reviews into model design and discuss what’s fair, rather than merely meeting legal requirements.
  • Involve subject matter experts
    AI success is heavily dependent on combining technology expertise with the knowledge of business and regulatory specialists. Bring data scientists, compliance officers, product managers, and customer advocates together in cross-functional teams. Encourage mutual learning and a shared language so these teams don’t degenerate into ‘us and them’, tech-versus-business silos.
  • The human touch
    Consideration of AI use can easily become very process oriented, but it can also trigger fear and uncertainty for staff. This needs to be sympathetically addressed. It’s also important to recognise that AI isn’t static and that culture needs to evolve with it. This may include ongoing training on responsible AI, bias awareness, and data handling, as well as changing your organisation’s measures of staff performance to promote explainability and critical thinking around new AI capabilities.

“The cost of AI is an investment, but one that pays off in saved time, better decision-making, and automation of repetitive tasks,” notes Matthew Ebo, Assistant Strategic Insights Manager, at Lloyds Banking Group.

Moving forwards

If you don’t yet feel fully ready to take advantage of AI, you’re not alone. EY’s ‘State of Financial Services AI transformation 2025’ survey revealed that only 9 per cent of respondents feel they are ahead of their peers in AI adoption. Yet, in the same survey, 61 per cent of UK and European executives said they expect AI to have ‘significant impact’.

So, what should you do if you want to improve your organisation’s AI preparedness?

Register to attend ‘Shards of Innovation: AI and Data-Driven Changes for Financial Services’. At this punchy, half-day briefing you’ll hear:

  • Microsoft’s Director of Technology Strategy outline how AI will rewire your business
  • Industry panellists discuss how financial services can balance rapid AI adoption with trust, compliance and transparency
  • Tom Glossop, 7IM’s Head of Developments, share the firm’s adoption experiences
  • Cloud Direct’s Dan Knott outline practical steps to accelerate your journey.

Visit the event page to register your interest, or use the form below to request an introductory call with one of our subject matter experts, and find out how Cloud Direct can help you successfully benefit from AI.

Talk to our experts

Get a call back from one of our team to talk about your business.

Want to learn more?

Register your interest in attending our next FSI event, hosted at The Shard.

Register

Download our Data & AI Playbook

Hear more from the contributors to this blog in our downloadable guide.

Download now