AI Enablement in Financial Services: A Sector Playbook for FTSE 100 Banks, Insurers, and Asset Managers
There is a quiet asymmetry in financial services right now. Most large banks, insurers, and asset managers have been doing "AI" for years — fraud models, credit scorecards, robotic process automation, customer chatbots, GenAI pilots. Some of it works. Some of it has produced real efficiency. None of it has materially changed the way these institutions actually operate.
Meanwhile, a smaller group of FS firms — and a much larger group of fintech and insurtech challengers — are doing something structurally different. They are not adding AI to their existing operations. They are rebuilding their operating models around AI from the ground up. And the gap between the two groups is starting to widen in ways that will be very hard to close once they become visible in the macro numbers.
This is a sector playbook for the second group. Or, more accurately, for the leaders in the first group who have realised that the second posture is the one they need.
It is opinionated. It is written for COOs, Chief Transformation Officers, and Chief Risk Officers in regulated FS environments — the people who have to make this work under PRA SS1/23, FCA SYSC, the EU AI Act, DORA, and a customer base that is one bad headline away from a regulatory complaint. It assumes you have already read What AI Enablement Actually Means and want to know what to actually do about it inside a regulated FS firm.
Why financial services is structurally suited to AI enablement (and why it has not yet happened)
Financial services should be the easiest sector for AI enablement. The work is data-rich. The decisions are high-frequency. The cost of error is high enough to justify investment. The processes are well-defined. The volumes are large. The customer base is captive. There is no obvious technological constraint.
And yet almost every COO we work with in FTSE 100 banks and large insurers describes the same picture: lots of AI activity, very little structural change. Cost-to-income ratios that are flat or improving slowly. Operations headcount that has not meaningfully changed. Workflow throughput that is incrementally faster but qualitatively the same.
The reason is not that the technology doesn't work. It is that financial services has structural features that make AI enablement specifically hard:
Legacy systems with deep dependencies. A typical Tier 1 bank runs forty to a hundred core systems, many of them decades old, with brittle integrations and undocumented data semantics. Redesigning a workflow around AI usually means renegotiating with the system that owns the upstream data, which usually means a multi-year roadmap inside another team's ownership.
Fragmented data ownership. Customer data is owned by three different teams. Risk data is owned by another two. Reference data is owned by yet another. None of them were designed to feed automated decision-making in real time, and the political work of bringing them into a common action layer is enormous.
Regulated decisions everywhere. The decisions you most want to automate — credit, pricing, fraud, AML, customer suitability — are the ones with the heaviest regulatory expectations. Every redesign has to clear a governance hurdle that was designed for an earlier era of model risk.
Risk-averse culture. Most FS firms have decades of muscle memory around "shipping carefully." That muscle memory protects them from operational disasters but it also kills the velocity that AI enablement requires.
Internal political complexity. A redesign of a real workflow touches operations, risk, compliance, technology, finance, HR, and (sometimes) the regulator's view of senior accountability. Coordinating that is the actual work — and it's organisational work, not technical work.
These are all addressable. None of them are reasons to wait. They are reasons why the firms that figure this out earlier will pull ahead, because the institutional muscle takes years to build.
The five priority value streams for AI enablement in financial services
When we start an AI enablement engagement with a financial services client, we ask the same question early: if you could redesign one priority value stream from the ground up, which one would have the highest combined impact on cost, risk, and customer outcomes? The answer varies by firm, but it almost always lands on one of five places.
1. Regulatory reporting and disclosure
Why it matters. Regulatory reporting consumes 5–15% of total operations cost in most large banks, runs on a quarterly sprint cycle that exhausts the team, and produces submissions that are increasingly being scrutinised for narrative quality rather than just numerical accuracy. Most firms run it through a mix of warehouse pipelines, hand-curated spreadsheets, and last-minute SME review.
Why it's an AI enablement opportunity. The cognitive backbone of regulatory reporting is small — data validity, anomaly detection, narrative drafting, materiality judgement, attestation — and most of that is now automatable. The blocker is not the model. The blocker is the data layer, the decision rights around materiality and attestation, and the governance evidence the regulator wants to see.
What the redesigned version looks like. A normalised reporting layer fed continuously from source systems. Anomaly detection running constantly, surfacing exceptions before the close. Narrative drafts generated automatically from the underlying data, with SMEs reviewing the draft and the underlying anomalies. Materiality and attestation stay human, with structured review interfaces. Cycle time goes from weeks to days. Team size shrinks but each role becomes more specialised.
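To make the anomaly-detection piece concrete, here is a minimal sketch of the pattern, assuming the normalised reporting layer exposes each line item as a simple series of period values. The line items, history shape, and z-score threshold are all illustrative, not a prescription; the design point is that exceptions surface continuously, before the close, rather than during last-minute SME review.

```python
from statistics import mean, stdev

# Hypothetical shape of the normalised reporting layer: one series of
# period values per line item, refreshed continuously from source systems.
history = {
    "rwa_credit": [410.2, 415.8, 409.9, 412.3, 580.1],  # latest value is suspect
    "rwa_market": [88.4, 87.9, 90.2, 89.5, 91.0],
}

Z_THRESHOLD = 3.0  # illustrative cut-off, calibrated with the SMEs

def surface_exceptions(history: dict[str, list[float]]) -> list[dict]:
    """Flag line items whose latest value breaks sharply from history.

    A z-score screen is the simplest possible detector; a production
    system would add seasonality, peer ratios, and lineage checks.
    """
    exceptions = []
    for line_item, values in history.items():
        *prior, latest = values
        if len(prior) < 2 or stdev(prior) == 0:
            continue  # not enough history to judge
        z = abs(latest - mean(prior)) / stdev(prior)
        if z >= Z_THRESHOLD:
            exceptions.append({"line_item": line_item, "latest": latest,
                               "z_score": round(z, 1),
                               "action": "route to SME review before close"})
    return exceptions

print(surface_exceptions(history))  # flags rwa_credit, not rwa_market
```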
Regulatory frame. PRA SS1/23 (model risk for the anomaly model), BCBS 239 (data quality and lineage), FCA SYSC (adequate systems), and the relevant prudential reporting rules. Done well, the redesign actually improves your supervisory posture — because the data lineage and decision logs are observable in production.
2. KYC, AML, and customer due diligence
Why it matters. KYC and AML consume a substantial fraction of operations budget, with industry-wide false positive rates above 90%. Most analysts spend their time on cases that turn out to be fine. Customers get frustrated with onboarding friction. The compliance team is permanently understaffed.
Why it's an AI enablement opportunity. This is one of the value streams where the gap between augmentation and redesign is most stark. Most banks have added AI tools to KYC: document classification, sanction screening, alert prioritisation. None of these change the workflow shape. The redesigned version handles routine cases end-to-end and concentrates analyst attention on the genuinely ambiguous ones.
What the redesigned version looks like. Documents ingested as they arrive. Identity verification and risk signals enriched in real time from a normalised customer wide-row. Triage decisions generated by the system with explicit confidence thresholds. Routine cases handled end-to-end with full audit trails. Analyst attention concentrated on novel patterns, high-risk profiles, and cases where the system is uncertain. The team is smaller but each role is more specialised, more consequential, and harder to do.
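As a sketch of what "triage decisions with explicit confidence thresholds" can look like in practice: the threshold values, route names, and function signature below are hypothetical. The point is that the routing logic is explicit, owned by governance, and logged, rather than living in analyst habit.

```python
from datetime import datetime, timezone

# Illustrative thresholds. In practice these are owned and reviewed
# by the second line as part of model risk governance.
AUTO_CLEAR_CONFIDENCE = 0.97
ESCALATE_RISK_SCORE = 0.80

def triage_case(case_id: str, confidence: float, risk_score: float) -> dict:
    """Route a KYC case on explicit, logged thresholds.

    Routine cases clear end-to-end; high risk and genuine ambiguity
    go to a human. Every routing decision lands in the audit trail.
    """
    if risk_score >= ESCALATE_RISK_SCORE:
        route = "enhanced_due_diligence"   # analyst plus senior sign-off
    elif confidence >= AUTO_CLEAR_CONFIDENCE:
        route = "auto_clear"               # handled end-to-end by the system
    else:
        route = "analyst_review"           # the system is uncertain
    return {
        "case_id": case_id,
        "route": route,
        "confidence": confidence,
        "risk_score": risk_score,
        "thresholds": {"auto_clear": AUTO_CLEAR_CONFIDENCE,
                       "escalate": ESCALATE_RISK_SCORE},
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }

print(triage_case("kyc-00417", confidence=0.99, risk_score=0.12)["route"])
# -> auto_clear
```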
Regulatory frame. EU AI Act (high-risk classification for biometric identity), PRA SS1/23 (model risk for the triage and risk-scoring models), FCA SYSC and the FCA's AML expectations, DORA for any third-party identity vendors. The audit trail requirements are demanding but the redesign actually produces more defensible evidence than the legacy workflow, because every decision is logged with full lineage.
3. Credit decisioning and underwriting
Why it matters. Credit decisioning is the canonical FS AI use case. Almost every bank has a credit model. Most of them are decades old in their architecture, run and managed in isolation from the underlying customer relationship, and updated on long cycles. The friction between credit decisioning and the rest of the customer journey is substantial.
Why it's an AI enablement opportunity. The model is rarely the bottleneck. The bottleneck is the decision rights between the model and the human (relationship manager, branch staff, automated workflow), the data layer that feeds the model (customer history, transaction data, external bureau data), and the governance machinery around model risk.
What the redesigned version looks like. Customer wide-row with all decision-relevant context — internal history, transaction signals, bureau data, fraud flags — refreshed continuously. Credit decisions generated by default, with the system handling the routine 70–80% end-to-end. Relationship manager interventions concentrated on borderline cases and customer-facing situations where judgement matters. Override loops feed structured signal back into the model. Decision logs are queryable on demand for any specific approval or denial.
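A sketch of the decision-log and override-loop pattern; the field names and reason codes are illustrative, not a prescribed schema. The useful property is that a relationship-manager override becomes a structured signal the model team can retrain against, and every decision row is queryable after the fact.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class CreditDecision:
    """One row in the decision log, queryable on demand for any
    approval or denial. Field names are illustrative."""
    application_id: str
    system_decision: str       # "approve" | "decline" | "refer"
    model_version: str
    inputs_snapshot_ref: str   # pointer to the wide-row as it was scored
    final_decision: str = ""
    override_reason: str = ""  # structured reason code, not free text

DECISION_LOG: list[CreditDecision] = []

def record_override(decision: CreditDecision,
                    human_decision: str, reason_code: str) -> None:
    """Capture a relationship-manager override as structured signal.

    Aggregated reason codes are what the model team retrains against:
    the override loop, not an ad-hoc email thread.
    """
    decision.final_decision = human_decision
    decision.override_reason = reason_code
    DECISION_LOG.append(decision)

d = CreditDecision("app-2291", "decline", "pd-model-v14",
                   "wide-row/app-2291@2026-04-06")
record_override(d, human_decision="approve", reason_code="RECENT_INCOME_EVENT")
print(json.dumps(asdict(DECISION_LOG[-1]), indent=2))
```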
Regulatory frame. EU AI Act (high-risk for credit scoring and creditworthiness assessment — explicitly named), PRA SS1/23 (model risk core territory), FCA Consumer Duty (good outcomes for retail customers), and the relevant fairness/discrimination rules in your jurisdiction. The Consumer Duty dimension is critical — your governance has to demonstrate not just that the model works but that it produces good outcomes for vulnerable customers.
4. Fraud and transaction monitoring
Why it matters. Fraud and transaction monitoring already use ML extensively — most large banks have had fraud models in production for years. The gap is not the model; it is the operating model around the model. Investigators are still drowning in alerts, the override and feedback loops are usually broken, and the mapping between the model's confidence and the workflow's response is often poorly designed.
Why it's an AI enablement opportunity. This is the value stream where the data flywheel is most achievable, because labelled outcomes (fraud / not fraud) come back relatively quickly. A working flywheel here can produce fraud detection performance that is structurally better than anything an off-the-shelf vendor can offer — because it has been trained on your specific customer base, transaction patterns, and fraud typologies.
What the redesigned version looks like. Continuous monitoring of all transactions. Decisions generated by default, with the system blocking, holding, or releasing based on confidence. Investigators handle exceptions and complex cases. Investigator decisions flow back as labelled training data, structured and curated. The model improves continuously from operator feedback. Investigator team is smaller but more specialised.
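What "investigator decisions flow back as labelled training data" can mean in code, as a minimal sketch with assumed field names. The discipline that matters is joining the label to the features as they were at scoring time, so the flywheel learns from real outcomes without leaking later information.

```python
# Hypothetical flywheel step: the investigator's disposition is joined
# back to the feature snapshot the model actually scored, producing a
# curated labelled row rather than an unstructured case note.
TRAINING_QUEUE: list[dict] = []

def close_investigation(alert: dict, disposition: str) -> None:
    """Turn an investigator decision into a labelled training example."""
    assert disposition in {"fraud", "not_fraud"}
    TRAINING_QUEUE.append({
        "features": alert["features_at_scoring"],  # as of scoring time: no leakage
        "model_score": alert["score"],
        "label": disposition,
        "label_latency_days": alert["days_open"],  # how quickly the loop closes
    })

alert = {"score": 0.91, "days_open": 2,
         "features_at_scoring": {"amount": 4200.0, "new_payee": True}}
close_investigation(alert, "fraud")
print(len(TRAINING_QUEUE), TRAINING_QUEUE[0]["label"])  # -> 1 fraud
```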
Regulatory frame. PRA SS1/23 (model risk), FCA SYSC, DORA for any cloud-based ML platform. The override rate and feedback loop are not just operational metrics — they are evidence of effective governance.
5. Customer operations and complaint handling
Why it matters. Customer operations has been the entry point for GenAI in most banks because the use cases are visible, the volumes are high, and the cost-per-contact is well understood. Almost every bank has piloted a chatbot, a voice agent, or an agent assist tool. Most of these pilots produce real efficiency gains and very little structural change.
Why it's an AI enablement opportunity. Under FCA Consumer Duty, customer outcomes are now a first-class regulatory concern. AI-driven customer ops that produce poor outcomes for vulnerable customers will be scrutinised. The redesigned version is not "more chatbot." It is a customer ops workflow where the system handles routine cases end-to-end, agents focus on complex relationships and exception handling, complaint patterns are detected automatically and feed into product and process improvement, and the entire workflow generates evidence the firm is delivering good outcomes.
What the redesigned version looks like. Inbound contacts are categorised, enriched with customer context, and resolved by the system where possible. Agents handle exceptions, vulnerable customer situations, and complex cases. Complaint signals flow into a triage system that surfaces emerging patterns for product and ops teams. Consumer Duty outcome monitoring is continuous, not quarterly. Vulnerable customer protection is built into the workflow, not a separate review.
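A deliberately small sketch of complaint-pattern surfacing, assuming complaints are already categorised; the categories, counts, and spike ratio are invented for illustration. Production systems would cluster free-text complaints first, but the shape of the loop is the same: emerging patterns reach product and ops teams in-week, not in a quarterly report.

```python
from collections import Counter

# Invented numbers: weekly complaint counts per category versus a
# trailing baseline. Real systems would cluster free text first.
baseline = Counter({"fees": 40, "app_login": 35, "card_decline": 30})
this_week = Counter({"fees": 44, "app_login": 110, "card_decline": 28})

SPIKE_RATIO = 2.0  # hypothetical trigger for routing to product teams

emerging = [cat for cat, n in this_week.items()
            if n >= SPIKE_RATIO * max(baseline[cat], 1)]
print(emerging)  # -> ['app_login'], surfaced for immediate triage
```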
Regulatory frame. FCA Consumer Duty (core), FCA SYSC, DORA for any third-party AI platforms, EU AI Act for any biometric voice authentication. Consumer Duty is the most consequential frame here — and the one most easily underestimated by firms that think of AI as a cost-reduction tool.
What the regulatory landscape actually requires
A condensed view of the four regulatory regimes that shape AI enablement work in UK and EU financial services. (We cover these in depth in our AI Governance & Model Risk training course.)
EU AI Act. Risk-tiered. Most material FS use cases land in "high-risk" — credit scoring, biometric identification, employment decisions, AI used in essential services. High-risk obligations include risk management across the model lifecycle, training data quality, technical documentation, logging, transparency, human oversight, accuracy/robustness, conformity assessment, and post-market monitoring.
PRA SS1/23 (UK). Model Risk Management. Effective May 2024. Technology-agnostic but explicitly addresses AI/ML. Five core principles: model identification and risk classification, governance, model development and use, independent validation, and model risk mitigants. The most directly applicable framework for UK-regulated firms.
FCA SYSC and Consumer Duty. Principles-based UK posture. SYSC requires adequate systems and controls. Consumer Duty (in force July 2023) requires firms to deliver good outcomes for retail customers. AI work in customer-facing contexts has to demonstrate Consumer Duty alignment, not just SYSC compliance.
DORA. EU Digital Operational Resilience Act, applicable from January 2025. Captures AI vendors as critical ICT third parties, imposes contract terms, exit strategies, supervisory access, incident reporting, and operational resilience testing.
The pattern across all four: regulators are converging on the expectation that AI in regulated FS has named accountability, observable lineage, measurable controls, documented decision rights, and demonstrable customer outcome monitoring. None of this is about preventing AI deployment. It is about ensuring deployment is defensible. Done well, governance accelerates rather than blocks the work.
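To make "evidence available on demand" tangible, here is a sketch of a minimal evidence record carrying the five properties above. The field names and values are illustrative, not any regulator's schema.

```python
# A minimal evidence record carrying the five properties named above.
# Every field name and value here is illustrative, not a regulatory schema.
evidence = {
    "accountable_owner": "SMF24, Chief Operations",              # named accountability
    "data_lineage": ["core-banking.txns", "bureau.feed-v3"],     # observable lineage
    "controls": {"override_rate": 0.06, "drift_check": "pass"},  # measurable controls
    "decision_rights": "system decides routine tiers; human above",  # documented rights
    "outcome_monitoring": {"vulnerable_flag_rate": 0.02},        # customer outcomes
}

def evidence_on_demand(record: dict, fields: list[str]) -> dict:
    """Answer a supervisory request from the log, not a documentation scramble."""
    return {f: record[f] for f in fields if f in record}

print(evidence_on_demand(evidence, ["accountable_owner", "data_lineage"]))
```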
A 36-month transformation arc for a Tier 1 financial services firm
The realistic timeline for serious AI enablement in a large FS environment is 24–36 months. There are no shortcuts that survive enterprise prioritisation cycles. Here is the arc that consistently works in our engagements.
Phase 1 — Diagnostic and one priority workflow (Months 0–6)
Build the use case inventory (including shadow AI and vendor-embedded AI). Triage against EU AI Act and SS1/23. Pick one priority workflow that matters to the business — usually regulatory reporting or KYC, because the ROI is large and the data layer is at least partially understood. Redesign that workflow end-to-end, including its data layer slice, decision rights, and governance. By end of month 6, that workflow is live with measurable outcomes and full evidence.
The political work in this phase is enormous and the technical work is moderate. Most of what determines success is whether the executive sponsor holds political space through the 6 months when no visible outcome has yet landed.
Phase 2 — Scale to adjacent value streams (Months 6–18)
Apply the playbook to 2–3 additional workflows in adjacent domains, reusing as much of the data layer and governance machinery as possible. Build the second-line embedded function in earnest. Stand up the central AI governance committee with actual substance. Begin the talent shift — hiring workflow designers and system supervisors, retraining operators into exception handlers.
By end of month 18, the firm has 3–5 workflows operating AI-native, the data layer is starting to look like a platform, and the operating model changes are starting to become permanent (new metrics, new roles, new career paths).
Phase 3 — Embed and compound (Months 18–36)
AI enablement stops being a programme and starts being how the firm operates by default. New workflows are designed AI-native from the start. The data flywheel begins to compound. Multiple model risk committees are operating. Audit has the competence to perform substantive AI audits. The talent model is structurally different from where you started.
The firm now has compounding advantage that competitors will struggle to close. The cost to a Tier 1 bank of catching up after waiting another 24 months will be substantially higher than the cost of starting now.
Common traps in financial services specifically
A few failure modes we see consistently in FS-specific engagements:
Pilot proliferation. The bank has 30+ AI pilots across different teams, none of them coordinated, most of them stuck. The fix is not to launch more pilots. The fix is the compounding audit — kill the ones that won't compound, scale the ones that will, and rebuild the rest under the new operating model.
The "data foundation programme that runs forever" trap. A multi-year data warehouse modernisation programme that has no business outcome attached. It will be cut at the next budget cycle. Anchor the data work to a real workflow.
Governance theatre. Big committees, lots of documentation, no operational substance. Audit will see through this. So will the regulator. The fix is embedded governance — second-line specialists in the delivery teams, decision logs as a by-product of build, evidence available on demand.
"We'll do it after the next regulatory change." This is how decades pass without structural change. There is always another regulatory change. Start now; design for resilience to the next change.
Confusing AI strategy with AI tools list. A 47-slide deck of vendor logos is not an AI strategy. The strategy is the answer to four questions: which workflows do we redesign first, what data layer do they need, how does the operating model change, and how do we govern the result? If your "AI strategy" doesn't answer those four questions, it isn't one.
The startup angle — and why it matters even for incumbents
The hardest thing for FS incumbent leaders to hear is that AI enablement is structurally easier for startups than for them. AI-native fintechs and insurtechs can design workflows, data layers, decision rights, and operating models from scratch. They don't pay the organisational tax of unwinding decades of optimisation around a different operating model. Their cost-to-income ratios, their cycle times, and their data flywheels will compound faster than incumbents can match.
The implication is not that incumbents should give up. The implication is that incumbents should start the work earlier to compensate for the structural disadvantage. Every quarter spent in augmentation mode is a quarter the gap widens. Every quarter spent in serious enablement work is a quarter the gap closes.
For most Tier 1 firms, the realistic frame is: we cannot move as fast as a fintech, but we can be substantially ahead of our incumbent peers, and that is the competitive comparison that actually matters. The institutions which start now will have a 24-month head start over their peers in two years. That head start compounds.
Where to start
The fastest way to find out where you actually are on this curve is the AI Enablement Maturity Diagnostic — a 25-question self-assessment across the five pillars of enablement that produces a maturity score, a per-pillar breakdown, and a prioritised set of next steps. Most FS leaders are surprised by their own results: not because they're worse than they thought, but because the gap between the augmentation work they've already done and the redesign work they haven't is sharper than they expected.
If the diagnostic confirms what you suspect, our AI Enablement service is built around exactly this work — five pillars, one priority workflow, embedded governance, sequenced over 12–24 months. It is not a tools roadmap. It is the operating-model redesign the rest of the sector is going to wish they'd started earlier.
We have also published sector-specific deep dives that go further on the value streams, regulatory frame, and operating model patterns relevant to your part of the market:
- AI Enablement for Banking — KYC/AML, credit decisioning, fraud, regulatory reporting, customer ops under PRA SS1/23, FCA Consumer Duty, and the EU AI Act
- AI Enablement for Insurance — Underwriting, claims, distribution, regulatory reporting, fraud under Solvency II, IFRS 17, and the EU AI Act
- AI Enablement for Asset Management — NAV production, middle office, distribution, regulatory reporting, ESG operations under AIFMD, UCITS, MiFID II, and SFDR
For supporting depth on the structural argument, see the pillar essay on What AI Enablement Actually Means, the deep dive on the data layer constraint, and the practical workflow redesign playbook.
The institutions that will lead financial services in 2030 are the ones rebuilding their operating models around AI in 2026. The work is hard, the timeline is long, and the political muscle takes years to build — which is exactly why it is the best time to start.
Ready to do the structural work?
Our AI Enablement engagements are built around the five pillars in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.
Explore the AI Enablement service

Related insights
From Augmentation to Redesign: A Playbook for AI-Native Workflows
Most enterprises are 'adding AI' to workflows that were designed for humans. The compounding gains come from rebuilding the workflow itself. Here's how.
April 06, 2026

The Data Layer Is the Constraint That Determines Everything in Enterprise AI
Most enterprise AI initiatives fail at the data layer, not the model. Here's why data captured for reporting can't be acted on by AI — and what to redesign so it can.
April 06, 2026

Decision Rights in an AI-Native Operating Model
When systems generate output by default, who decides what — and who is accountable when the decision is wrong? A practical framework for redrawing decision rights in regulated environments.
April 06, 2026