Module 2

The Regulatory Landscape — EU AI Act, PRA SS1/23, FCA, DORA

What the major financial-services AI regulations actually require, where they overlap, and which parts shape day-to-day governance decisions.


Why this module covers four regulations

If you operate in UK or EU financial services, four regulatory regimes shape what you can and cannot do with AI: the EU AI Act, PRA SS1/23, FCA SYSC + Consumer Duty, and DORA. There are others — GDPR, BCBS 239, the SEC's expectations in the US, sector-specific rules — but these four are the ones that consistently come up in AI governance design.

This module is not a comprehensive legal summary. It is a working operator's view: what each regulation actually requires you to do, where they overlap, and which parts will most shape your day-to-day decisions. For the legal detail, you will still need your compliance team and external counsel. But you should leave this module able to walk a regulator through the core requirements without notes.

EU AI Act — the risk-tiered approach

The EU AI Act is the world's first comprehensive AI regulation, finalised in 2024 with phased implementation through 2026 and beyond. Its core mechanism is a risk tiering system that classifies AI systems into four buckets and applies different obligations to each.

Unacceptable risk. AI systems prohibited outright — social scoring by governments, certain biometric surveillance, manipulative or exploitative AI. Not relevant for most financial services use cases.

High risk. This is where most material financial services AI use cases land. Specifically called out: credit scoring, creditworthiness assessment, biometric identification, employment-related AI (hiring, performance evaluation), and AI used in essential services. The obligations are substantial:

  • Risk management system across the model lifecycle
  • High-quality training, validation, and testing data with documentation
  • Detailed technical documentation
  • Logging and traceability of operations
  • Transparency and information to users
  • Human oversight measures
  • Accuracy, robustness, and cybersecurity requirements
  • Conformity assessment before market placement
  • Post-market monitoring and incident reporting

Limited risk. AI systems with specific transparency obligations — chatbots must disclose that users are interacting with AI; deepfakes must be labelled. Lighter obligations than high-risk.

Minimal risk. Most AI applications. No specific obligations beyond general legal requirements.

Practical implication for FS: if your use case touches credit, employment, or biometric ID, assume high-risk and design accordingly. Many other use cases (fraud detection, customer service automation, document processing) sit in limited or minimal risk but should still be governed proactively.
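
To show how mechanical the first pass can be, here is a hypothetical triage sketch. The tier names follow the Act; the marker lists and the `classify_use_case` helper are illustrative assumptions, not a legal test — a real classification needs review against Article 5 and Annex III.

```python
# Hypothetical first-pass EU AI Act triage. Marker lists are illustrative
# only; a real classification requires legal review, not keyword matching.
PROHIBITED_MARKERS = {"social scoring", "subliminal manipulation"}
HIGH_RISK_MARKERS = {"credit scoring", "creditworthiness",
                     "biometric identification", "hiring",
                     "performance evaluation"}
TRANSPARENCY_MARKERS = {"chatbot", "deepfake", "synthetic media"}

def classify_use_case(description: str) -> str:
    """Return a provisional EU AI Act risk tier for a use-case description."""
    text = description.lower()
    if any(m in text for m in PROHIBITED_MARKERS):
        return "unacceptable"   # prohibited outright
    if any(m in text for m in HIGH_RISK_MARKERS):
        return "high"           # full high-risk obligations apply
    if any(m in text for m in TRANSPARENCY_MARKERS):
        return "limited"        # transparency obligations only
    return "minimal"

print(classify_use_case("ML model for creditworthiness assessment"))  # high
```

Even a toy classifier like this is useful as a forcing function: every use case in the inventory gets a provisional tier that a human then confirms or overrides.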

PRA SS1/23 — model risk management for everything, including AI

SS1/23 (PRA Supervisory Statement 1/23, Model Risk Management) took effect in May 2024 and is the most directly relevant model risk regulation for UK-regulated firms. Unlike the EU AI Act, it is not AI-specific: it covers all models, in a technology-agnostic way, and it explicitly addresses AI/ML.

Five core principles:

  1. Model identification and risk classification. Maintain a model inventory; classify by materiality (a minimal inventory sketch follows this list).
  2. Governance. Clear ownership, accountability, and second-line oversight.
  3. Model development, implementation, and use. Robust development lifecycle, validation before use, ongoing monitoring.
  4. Independent model validation. Validation by people independent of development.
  5. Model risk mitigants. Documented controls for known limitations.
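
To make principle 1 concrete, here is a minimal sketch of what an inventory entry might look like. SS1/23 does not prescribe a schema; the field names and materiality tiers below are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical inventory record. SS1/23 requires an inventory and
# materiality-based classification but does not prescribe this schema.
@dataclass
class ModelRecord:
    model_id: str
    name: str
    owner: str                    # accountable owner (principle 2)
    materiality: str              # e.g. "tier1" (high) .. "tier3" (low)
    is_ai_ml: bool                # flags heightened AI/ML expectations
    vendor_model: bool            # triggers extra third-party governance
    last_validated: date | None = None                           # principle 4
    known_limitations: list[str] = field(default_factory=list)   # principle 5

inventory = [
    ModelRecord("MDL-0042", "Retail PD model", "credit-risk-team",
                materiality="tier1", is_ai_ml=True, vendor_model=False,
                last_validated=date(2024, 11, 3)),
]

# Simple second-line query: high-materiality AI/ML models never validated.
stale = [m for m in inventory
         if m.materiality == "tier1" and m.is_ai_ml and m.last_validated is None]
```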

What makes SS1/23 distinctive for AI:

  • Lifecycle expectations. Models must be governed across their full lifecycle, not just at approval. This is hard for AI/ML models that retrain and drift.
  • Validation expectations. Independent validation must cover the data, the model, and the use. For AI, this means validating the training data, the model's performance, and the way it is being used in the workflow.
  • Use of vendor models. Models from third parties (which describes many AI tools) require additional governance because the firm cannot directly inspect their internals.
  • AI/ML specifically called out. SS1/23 acknowledges that AI/ML models present heightened risks and require particular attention.

If you are UK-regulated, SS1/23 is the framework that will most directly shape your model governance. The PRA's expectations are not a checklist to satisfy — they are a baseline below which you should not operate.

FCA SYSC and Consumer Duty — the principles-based UK posture

The FCA has deliberately not produced a standalone AI rulebook. Instead, it has chosen to apply existing principles-based frameworks to AI, with sector-specific guidance and discussion papers. The two frameworks that matter most:

SYSC (Senior Management Arrangements, Systems and Controls). Requires firms to have adequate systems and controls for the work they do. Applied to AI, this means:

  • Adequate governance over AI use cases
  • Adequate controls on data feeding AI
  • Adequate human oversight of automated decisions
  • Adequate record-keeping for accountability

Consumer Duty (in force July 2023). Requires firms to deliver good outcomes for retail customers across four outcomes: products and services, price and value, consumer understanding, and consumer support. Applied to AI:

  • AI-driven decisions must produce good outcomes for customers
  • Customers must be able to understand how AI is being used in decisions affecting them
  • AI must not produce systematic harm to vulnerable customers
  • Firms must monitor outcomes and intervene when AI produces poor results
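
The last bullet is the one most amenable to tooling. A minimal sketch of what outcome monitoring might look like follows; the approval-rate metric, the segments, and the 10% gap threshold are illustrative assumptions, since the Duty specifies outcomes, not metrics.

```python
# Hypothetical outcome monitor: compare an AI-driven decision metric across
# customer segments and flag vulnerable segments that fare materially worse.
# The approval-rate metric and the 0.10 gap threshold are assumptions.
def flag_outcome_gaps(approval_rates: dict[str, float],
                      vulnerable_segments: set[str],
                      max_gap: float = 0.10) -> list[str]:
    baseline = sum(approval_rates.values()) / len(approval_rates)
    return sorted(seg for seg in vulnerable_segments
                  if baseline - approval_rates.get(seg, baseline) > max_gap)

rates = {"general": 0.72, "vulnerable_a": 0.55, "vulnerable_b": 0.70}
print(flag_outcome_gaps(rates, {"vulnerable_a", "vulnerable_b"}))
# ['vulnerable_a'] -> a prompt to investigate and, if needed, intervene
```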

The principles-based approach is a double-edged sword. There is no detailed checklist to satisfy, which gives firms flexibility — but it also means that supervisory judgement will determine whether a specific implementation is acceptable, and that judgement is harder to anticipate.

Practical implication: in the UK, you need to be able to make a clear, evidenced argument that your AI governance meets the spirit of SYSC and Consumer Duty. Generic compliance won't survive a focused supervisory visit.

DORA — operational resilience for AI as ICT

DORA (Digital Operational Resilience Act) applies to EU financial institutions and their critical ICT third-party providers from January 2025. It is not an AI regulation per se, but it captures many AI deployments because they sit on ICT infrastructure and often depend on third-party providers.

Five core areas:

  1. ICT risk management framework
  2. ICT-related incident reporting
  3. Digital operational resilience testing (including threat-led penetration testing for major firms)
  4. ICT third-party risk management
  5. Information sharing arrangements

For AI specifically, DORA matters in three ways:

  • Third-party risk. AI vendors (cloud-based model providers, fine-tuning platforms, AI-as-a-service vendors) fall squarely within DORA's ICT third-party risk requirements, and the largest may be designated critical ICT third-party providers with direct supervisory oversight. Either way, expect contractual requirements, exit strategies, and supervisory access.
  • Incident reporting. Material AI incidents (model failures, data breaches, automated decisions causing customer harm) may be reportable under DORA's incident reporting regime.
  • Resilience testing. AI workflows should be included in operational resilience testing — what happens if the model goes down? What's the fallback? A minimal fallback pattern is sketched below.
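
The fallback question has a standard engineering answer: wrap the model call with a timeout and a deterministic fallback, and exercise the fallback path as part of resilience testing. A minimal sketch, where `call_model` is a placeholder for a real (possibly third-party) model call:

```python
import concurrent.futures

def call_model(payload: dict) -> dict:
    """Placeholder for a real, possibly third-party, model call."""
    raise NotImplementedError

def score_with_fallback(payload: dict, timeout_s: float = 2.0) -> dict:
    # Route to a deterministic outcome when the model is slow or down.
    # The fact that the fallback fired should be logged: that record feeds
    # both resilience-testing evidence and incident reporting.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(call_model, payload).result(timeout=timeout_s)
    except Exception:
        return {"decision": "refer_to_human", "source": "fallback"}
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
```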

Where they overlap and where they don't

The four regulations form a partially overlapping mesh:

Concern                   | EU AI Act       | SS1/23            | FCA SYSC/CD | DORA
Use case classification   | Yes (risk tier) | Yes (materiality) | Implicit    | No
Model lifecycle           | Partial         | Yes (core)        | Implicit    | No
Independent validation    | Yes (high-risk) | Yes (core)        | Implicit    | No
Customer outcomes         | Limited         | No                | Yes (core)  | No
Third-party risk          | Partial         | Yes               | Yes         | Yes (core)
Incident reporting        | Yes             | Yes               | Yes         | Yes (core)
Operational resilience    | Limited         | Limited           | Yes         | Yes (core)

What this table tells you: there is no single framework that covers everything. A working AI governance programme has to satisfy all four (and others, including GDPR), and the design choice is whether you build a single integrated framework that satisfies all of them or a patchwork of separate compliance streams. We strongly recommend the integrated approach — patchwork compliance is the most expensive way to do this.
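
One way to see what "integrated" means in practice: keep a single control catalogue in which each control records the regimes it evidences, so a control is implemented once and reported four ways. A minimal sketch — the control names and mappings below are illustrative assumptions:

```python
# Hypothetical integrated control catalogue: each control is implemented
# once and mapped to every regime it helps evidence.
CONTROLS = {
    "model-inventory":        {"EU AI Act", "SS1/23", "FCA SYSC"},
    "independent-validation": {"EU AI Act", "SS1/23"},
    "incident-reporting":     {"EU AI Act", "SS1/23", "FCA SYSC", "DORA"},
    "vendor-due-diligence":   {"EU AI Act", "SS1/23", "FCA SYSC", "DORA"},
    "outcome-monitoring":     {"FCA Consumer Duty"},
}

def coverage(regime: str) -> list[str]:
    """Controls contributing evidence for a given regime."""
    return sorted(c for c, regimes in CONTROLS.items() if regime in regimes)

print(coverage("DORA"))  # ['incident-reporting', 'vendor-due-diligence']
```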

What's next

In Module 3 we'll cover how to map your specific AI use cases to the risk tiers and obligations defined by these regulations — the practical triage exercise that turns the regulatory landscape into a working portfolio view.

Module Quiz

5 questions — Pass mark: 60%

Q1. What is the EU AI Act's core regulatory mechanism?

Q2. What does PRA SS1/23 cover?

Q3. What is the relationship between SS1/23 and AI?

Q4. What is DORA?

Q5. Which framework is the FCA's expectation on AI primarily expressed through?