Three Lines of Defence for AI: How to Design Governance That Works in Regulated Industries
The three-lines-of-defence model has been the standard governance architecture in financial services for more than a decade. First-line business functions own and manage risk. Second-line risk and compliance functions provide oversight, challenge, and frameworks. Third-line audit provides independent assurance. The model works because it creates clear accountability and prevents the people who take risk from being the same people who assess it.
Most regulated firms have tried to apply the three-lines-of-defence model to their AI deployments and found that it does not work out of the box. The model was designed for human decision-making in established processes. AI introduces new categories of risk (model risk, data risk, bias risk, drift risk) that cut across all three lines and do not fit neatly into the existing committee structure.
This post is about how to redesign the three-lines-of-defence model so it works for AI, based on our experience across banking, insurance, asset management, healthcare, and energy.
Why the standard model breaks for AI
The three-lines-of-defence model breaks for AI in three specific ways:
The first line does not understand what it owns
In a traditional process, the first-line business function understands what it does, how it does it, and what can go wrong. The operations team lead can explain the reconciliation process step by step. When AI is introduced, the first-line owner often does not understand the model well enough to own the risk. They know the model produces outputs, but they cannot explain how it works, what its limitations are, or what would cause it to fail.
This is not a training problem. It is a role design problem. The first line needs new roles (system supervisors, exception handlers, feedback curators) that are designed to operate alongside AI rather than to be replaced by it.
The second line does not have the skills to challenge
Traditional second-line risk and compliance functions are staffed with people who understand credit risk, market risk, operational risk, and regulatory compliance. They are not staffed with people who can assess an ML model for bias, evaluate a training dataset for representativeness, or interpret a model's feature importance scores. When the AI team presents a model for second-line review, the review is often ceremonial rather than substantive.
The fix is not to hire an army of data scientists into the second line. It is to create a small, specialist AI risk team within second-line risk that can provide genuine technical challenge, and to embed those specialists into the AI project teams from day one rather than reviewing at a gate.
The third line cannot audit what it cannot see
Third-line audit is supposed to provide independent assurance that the governance framework is working. For AI, this means auditing the model inventory, the validation process, the monitoring framework, and the decision logs. In most firms, the audit function does not have access to the data, the tools, or the skills to audit AI systems. The result is either no AI audit at all or a procedural audit that checks whether policy documents exist without assessing whether they are followed.
The Institute of Internal Auditors has published guidance on auditing AI, and the FCA has made clear that it expects firms to have assurance over their AI deployments. The gap between expectation and capability is real.
The redesigned model
The three-lines-of-defence model for AI that works in practice has three features that distinguish it from the standard approach.
Feature 1: First-line accountability through system supervision
The first-line role changes from "process executor" to "system supervisor." The system supervisor is accountable for the AI system's outputs, understands its limitations, has the authority to override it, and captures the override rationale in a structured form that feeds back into model improvement.
This requires:
- A clear decision rights matrix that specifies which decisions the system can make autonomously, which require human review, and which must be escalated (see the code sketch after this list)
- Override interfaces that capture the override reason in structured fields (not free text)
- Performance dashboards that the first-line owner reviews daily, including drift indicators, override rates, and exception volumes
- Training that equips the first-line team to supervise the system rather than blindly follow or blindly ignore its outputs
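To make the first two requirements concrete, here is a minimal sketch of a decision rights matrix and a structured override record in Python. The decision types, confidence thresholds, and reason codes are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class DecisionRight(Enum):
    """Who is allowed to act on each decision type."""
    AUTONOMOUS = "autonomous"      # system decides with no human in the loop
    HUMAN_REVIEW = "human_review"  # system proposes, supervisor confirms
    ESCALATE = "escalate"          # system must route to a named owner


# Illustrative decision rights matrix: decision type -> (right, confidence
# floor). Below the floor, the decision drops one tier toward escalation.
DECISION_RIGHTS: dict[str, tuple[DecisionRight, float]] = {
    "payment_release_under_10k": (DecisionRight.AUTONOMOUS, 0.90),
    "payment_release_over_10k": (DecisionRight.HUMAN_REVIEW, 0.75),
    "account_closure": (DecisionRight.ESCALATE, 0.0),
}


class OverrideReason(Enum):
    """A closed list of override reasons: structured fields, not free text."""
    DATA_QUALITY = "input_data_suspect"
    POLICY_EXCEPTION = "documented_policy_exception"
    MODEL_LIMITATION = "known_model_limitation"
    CUSTOMER_CONTEXT = "context_not_visible_to_model"


@dataclass
class OverrideRecord:
    """One supervisor override, captured so the model team can learn from it."""
    decision_id: str
    model_version: str
    model_output: str
    supervisor_action: str
    reason: OverrideReason
    supervisor_id: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Because the reason is a closed enumeration rather than free text, override records can be aggregated: a spike in MODEL_LIMITATION overrides is a signal the first-line dashboard can surface and the second line can act on.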
The AI Enablement for Operations Leaders course covers the first-line accountability framework in Module 4 (Decision Rights) and Module 6 (Talent and Operating Model).
Feature 2: Embedded second-line risk specialists
Instead of second-line risk reviewing AI projects at a gate (usually just before deployment, when the design is expensive to change), a dedicated AI risk specialist from the second line is embedded in the project team from day one.
The embedded specialist:
- Participates in data layer design to ensure lineage and quality requirements are met
- Reviews the model methodology during development, not after
- Designs the monitoring framework alongside the AI team
- Captures validation evidence as a by-product of the build process (sketched below)
- Represents the second-line perspective in every design decision
This is the structural change described in the governance pillar essay. The result is that governance evidence is captured continuously rather than retrofitted, and the deployment cycle compresses from months to weeks because there is no last-minute scramble to produce evidence the team did not collect.
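As one illustration of evidence captured as a by-product, the sketch below shows a validation check that writes a structured evidence record in the same call that performs the check. The file location, check name, and the 0.05 threshold are assumptions for the example, not standards; the embedded risk specialist would set the actual checks and thresholds for each model.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_LOG = Path("validation_evidence.jsonl")  # illustrative location


def record_evidence(check_name: str, passed: bool, metrics: dict) -> None:
    """Append one structured evidence record as the check runs.

    Evidence is written at build time, by the same code that performs
    the check, so nobody has to reconstruct it before deployment.
    """
    record = {
        "check": check_name,
        "passed": passed,
        "metrics": metrics,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with EVIDENCE_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")


def check_subgroup_performance(scores_by_group: dict[str, float],
                               max_gap: float = 0.05) -> bool:
    """Example second-line check: performance gap across subgroups.

    The 0.05 threshold is an illustrative number the embedded risk
    specialist would set, not a regulatory standard.
    """
    gap = max(scores_by_group.values()) - min(scores_by_group.values())
    passed = gap <= max_gap
    record_evidence("subgroup_performance_gap", passed,
                    {"gap": gap, "threshold": max_gap, **scores_by_group})
    return passed
```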
For further reading on the embedded governance model, the European Banking Authority's guidelines on AI use provide the supervisory perspective.
Feature 3: Continuous assurance through decision log analytics
Third-line audit shifts from periodic procedural reviews to continuous assurance based on decision log analytics. Instead of auditing whether policy documents exist, the audit function analyses the decision logs that the AI system produces as a by-product of operation:
- Are override rates within expected bands?
- Are there patterns of systematic override that suggest the model is not performing as expected?
- Are escalation thresholds being triggered and acted upon?
- Are decision logs complete, consistent, and queryable on demand?
- Can an individual decision be reconstructed end-to-end from the log?
This requires the action-data layer to be designed from the start to produce audit-ready decision logs. It also requires the audit function to have access to analytics tools that can query and visualise the decision log data; the sketch below shows what those checks can look like.
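The following is a minimal sketch of the four checks above as decision log analytics, assuming a flat log with one row per decision and illustrative column names (decision_id, timestamp, model_version, overridden, override_reason, escalated, escalation_actioned). The expected bands are placeholders the audit function would calibrate per system.

```python
import pandas as pd

# Assumed schema: one row per decision, columns as described above.
logs = pd.read_parquet("decision_logs.parquet")  # illustrative source
logs["timestamp"] = pd.to_datetime(logs["timestamp"])

# 1. Are override rates within expected bands? (Band is illustrative.)
weekly_rate = logs.set_index("timestamp").resample("W")["overridden"].mean()
out_of_band = weekly_rate[(weekly_rate < 0.02) | (weekly_rate > 0.10)]

# 2. Systematic override patterns: reasons concentrated in one category
# point to a model limitation rather than one-off supervisor judgment.
reason_mix = (
    logs[logs["overridden"]]
    .groupby("override_reason")
    .size()
    .sort_values(ascending=False)
)

# 3. Are escalation thresholds being triggered AND acted upon?
unactioned = logs[logs["escalated"] & ~logs["escalation_actioned"]]

# 4. Completeness: every decision must be reconstructable end to end.
required = ["decision_id", "model_version", "overridden"]
incomplete = logs[logs[required].isna().any(axis=1)]

print(f"Weeks out of band: {len(out_of_band)}")
print(f"Unactioned escalations: {len(unactioned)}")
print(f"Incomplete records: {len(incomplete)}")
```

Because these queries run against logs the system emits anyway, the audit function can run them monthly or continuously rather than waiting for an annual review cycle.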
How this applies beyond financial services
The three-lines-of-defence model originated in financial services, but the principles apply in any regulated industry where AI decisions affect people:
- In healthcare, the first line is the clinical team supervising the AI, the second line is clinical governance and patient safety, and the third line is the inspectorate (CQC in the UK, state licensing bodies in the US). MHRA expectations on clinical AI governance map directly to this model.
- In energy and utilities, the first line is the control room operating the AI-assisted dispatch, the second line is the safety case function, and the third line is the safety regulator (HSE in the UK, OSHA in the US).
- In the public sector, the first line is the casework team using the AI decision support tool, the second line is the policy and ethics function, and the third line is internal audit and the National Audit Office. The UK Government AI Playbook sets the expectations.
- In life sciences, the first line is the clinical operations or pharmacovigilance team, the second line is the quality and validation function under GxP, and the third line is internal audit or an external GxP audit firm.
The structural pattern is the same in every case: first-line system supervision, embedded second-line risk partnership, and continuous third-line assurance through decision analytics.
Getting started
The practical first step is to assess how your current three-lines-of-defence model handles AI today. Our AI Enablement Maturity Diagnostic includes a governance pillar that scores your current state across the five dimensions that matter: model inventory completeness, first-line accountability clarity, second-line challenge capability, monitoring maturity, and audit coverage.
If the diagnostic reveals gaps (it usually does), the next step is a focused AI Enablement diagnostic engagement that maps the gaps to specific remediation actions and sequences them into a 12-24 month roadmap.
For the detailed mechanics, our AI Governance and Model Risk course covers the three-lines-of-defence framework in Module 4, with worked examples in financial services, healthcare, and energy.
Ready to do the structural work?
Our AI Enablement engagements are built around the five pillars in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.
Explore the AI Enablement service
Related insights
AI Model Risk Management Under PRA SS1/23: What COOs and CROs Actually Need to Do (April 10, 2026)
A practical guide to managing AI model risk under PRA SS1/23 for banking and insurance leaders. Covers the supervisory expectations, the model lifecycle, and the governance machinery that satisfies the PRA on first reading.

AI Governance for Healthcare: Where Clinical Safety Meets Model Risk (April 06, 2026)
How hospitals and health systems should govern AI deployments under MHRA, FDA, CQC, and the EU AI Act. Covers clinical decision support, diagnostic AI, and the governance model that satisfies both the regulator and the clinical governance committee.

Governance That Enables AI Deployment (Instead of Blocking It) (April 06, 2026)
Most enterprises experience AI governance as the function that says no. Done well, governance is what enables audit-defensible AI deployment by removing the institutional friction that normally kills enterprise initiatives — without weakening control.