Module 4

Three Lines of Defence for AI

How to redraw the classic 3LoD structure for AI use cases that don't fit the legacy model — and what each line actually does for an AI deployment.


Why 3LoD needs a refresh for AI

The three-lines-of-defence (3LoD) model is the dominant risk governance framework in financial services. It is well understood, well tested, and (in principle) compatible with AI work. In practice, AI breaks 3LoD in subtle ways that produce real governance failures if you don't address them deliberately.

This module is about how to apply 3LoD to AI properly — what each line actually does for an AI use case, where the legacy model fails, and how to fix it.

The classic 3LoD structure

For readers who want a quick refresher, here is the classic structure:

First line of defence — the business units that own the activities creating risk. They are responsible for identifying, managing, and reporting their risks. They run the controls. They own the outcomes.

Second line of defence — independent risk and compliance functions that provide oversight, challenge, policy, and reporting. They do not run the business activities. They provide independent assurance that the first line is managing risk properly.

Third line of defence — internal audit, which provides independent assurance that the first and second lines are doing their jobs. They review the design and operation of the control framework.

The model works because each line has clear responsibilities, the lines are independent of each other, and the structure is hierarchical: first line acts, second line oversees, third line audits.

Where AI breaks the model

AI use cases break the classic 3LoD structure in three ways.

1. Ownership ambiguity. Who is the first-line owner of an AI model that was built by the central AI team for a business unit's use? The AI team built it. The business unit uses it. The AI team has the technical expertise. The business unit has the operational accountability. In most enterprises this is unclear, and the result is that nobody is fully accountable. When something goes wrong, both sides can point at the other.

2. Technical depth in second line. Independent validation of an AI model requires significant technical expertise — understanding of the algorithm, the data, the training pipeline, the monitoring stack. Most second-line risk and compliance functions don't have this expertise yet, which means second-line oversight is often theatre rather than substance.

3. The vendor problem. Many AI use cases depend on third-party models or platforms whose internals the institution cannot inspect. Classic 3LoD assumes you can validate the controls. With opaque vendors, you can't.

These failures are addressable, but only if you redesign the 3LoD application explicitly for AI. Default 3LoD won't work.

First line for AI — making ownership explicit

The first fix is to make first-line ownership of AI explicit and named. The pattern that works:

  • Every AI use case has a named first-line owner from the business unit, not from the AI team. The owner is a senior business person — head of operations, head of credit, head of fraud — whose role explicitly includes accountability for the AI's behaviour in their domain.
  • The AI team is a service provider to the first line. They build, deploy, and operate the model on behalf of the first line, but they are not the accountable owner. They are part of the first line's delivery chain.
  • The first-line owner has authority to pause or roll back the model. This is non-negotiable. Without this authority, ownership is theatre.
  • The first-line owner's job description explicitly includes monitoring the model's behaviour, reviewing override patterns, and intervening when things look wrong. This is a real change from the traditional ops director role and needs to be made explicit, with training and time allocation to match.

If you can answer the question "who owns the AI in this domain?" with a single named human who has authority, training, and time to actually do the job — your first line is in good shape. If you can't, you have governance debt.
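One way to keep that test honest is to hold a simple ownership record per use case and check it. The sketch below is illustrative only: the record fields, the `has_clear_owner` check, and the example values are assumptions about how you might capture this, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass
class AIUseCaseOwnership:
    """Illustrative first-line ownership record for one AI use case."""
    use_case: str
    owner_name: str              # a single named human, not a team
    owner_role: str              # e.g. "Head of Fraud Operations"
    can_pause_or_rollback: bool  # authority to pause or roll back the model
    trained_for_role: bool       # owner has been trained for AI oversight
    time_allocated: bool         # monitoring duties are in the job description


def has_clear_owner(record: AIUseCaseOwnership) -> bool:
    """The test from this section: a named owner with authority,
    training, and time. Anything less is governance debt."""
    return all([
        bool(record.owner_name.strip()),
        record.can_pause_or_rollback,
        record.trained_for_role,
        record.time_allocated,
    ])


# Example: an ownership record that fails the test
record = AIUseCaseOwnership(
    use_case="transaction-fraud-scoring",
    owner_name="A. Example",
    owner_role="Head of Fraud Operations",
    can_pause_or_rollback=False,  # no authority -> ownership is theatre
    trained_for_role=True,
    time_allocated=True,
)
print(has_clear_owner(record))  # False -> governance debt
```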

Second line for AI — building real challenge capability

The second fix is to give second line the technical capability to provide real challenge rather than ceremonial oversight.

This usually means three things:

  • Hire or train technical second-line specialists. Model risk, data risk, and AI risk are now distinct competencies, and your second line needs them. Some firms have built dedicated AI risk teams; others have upskilled existing model risk functions.
  • Independent validation must be substantively independent. Not just "a different person on the same team." Different reporting line, different incentives, with the authority to block deployment if validation fails.
  • Second line participates from the start. Embedded in the design phase, not pulled in at deployment. This is one of the biggest opportunities to reduce friction in AI governance: the same person who would have blocked the use case at the gate can help shape it during the design.

A useful test: can your second line meaningfully challenge an AI use case on its data, algorithm, monitoring, or validation? If yes, you have real second line. If no, your second line is producing paper rather than oversight.
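The independence requirement can also be expressed as a deployment gate. The following is a minimal sketch under assumed names: `ValidationSignOff`, the reporting-line fields, and the gate logic are illustrative, not a standard your firm will have in this form.

```python
from dataclasses import dataclass


@dataclass
class ValidationSignOff:
    """Illustrative record of a second-line validation for one use case."""
    validator: str
    validator_reporting_line: str  # e.g. "CRO"
    builder_reporting_line: str    # e.g. "CTO" or the business unit
    passed: bool
    can_block_deployment: bool


def deployment_allowed(sign_off: ValidationSignOff) -> bool:
    """Substantive independence: a different reporting line, real blocking
    authority, and a passed validation. 'A different person on the same
    team' fails this check."""
    independent = (
        sign_off.validator_reporting_line != sign_off.builder_reporting_line
    )
    return independent and sign_off.can_block_deployment and sign_off.passed


sign_off = ValidationSignOff(
    validator="B. Example",
    validator_reporting_line="CRO",
    builder_reporting_line="CTO",
    passed=True,
    can_block_deployment=True,
)
print(deployment_allowed(sign_off))  # True -> the gate opens
```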

Third line for AI — extending audit competence

Internal audit needs to keep up with what the first two lines are doing, which means audit also needs AI competence. The minimum requirement is that audit can:

  • Walk an AI use case end-to-end and understand what each component does
  • Test the design of the controls (does the model risk file actually exist? Are SLOs defined?)
  • Test the operation of the controls (are the SLOs being met? Are overrides being reviewed?)
  • Sample individual decisions and validate that they were governed correctly
  • Recognise when something looks wrong and know what to do about it

This is not the same skill set as financial audit. Most internal audit functions are still building this capability, and the gap between what audit can do and what AI deployments require is one of the structural risks in the current state.
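The design and operation tests in the list above can be sketched as checks over a use case's governance evidence. Everything here is an assumption for illustration: the evidence fields, the higher-is-better SLO comparison, and the sample size are not a standard audit programme.

```python
import random
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class UseCaseEvidence:
    """Illustrative evidence pack audit might request for one use case."""
    model_risk_file_exists: bool
    slos_defined: Dict[str, float]   # SLO name -> target
    slo_actuals: Dict[str, float]    # SLO name -> observed value
    overrides_reviewed: bool
    decision_log: List[dict]         # individual governed decisions


def test_control_design(ev: UseCaseEvidence) -> bool:
    """Design test: do the controls exist on paper?"""
    return ev.model_risk_file_exists and len(ev.slos_defined) > 0


def test_control_operation(ev: UseCaseEvidence) -> bool:
    """Operation test: are the controls actually being met?
    Assumes higher-is-better SLO targets for simplicity."""
    slos_met = all(
        ev.slo_actuals.get(name, 0.0) >= target
        for name, target in ev.slos_defined.items()
    )
    return slos_met and ev.overrides_reviewed


def sample_decisions(ev: UseCaseEvidence, n: int = 25) -> List[dict]:
    """Pull a random sample of individual decisions for substantive review."""
    k = min(n, len(ev.decision_log))
    return random.sample(ev.decision_log, k)
```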

The vendor problem

Vendor AI is fundamentally different from in-house AI for governance purposes, because you cannot inspect the model's internals. You can only see what the vendor lets you see and what the model produces.

The pattern that works:

  • Treat vendor AI as a third-party risk question first, not a model risk question. DORA's third-party risk requirements are usually the right framing — contract terms, exit strategies, supervisory access, incident reporting.
  • Require vendor evidence. Model cards, validation reports, monitoring outputs, incident histories, SOC 2/ISO certifications, audit rights. If the vendor won't provide these, you can't govern the use case adequately.
  • Apply equivalent monitoring on the inputs and outputs. Even if you can't see the model, you can see what you send to it and what comes back. Monitoring those is your visibility into model behaviour.
  • Maintain a fallback. If the vendor goes down or behaves badly, what's the backup? An AI workflow that depends on a single vendor with no fallback is an operational risk.

Vendor AI is not bad. It is unavoidable. But it requires a different governance posture than in-house AI, and pretending otherwise is a common failure.
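Because the vendor model's internals are off-limits, the practical monitoring point is the call boundary: log what you send, log what comes back, and fall back when the vendor fails or misbehaves. A minimal sketch of that idea, assuming a hypothetical `vendor_score` client and `fallback_score` backup; the logging and plausibility check are illustrative.

```python
import logging
from typing import Callable

logger = logging.getLogger("vendor_ai_monitor")


def monitored_call(
    payload: dict,
    vendor_score: Callable[[dict], float],    # hypothetical vendor client
    fallback_score: Callable[[dict], float],  # e.g. a rules-based backup
    plausible_range: tuple = (0.0, 1.0),
) -> float:
    """Log inputs and outputs at the vendor boundary; fall back if the
    vendor call fails or returns something outside the plausible range."""
    logger.info("vendor request: %s", payload)
    try:
        result = vendor_score(payload)
        logger.info("vendor response: %s", result)
        low, high = plausible_range
        if not (low <= result <= high):
            logger.warning("implausible vendor output, using fallback")
            return fallback_score(payload)
        return result
    except Exception:
        logger.exception("vendor call failed, using fallback")
        return fallback_score(payload)
```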

Putting it together

A working 3LoD setup for AI looks like this:

First line: Named business owner per use case. AI team operates as a delivery partner. Owner has authority, training, and time to monitor and intervene. Each use case has a defined risk treatment (per Module 3's portfolio view).

Second line: AI risk specialists embedded in the design phase. Independent validation by a dedicated function. Authority to block deployment. Real technical depth.

Third line: Audit competence in AI controls. Periodic substantive audits of high-risk use cases. Sample testing of individual decisions.

Vendor management: Third-party risk framework applied to AI vendors. Contractual evidence requirements. Fallback plans for critical vendors.

When all four are in place, governance is substantive and (counterintuitively) faster than the legacy "approval committee" model — because the work is happening continuously inside the deployment lifecycle rather than as a gate at the end.
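One way to keep this summary honest is a per-use-case checklist over the four areas. A sketch only: the flag names are assumptions about how you record this, and none of the flags substitutes for the substance behind it.

```python
def governance_in_place(use_case: dict) -> bool:
    """Illustrative check that all four areas of the 3LoD-for-AI setup
    exist for a use case. The flag names are assumed, not prescribed."""
    required = [
        "named_first_line_owner",        # first line
        "independent_validation_done",   # second line
        "audit_scope_defined",           # third line
        "vendor_fallback_or_in_house",   # vendor management
    ]
    return all(use_case.get(flag, False) for flag in required)


print(governance_in_place({
    "named_first_line_owner": True,
    "independent_validation_done": True,
    "audit_scope_defined": True,
    "vendor_fallback_or_in_house": False,  # single vendor, no fallback
}))  # False
```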

What's next

In Module 5 we'll cover embedded governance in practice — the operational patterns that make governance live inside the workflow rather than alongside it.

Module Quiz

5 questions — Pass mark: 60%

Q1. In the classic three-lines-of-defence model, who owns the risk?

Q2. What is the first line's job for an AI deployment?

Q3. What is the second line's job for an AI deployment?

Q4. What is the third line's job for an AI deployment?

Q5. What is the most common failure mode of 3LoD applied to AI?