How to Scope an AI Enablement Engagement: What Senior Leaders Should Ask Before Signing

April 04, 2026

If you are a COO, CRO, or CTO in a regulated industry and you are about to scope an AI enablement engagement, this post is for you. Not the vendor pitch. Not the capability deck. An honest buyer's guide to what you should ask, what you should look for, and the red flags that signal the firm you are evaluating cannot actually do the structural work.

We publish this because the quality of scoping determines the quality of outcome. A well-scoped engagement produces a defensible operating model redesign that compounds. A poorly scoped one produces a slide deck that sits on a shelf. The difference is visible in the first four weeks.

The five questions to ask before you engage

1. "What is your unit of change?"

If the answer is "we implement AI tools" or "we deploy models," that is augmentation work, not enablement work. The unit of change in AI enablement is the workflow, not the technology. The firm should be able to explain how they map the current-state workflow (in BPMN 2.0 or equivalent), identify the structural redesign opportunities, and rebuild the workflow around AI as a native capability.

Ask to see a sample current-state map and future-state map from a previous engagement. If they cannot produce one, they have not done the work.

2. "How do you handle governance?"

If governance is a separate workstream that runs in parallel to the delivery work, that is a red flag. Governance should be embedded in the workflow design from day one, with second-line risk specialists participating in the design conversation rather than reviewing at a gate.

Ask specifically: "How do you satisfy PRA SS1/23 model risk expectations?" or the equivalent for your sector (MHRA for healthcare, EU AI Act for cross-border). If the firm waves at governance as an add-on, they will produce a system that is not defensible under supervisory review.

3. "What happens to the operating model?"

AI enablement is not a technology project. It is an operating model redesign. The workflow changes, the data layer changes, the decision rights change, and the roles change. If the firm does not have a clear view on how the operating model evolves, they are selling technology implementation, not enablement.

Ask: "What roles will be different after the engagement?" If the answer is "none," the engagement is not enablement.

4. "How do you measure compounding?"

The purpose of AI enablement is to create structural improvements that compound over time, not one-time efficiency gains that plateau. The firm should be able to explain the data flywheel mechanism: how the system gets better quarter over quarter through structured feedback from production operations.

Ask to see the performance trajectory from a previous engagement. If the gains flattened after 12 months, the engagement produced augmentation, not enablement.

5. "What do you leave behind?"

At the end of the engagement, your team should own the redesigned workflow, the data layer, the governance framework, the decision rights matrix, and the operating model design. Knowledge transfer should be built into the engagement structure, not an afterthought. If the firm's business model depends on you needing them forever, that is a misalignment.

Ask: "In 18 months, will we need you to run this?" The right answer is no.

The engagement shapes that work

Based on our experience and the patterns we see working across banking, insurance, asset management, healthcare, energy, and public sector, there are four engagement shapes that produce reliable outcomes:

Diagnostic (4-6 weeks). An honest current-state assessment of one priority value stream. Produces the AI portfolio audit, the regulatory triage, the current-state workflow map, and a defensibility memo. The diagnostic is the "do we have a real opportunity?" test. It should be priced as a standalone deliverable that has value even if you do not continue. Our pricing page describes how we scope diagnostics.

Strategy and Blueprint (10-14 weeks). The diagnostic plus the future-state design: operating model blueprint, redesigned workflow, data architecture, decision rights matrix, governance framework, and a sequenced implementation roadmap. This is the engagement shape that McKinsey, BCG, and most Tier 1 advisory firms offer. The difference is whether the blueprint is a slide deck or a working specification that an implementation team can build from.

Transformation Programme (9-24 months). Blueprint plus hands-on delivery: embedded practitioners leading the workflow rebuild, data layer implementation, governance instrumentation, and the change programme including role redesign and team retraining. This is where the structural value actually lands. It is also where most consulting firms fail, because delivery requires a different skill set than advisory.

Executive Advisory Retainer (ongoing). Senior advisory access for firms already executing on a strategy. Monthly working sessions, ad-hoc reviews, and support for the executive sponsor. The retainer is the shape that works when you have the internal team but need a senior external perspective to challenge the work and keep it honest.

The red flags

Watch for these in the scoping process:

"We can start next week." Serious enablement work requires scoping, and scoping requires understanding your operating model, your regulatory environment, and your political complexity. A firm that can start immediately has either pre-sold you a generic package or does not understand the scope.

"Our platform does this." AI enablement is operating model work, not platform implementation. A firm that leads with its own technology is selling a product, not a redesign. Vendor-agnostic advice is essential because the technology choice should follow the workflow design, not drive it.

"We guarantee X% improvement." Outcomes depend on your data maturity, your regulatory environment, your political complexity, and the quality of implementation. Any firm that guarantees a specific percentage improvement before the diagnostic is promising something they cannot control. The honest firms say: "We have seen 20-35% structural cost reduction in comparable engagements, but your outcome will depend on factors we need to assess."

No regulatory fluency. If the firm cannot discuss PRA SS1/23, FCA Consumer Duty, EU AI Act, MHRA, or the equivalent for your sector with the same fluency they discuss the technology, they will produce a system that is not supervisory-defensible.

No previous operating model work. Ask for examples of previous engagements where the firm redesigned workflows, changed decision rights, redesigned roles, and rebuilt the data layer. If the portfolio is all technology implementation with no operating model change, the firm can build but it cannot redesign.

How to evaluate the ROI

Before engaging, use our AI Enablement ROI Calculator to model the 5-year cost gap between your current operating model and an AI-enabled one, and to compare building the capability in-house vs. engaging a specialist. The calculator uses conservative assumptions and is calibrated to the outcomes we have observed across real engagements.
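The core arithmetic behind a cost-gap model like this can be sketched in a few lines. The figures, growth rates, and function name below are illustrative assumptions for the sketch, not the calculator's actual inputs or calibration:

```python
# Hypothetical sketch of a five-year cost-gap model. All parameter names
# and default figures are illustrative assumptions.

def five_year_cost_gap(
    annual_cost: float,                  # current operating cost of the value stream
    cost_growth: float = 0.04,           # assumed annual cost inflation (baseline case)
    structural_reduction: float = 0.05,  # assumed compounding yearly efficiency gain
    years: int = 5,
) -> dict:
    baseline, enabled = [], []
    b_cost, e_cost = annual_cost, annual_cost
    for _ in range(years):
        baseline.append(b_cost)
        enabled.append(e_cost)
        b_cost *= 1 + cost_growth                                 # costs drift upward
        e_cost *= (1 + cost_growth) * (1 - structural_reduction)  # gains compound yearly
    return {
        "baseline_total": sum(baseline),
        "enabled_total": sum(enabled),
        "gap": sum(baseline) - sum(enabled),
    }

result = five_year_cost_gap(annual_cost=10_000_000)
print(f"5-year cost gap: £{result['gap']:,.0f}")
```

The point of the compounding term is visible in the shape of the numbers: a modest yearly structural gain widens the gap each year rather than producing a one-time step down, which is the pattern distinguishing enablement from augmentation.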

For a more detailed assessment, the AI Enablement Maturity Diagnostic scores your current state across the five enablement pillars and identifies where the binding constraint sits. This is useful for the scoping conversation because it gives both sides a common language for where the work should focus.

For procurement teams managing the RFP process, the UK Government's guidelines on procuring AI provide a useful framework even for private-sector firms.

The right engagement, properly scoped, is one of the highest-ROI investments a regulated firm can make. The wrong engagement, poorly scoped, is a write-off that sets the AI programme back two years. The scoping conversation is where you find out which one you are buying.

Ready to do the structural work?

Our AI Enablement engagements are built around the five pillars in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.

Explore the AI Enablement service
