Module 3

The Workflow Redesign Framework

A six-step framework for rebuilding any workflow from augmentation mode into a genuinely AI-native operating pattern, with worked examples in financial services.


A six-step framework

This module is the practical core of the course. It is the framework we use in real engagements when a priority workflow has been selected for AI-native redesign. It is deliberately analysis-heavy and template-light, because the value is in asking the right questions, not in following a template.

The six steps are:

  1. Map the current workflow honestly
  2. Identify the decision and information flows
  3. Re-architect around continuous information processing
  4. Define the human-in-the-loop boundaries
  5. Specify the data and feedback requirements
  6. Plan the change, not just the system

We'll walk through each one, then close with a worked example in regulatory reporting.

Step 1 — Map the current workflow honestly

Start with a clean current-state map. Not the official process documentation. The actual workflow as it is run today, with the workarounds, the implicit hand-offs, the email chains, the spreadsheets that everyone pretends do not exist, and the informal escalation paths that the team uses when the official ones fail.

You can use BPMN 2.0 or any notation your team is comfortable with — what matters is being ruthless about capturing what actually happens, not what is supposed to happen. The reason this matters is that AI-native redesign almost always exposes the gap between the documented process and the lived process. If you start from the documented version, you will design a future state that doesn't address the friction your team actually deals with. The team will reject it. The redesign will fail. And it will look like AI didn't deliver, when in fact you redesigned the wrong thing.

In practice: spend time with the people doing the work. Watch them. Ask them what they do when the system goes down, when the data is missing, when the customer is unusual. That is where the real workflow lives.

Step 2 — Identify the decision and information flows

Once you have the honest map, strip the workflow down to its decision points and information flows.

  • Where does information enter the process?
  • Where is it interpreted?
  • Where does someone make a judgment call?
  • Where does that judgment turn into an action?
  • Where does that action generate new information that feeds back into the workflow?

This is the level at which AI-native redesign becomes possible. You are no longer thinking about tasks — too granular, too distracting. You are no longer thinking about outcomes — too abstract, no design leverage. You are thinking about decisions and information flows. AI is fundamentally a system for processing information and producing decisions, so this is the layer at which the redesign happens.

A useful artefact at this stage is a simplified flow that shows only: information sources → interpretation points → decision points → actions → feedback. Strip out every step that is just "moving data around." Strip out every step that is just "checking a box." What remains is the cognitive backbone of the workflow.
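The stripping exercise can be sketched as a small Python illustration. The `Step` dataclass, the `kind` labels, and the step names are all hypothetical, introduced only to show the idea of keeping the cognitive backbone and discarding data movement and box-checking:

```python
from dataclasses import dataclass

# Hypothetical step categories; the first five form the cognitive backbone.
BACKBONE = {"source", "interpretation", "decision", "action", "feedback"}

@dataclass
class Step:
    name: str
    kind: str  # e.g. "source", "decision", "data_movement", "checkbox"

def cognitive_backbone(steps):
    """Keep only the steps that carry information or judgment."""
    return [s for s in steps if s.kind in BACKBONE]

workflow = [
    Step("pull trade file", "source"),
    Step("copy into spreadsheet", "data_movement"),
    Step("classify exposure", "decision"),
    Step("tick review box", "checkbox"),
    Step("file report", "action"),
]

print([s.name for s in cognitive_backbone(workflow)])
# ['pull trade file', 'classify exposure', 'file report']
```

The point of the artefact is what survives the filter: in most workflows we map, well over half the steps fall out at this stage.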

Step 3 — Re-architect around continuous information processing

Now ask the redesign question: what would this workflow look like if information were processed continuously, decisions were generated by default, and humans intervened only where the system couldn't or shouldn't proceed alone?

Some specific things to look for:

  • Sequential steps that could become parallel. When humans are in the loop, work is often serialised because each step needs human attention. AI doesn't have that constraint.
  • Hand-offs that become unnecessary. Many hand-offs exist because one team has information another team needs. If the system can route information automatically, the hand-off can be eliminated.
  • Decisions that become defaults. Many human decisions are pattern-matching exercises with low variance — exactly what AI handles well, with the human moving to an "approve, reject, or override" posture.
  • Exception handling that becomes the human's primary job. In an AI-native workflow, the human role often shifts from "do the work" to "handle the cases the system couldn't, and improve the system based on what it learned."
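The "decisions become defaults, exceptions go to humans" posture in the bullets above can be sketched in a few lines of Python. The `route` function, the field names, and the thresholds are illustrative assumptions only; in a real engagement the limits come from model validation and the institution's risk appetite:

```python
# Hypothetical thresholds, for illustration only.
CONFIDENCE_FLOOR = 0.90      # below this, the system should not decide alone
HIGH_STAKES = 1_000_000      # exposure above this always goes to a human

def route(case):
    """Decide by default; escalate only where the system can't or shouldn't proceed alone."""
    if case["exposure"] >= HIGH_STAKES:
        return ("human", "high stakes")
    if case["model_confidence"] < CONFIDENCE_FLOOR:
        return ("human", "low confidence")
    return ("system_default", case["model_decision"])

print(route({"exposure": 50_000, "model_confidence": 0.97, "model_decision": "approve"}))
# ('system_default', 'approve')
print(route({"exposure": 2_000_000, "model_confidence": 0.99, "model_decision": "approve"}))
# ('human', 'high stakes')
```

Note the inversion: the system decides unless a rule routes the case out, rather than the human deciding unless the system happens to help.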

This is the point in the redesign where the workflow stops looking like the old one with AI bolted on, and starts looking like a different operating pattern entirely.

Step 4 — Define the human-in-the-loop boundaries

This step is critical and almost always skipped. AI-native does not mean human-out-of-the-loop. It means humans are in the loop intentionally, placed where their judgment, accountability, or context adds real value, and removed where it does not.

For each remaining human touchpoint, ask: what specifically is the human being asked to contribute?

  • If the answer is "review and approve," the contribution is accountability, not judgment. The workflow should be designed to support that — fast review interfaces, clear audit trails, escalation paths for disagreement, and metrics that track the override rate over time so you can see when human review is adding value and when it has become a rubber stamp.
  • If the answer is "exercise judgment in ambiguous or high-stakes situations," the contribution is judgment. These touchpoints should be designed differently: more context, more time, fewer interruptions, and the system should actively prepare the human for the decision rather than just dumping it on them.
  • If the answer is "the regulator requires a human in this loop," the contribution is regulatory compliance. Be honest about that. Don't pretend the human is doing judgment when they're really doing a sign-off.
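The override-rate metric mentioned in the first bullet can be made concrete with a minimal sketch. The `OverrideTracker` class and its window size are hypothetical; the idea is simply a rolling rate that makes rubber-stamping visible:

```python
from collections import deque

class OverrideTracker:
    """Rolling override rate over the last `window` reviews.

    A rate that sits near zero for long stretches suggests human review
    has stopped adding judgment and become a rubber stamp."""

    def __init__(self, window=100):
        self.events = deque(maxlen=window)  # True = human overrode the system

    def record(self, overridden: bool):
        self.events.append(overridden)

    def rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

tracker = OverrideTracker(window=5)
for outcome in [False, False, True, False, False]:
    tracker.record(outcome)
print(tracker.rate())  # 0.2
```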

For regulated industries, this is also the point where the model risk and governance work intersects directly with workflow design. The places where humans intervene are usually the places where audit, compliance, and supervisory expectations land. If you design these touchpoints well, governance becomes an accelerator of AI deployment. If you design them badly, governance becomes the brake everyone fears.

Step 5 — Specify the data and feedback requirements

Translate the redesigned workflow into specific data requirements. What does the system need to know, and when? What does it need to capture, and how? Where does the output of one decision become the input to the next? What feedback loops need to exist so the system learns from its own outcomes?
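One of the simplest of these requirements, the feedback record linking a system output and a human action to the eventual outcome, can be sketched as follows. The schema here is a hypothetical minimum (`case_id`, `system_output`, `human_action`, `final_outcome` are illustrative field names, not a standard):

```python
import json
from datetime import datetime, timezone

def feedback_record(case_id, system_output, human_action, final_outcome):
    """Minimum fields needed to link an outcome back to the decision that produced it."""
    return {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_output": system_output,   # what the system proposed
        "human_action": human_action,     # e.g. "approve", "override:<new value>"
        "final_outcome": final_outcome,   # what actually happened downstream
    }

record = feedback_record(
    "RPT-2024-0137", "classify:retail", "override:corporate", "accepted_by_regulator"
)
print(json.dumps(record, indent=2))
```

If a record like this cannot be captured automatically at each decision point, the workflow has no feedback loop, and the system cannot learn from its own outcomes.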

This is where the workflow redesign meets the data layer (which we'll cover in detail in Module 5). If the data infrastructure cannot support the redesigned workflow, you have a choice:

  • Scope down the redesign. Faster, cheaper, but lower ceiling — and risks producing a half-redesign.
  • Invest in the data layer alongside the workflow rebuild. More work, longer timeline, but the only path that actually compounds.

We almost always recommend the second option in serious engagements, because the first leads to the same kind of compromised, non-compounding deployment that the redesign was meant to avoid in the first place.

Step 6 — Plan the change, not just the system

The hardest part of AI-native redesign is rarely the technology. It is the organisational shift required to operate the new workflow. Roles change. Decision rights change. Performance metrics change. Career paths change. Reporting lines often change.

A workflow redesign that doesn't include the change-management work is a workflow redesign that will get rejected by the organisation — or, worse, accepted on paper and then quietly subverted in practice.

The change plan should answer four questions:

  1. Who owns the redesigned workflow end-to-end? Diffuse ownership is the most common failure mode.
  2. How do existing roles change? Be specific. "Operator" becomes "exception handler and feedback curator" is a real job change with training, hiring, and incentive implications.
  3. What does success look like, and how is it measured? If the metrics still reward the old behaviours, the old behaviours will return.
  4. How do you handle the transition period? AI-native workflows usually need to run alongside legacy versions for some time. This is harder than it sounds.
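The transition period in question 4 is usually run as a shadow comparison: both workflows process the same cases and every divergence is reviewed. A minimal sketch, where the `shadow_run` helper and the toy legacy/AI functions are hypothetical stand-ins:

```python
def shadow_run(cases, legacy_fn, ai_fn):
    """Run both workflows on the same cases and report where they diverge.

    Divergences are review items, not automatic errors: either side may be right,
    and each one is a chance to tune the new workflow or retire the old one."""
    divergences = []
    for case in cases:
        old, new = legacy_fn(case), ai_fn(case)
        if old != new:
            divergences.append((case, old, new))
    return divergences

cases = [1, 2, 3, 4]
legacy = lambda x: "flag" if x > 2 else "pass"
ai_native = lambda x: "flag" if x > 3 else "pass"
print(shadow_run(cases, legacy, ai_native))
# [(3, 'flag', 'pass')]
```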

Worked example: regulatory reporting

To make this concrete, here is what the framework looks like applied to a regulatory reporting workflow at a mid-sized European bank we worked with.

Step 1 (Honest map): 12 source systems, 8 spreadsheets, 4 manual validation steps, 2 escalation paths, 3 sign-off committees, and an end-of-quarter sprint that consumed roughly 60% of the team's time. The official documentation showed 6 steps. The actual lived workflow had 31.

Step 2 (Decision and information flows): The cognitive backbone reduced to 7 decision points: data validity, classification, threshold checks, anomaly detection, narrative generation, materiality judgment, and final attestation. Everything else was data movement.

Step 3 (AI-native re-architecture): The 7 decision points stayed; the 24 data-movement steps collapsed into a continuous data layer. Anomaly detection and narrative generation became system defaults. Materiality judgment and attestation stayed human, with the human's job shifting from "build the report" to "challenge the system's draft and own the sign-off."

Step 4 (Human-in-the-loop boundaries): Materiality and attestation are judgment-bound and regulator-bound. They got more time, more context, and a structured challenge interface. Classification became an approve/override interface with feedback going back into the model.

Step 5 (Data requirements): A normalised reporting layer with continuous freshness from the 12 source systems. A feature store for the anomaly model. A feedback log linking SME overrides back to the model. None of this existed; building it was the bulk of the engagement.

Step 6 (Change plan): The team shrank from 18 to 11. Three roles disappeared (data wranglers). Five new roles emerged (model curators, reporting SMEs, attestation owners). The end-of-quarter sprint disappeared. Cycle time went from 14 days to 3.

This is what redesign looks like. It is not "we added an AI tool to the existing process." It is a different operating pattern.

What's next

In Module 4 we will go deep on decision rights in an AI-native operating model — how to redraw who decides what, where the system decides by default, and where humans must intervene.

Module Quiz

5 questions — Pass mark: 60%

Q1. What is the FIRST step in the workflow redesign framework?

Q2. Why is it dangerous to start redesign from the documented version of a process?

Q3. What is the second-to-last step (Step 5) in the framework?

Q4. When mapping decision and information flows (Step 2), what are you trying to find?

Q5. Why must the redesign include change management (Step 6), not just system design?