The diagnostic question
In Module 1 we made the case that AI enablement is about redesigning the production function, not bolting AI onto existing workflows. The obvious next question is: how do you tell, in any specific workflow, which one you are actually doing? In our engagements we use a single diagnostic question that almost always cuts through the noise:
If you removed the AI tool tomorrow, how much would change about how the work gets done?
If the answer is "the same steps, just slower," you are in augmentation mode. If the answer is "the workflow would have to be completely rebuilt around human operators again," you are in redesign mode.
That sounds trivially simple, but in practice almost every organisation we work with thinks it is doing redesign and is actually doing augmentation. They have introduced AI in a way that feels transformative because the speed of the augmented step is genuinely better — but the workflow that the step sits inside is unchanged. Take the tool away, and the team would carry on doing essentially the same job, just less efficiently. That is the signature of augmentation.
This module is a working tool. By the end, you should be able to walk into any workflow in your organisation and tell, within 30 minutes, whether the AI work being done there is augmentation or genuine redesign.
What augmentation looks like
Augmentation has a very recognisable shape. The workflow continues to run the way it always did. The same people do the same steps in the same order. At one or two of those steps, a human operator now has an AI tool available — a copilot, a chatbot, a recommender, an autocomplete — that lets them do that specific step faster or more accurately.
Concrete examples we see all the time in financial services:
- A KYC analyst gets a tool that pre-classifies customer documents. The analyst still reviews each one, still owns the case, still feeds the result into the same downstream system. Step is faster. Workflow is unchanged.
- A regulatory reporting team gets a tool that drafts narrative commentary. An analyst still curates, edits, signs off, and submits. Step is faster. Workflow is unchanged.
- A sales rep gets call summaries auto-generated after each meeting. The CRM still has the same fields. The pipeline still moves the same way. Step is faster. Workflow is unchanged.
In all of these cases, the gain is real but bounded. The team improves a single number — average handling time, cycle time, cost per item — by 20–40%. Sometimes more. Then the gain stops. Cost per item plateaus. You can't squeeze any more out of the augmented step, and the surrounding workflow can't absorb the speed-up because its other constraints are still there.
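The arithmetic behind that plateau is worth making explicit. If the augmented step is only a fraction of the end-to-end cycle, even a large speed-up at that step moves the total by much less — and no speed-up at that step can ever beat the step's share of the cycle. The sketch below uses illustrative numbers (a step that is 30% of cycle time, made 1.4x faster); none of them come from a specific engagement.

```python
# Illustrative only: why a big local speed-up yields a bounded end-to-end gain
# when the surrounding workflow is unchanged. All numbers are hypothetical.

def end_to_end_gain(step_share: float, step_speedup: float) -> float:
    """Fraction by which total cycle time falls when one step, accounting for
    `step_share` of the cycle, becomes `step_speedup` times faster."""
    new_total = (1 - step_share) + step_share / step_speedup
    return 1 - new_total

# Augmented step is 30% of cycle time; AI makes it 1.4x faster.
gain = end_to_end_gain(step_share=0.30, step_speedup=1.4)
print(f"End-to-end cycle time falls by {gain:.1%}")  # roughly 8.6%

# Even an infinitely fast step can never save more than its 30% share.
ceiling = end_to_end_gain(step_share=0.30, step_speedup=1e9)
print(f"Theoretical ceiling: {ceiling:.1%}")
```

This is the same logic as Amdahl's law for parallel computing: the unimproved portion of the workflow dominates as soon as the improved portion gets fast. Redesign attacks the unimproved portion; augmentation does not.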
There is nothing wrong with augmentation as a starting point. In fact, it is often the right starting point. The mistake is twofold:
- Confusing it with transformation. Augmentation produces local efficiency. It does not produce structural advantage. Treating it as if it does will lead you to scale up something that doesn't compound.
- Stopping there. Many organisations spend their entire AI budget on augmentation projects, declare success when each one delivers, and never get to redesign. Then they are surprised when their AI investment doesn't show up in the macro numbers.
What redesign looks like
Redesign starts from a different question: not "where can I add AI to this workflow?" but "what would this workflow look like if it were being designed today, with AI as a native capability?"
The answer is almost always different from the current shape of the workflow. Here is what it tends to look like in practice:
- Information flows continuously, rather than in discrete batches. Inputs are processed as they arrive; the system does not wait for a human to advance the work.
- The system generates the default output for most steps. A draft, a recommendation, a routing decision, a flag — produced by the system, not by a human.
- Humans intervene at exception points. Not at every step, not on a fixed schedule, but where their judgment adds value or where accountability requires their sign-off.
- The unit of human work changes. Instead of doing one task many times (and doing it faster with AI), the human now handles the cases the system couldn't, curates the feedback that improves the system, and supervises the system's overall behaviour.
- The data flow is different. The information needed at each step is in a form the system can use, in real time, without human reconstruction.
- The KPIs are different. Throughput, exception rate, escalation rate, model confidence distributions, feedback loop velocity — not just average handling time.
This shape is recognisably different from the augmented version. If you removed the AI tomorrow, the redesigned workflow could not function — there is no one in the organisation prepared to do the work the system was doing. That is the test.
Examples in financial services:
- KYC redesigned: documents are ingested as they arrive, classified and enriched automatically, routed to analysts only when the system's confidence is below threshold, with the analyst's job structurally focused on novel and ambiguous cases. The system handles the routine 70–80% end-to-end. The team is smaller but each role is more specialised, more consequential, and harder to do.
- Regulatory reporting redesigned: data flows from source systems into a normalised reporting layer continuously; narrative drafts are generated when underlying numbers move beyond expected ranges; SMEs review the narratives and the underlying anomalies, sign off, and submit. The cycle time goes from weeks to days. The role of the SME shifts from drafting to attestation and judgment.
- Customer ops redesigned: incoming requests are categorised, enriched with customer context, matched against historical resolutions, and resolved by the system where possible. Agents handle exceptions and complex relationships. The agent role becomes structurally more demanding and more valuable.
In all three cases, the unit of change is the workflow, not the task. And in all three cases, the AI gains compound — because the system itself is now structured to learn from its own output.
The half-redesign trap
There is one specific failure pattern we see often enough that it deserves its own warning: the half-redesign.
A team starts with the right ambition. They map the workflow. They identify the AI-native version. They begin to rebuild it. Then they hit a step that requires changing something owned by another team — a system, a hand-off, a definition, a control — and they decide to "scope that out" to keep the project moving. They preserve that step in its original form and redesign everything around it.
Within months, that preserved step becomes the binding constraint. Throughput doesn't materially improve because the legacy hand-off is still there. The team concludes that the redesign didn't deliver. The redesign gets blamed for what was actually a politically driven scoping decision. The organisation goes back to augmentation.
The lesson is sharp: if you cannot redesign the workflow end-to-end, including the parts that touch other teams and other systems, you should probably not do redesign yet. Stay in augmentation mode. Build executive sponsorship. Wait until you can do the whole thing. The half-redesign is the worst of both worlds — it costs as much as the full redesign and produces a small fraction of the value.
A working exercise
Pick one workflow in your organisation that already has AI in it somewhere. Now answer the diagnostic question honestly: if you removed the AI tomorrow, would the team carry on doing essentially the same work, just slower? If yes, you are in augmentation mode for that workflow. That is fine — but be clear about it. Don't budget for it as if it were transformation.
Then ask the redesign question: if you were starting that workflow from scratch today, with AI as a native capability, what would change? What steps would disappear? What hand-offs would become unnecessary? What decisions would the system make by default? What would the human role become?
You don't have to act on the answer yet. The goal of this module is to make sure you can see clearly which mode you are in.
What's next
In Module 3 we'll walk through a complete six-step framework for redesigning a workflow from augmentation mode to AI-native — with worked examples in customer ops, regulatory reporting, and KYC.