Adoption · 8 May 2026 · 3 min read

You've got Copilot. Here's the adoption gap nobody talks about.

The licence count looks fine. The training was delivered. So why is nobody using it? Three quiet failure modes I see in nearly every stalled Copilot rollout.

By James Wilkinson

Most of the conversations I have with leaders of professional services firms about Copilot follow the same arc.

It starts confident. “We have rolled it out.” Then a pause. “Well, we’ve licensed it. We did the training. The team is using it.”

I ask how they know. Long silence. Sometimes a slightly defensive answer about how it’s hard to measure. Sometimes an admission that the usage data is, honestly, embarrassing.

There is a gap that opens up in nearly every Copilot rollout I am asked to look at. It is not a gap of strategy or budget. It is a quiet, structural gap between the version of adoption that the slide deck describes and the version that is actually happening at desks. The same three things cause it, again and again.

One: the prompts people learned in training don’t match the work they actually do

Most Copilot training I have seen (including, in fairness, some I delivered earlier in my career) relies on example prompts. “Summarise this meeting.” “Draft a reply to this email.” “Write a project plan for this brief.”

The prompts work in the room. They are designed to. The problem is that nobody in the room does generic work. Real work is "I need to write to this specific client about this specific issue without making the partnership relationship worse, and I need to do it in the next forty minutes." Generic prompts don't help with that. They help people see what Copilot can do, which is useful for about a week. Then people hit the wall of actual work and find that the example prompts don't translate.

The fix is not better prompts. The fix is training that uses the actual work. The email someone is actually trying to send, the bid they are actually trying to win, the file they are actually trying to find. That requires a slower, less scalable form of training, which is why most firms don’t do it.

Two: there is no second line of support

Person tries Copilot. It does something slightly wrong. Person doesn’t have a quick way to find out why. Person concludes Copilot is unreliable. Person goes back to doing it manually.

This sequence plays out, in micro, hundreds of times a week in firms that have technically rolled out Copilot. Each individual instance is small. The cumulative effect is decisive: it is the difference between a firm where Copilot is part of how work gets done and a firm where Copilot is the thing that occasionally helps with email subject lines.

The fix is not training documentation, although that helps. The fix is a real, accessible human (internal or external) whose job is to answer "why didn't this work?" within hours, not weeks. In most firms that question goes through IT, and IT has too many other things on. So nothing happens. So usage tails off.

Three: nobody has decided what good looks like

Ask a firm what good Copilot usage looks like at month six and you will mostly get vague answers. “People using it more.” “Some efficiency gains.” “It’s part of the workflow.”

These are not measures. They are hopes.

The firms where adoption actually compounds have answered the question concretely. Sometimes the answer is a number. Copilot is used in 80% of client-meeting follow-ups, or has reduced average proposal turnaround by a day, or has cut x hours out of compliance reviews. Sometimes it is a description. “Every analyst should be using Copilot daily for first-pass drafts.” The specifics matter less than the fact that somebody decided. The decision is what allows you to know whether what you are doing is working, and what allows you to fix it when it isn’t.

A firm that has not defined what good looks like has, in effect, made it impossible to know whether the rollout is succeeding. Which means it is impossible to course-correct. Which means it almost certainly isn’t.

The honest summary

If your Copilot rollout has gone quiet, the problem is almost never the technology. The technology is genuinely impressive now in a way it wasn’t twelve months ago. The problem is usually one or more of these three: training disconnected from work, no fast support loop, no definition of success.

Each of them is fixable. None of them is fixed by buying more licences.

That is the work behind a proper Copilot adoption plan and Copilot training programme: clear use cases, live support and a way to tell whether habits are actually changing.