Copilot · 16 May 2026 · 7 min read

What AI models power Microsoft 365 Copilot?

A practical explanation of the AI models behind Microsoft 365 Copilot, why the answer changes over time and what leaders should actually care about.

By James Wilkinson

The honest answer is: Microsoft 365 Copilot uses more than one model, and the exact model behind a response can change.

That is not Microsoft being evasive. It is how modern enterprise AI products work. Copilot sits across chat, Word, Excel, PowerPoint, Outlook, Teams, agents, search and other Microsoft 365 experiences. Different tasks can call for different model behaviour.

Microsoft’s Microsoft 365 Copilot application card says Copilot uses a variety of AI models, including models from Azure OpenAI Service and Anthropic, and points customers to model cards and data summaries for more detail.

The model is only one part of Copilot

It is tempting to ask “is Copilot using GPT-5?” and stop there. That is understandable, but incomplete.

The value of Microsoft 365 Copilot comes from the system around the model:

  • The user’s prompt
  • The Microsoft 365 app they are working in
  • The files, emails, meetings and chats the user has permission to access
  • The retrieval and grounding layer that brings relevant context into the response
  • The safety, compliance and tenant controls around the experience
  • The model or models selected for the task

In other words, Copilot is not just a chatbot with a Microsoft logo. It is an orchestration layer across Microsoft 365.

What about GPT-5 and GPT-5.2?

Microsoft’s Copilot Chat guidance currently says Copilot Chat includes access to models such as GPT-5, and Microsoft’s release notes have described GPT-5.2 availability in the Copilot Chat model selector. Model availability changes over time, so the safest operational rule is this: do not build your rollout plan around a single model name.

Build around use cases, controls and user behaviour.

The model matters, of course. Better reasoning, better instruction following and better context handling all improve the experience. But for most organisations, the bigger blocker is not whether the model is one version ahead. It is whether the user knows what to ask, whether the source material is reliable and whether Copilot has permission to reach the right content.

What leaders should care about

If you are responsible for Copilot adoption, the useful questions are:

  • What data can Copilot reach for this user?
  • Is the SharePoint structure clean enough to ground answers properly?
  • Are permissions based on current job roles or historical convenience?
  • Do staff understand that Copilot output still needs review?
  • Are people using Copilot for repeatable workflows or just occasional novelty prompts?
  • Can you measure whether the tool is changing work?

Those questions are less glamorous than model names, but they decide whether Copilot is useful.

What staff should be told

Staff do not need a lecture on model architecture before they can use Copilot well. They need a simple mental model:

Copilot uses advanced AI models, but it is only as useful as the instructions and context you give it. It can help draft, summarise, compare and organise. It can also be confidently wrong, miss context, over-smooth nuance or use the wrong source if your information estate is messy.

That is why good Copilot training should include:

  • How to give context
  • How to ask for structure
  • How to ask Copilot to show its assumptions
  • How to check output against source material
  • When not to use Copilot

The practical answer

So, what AI models power Microsoft 365 Copilot?

As of May 2026, Microsoft describes Copilot as using multiple AI models, including models from Azure OpenAI Service and Anthropic, with GPT-family models visible in current Copilot Chat guidance and release notes. The exact model can vary by experience, tenant, feature and timing.

For adoption purposes, that is enough detail for most leaders.

The better question is not “which model is it?” The better question is “what work are we asking Copilot to improve, and have we given people the training, permissions and content structure to use it properly?”

That is why model questions should feed into Copilot adoption strategy and, where the content estate is messy, SharePoint readiness work before a wider rollout.