🚀 1. Microsoft Copilot Isn’t Just GPT-Powered Anymore

Until recently, Microsoft 365 Copilot and Copilot Studio leaned heavily on OpenAI’s GPT models (including the latest versions like GPT-5). Now, Anthropic’s Claude models (such as Claude Sonnet 4 / 4.5 and Claude Opus 4.1) have been added as optional model choices in key Copilot experiences.

This represents a shift from a single-model focus toward a multi-LLM (large language model) strategy, giving organisations more options for how AI does work inside Copilot.

Anthropic models are AI language models built by a company called Anthropic.

Just like:

  • GPT models are built by OpenAI
  • Gemini models are built by Google

…Claude models (Sonnet, Opus, etc.) are built by Anthropic.

So when you see the phrase “Anthropic models”, it simply means ‘AI models created and maintained by Anthropic, rather than by OpenAI or Microsoft.’


🧠 2. Where Anthropic Models Are Available

There are two principal places where, as of late 2025, you’ll see Anthropic models within Microsoft Copilot tooling:

1) Microsoft 365 Copilot Researcher Agent

  • The Researcher experience in Microsoft 365 Copilot (used for deep, multi-step research across your content, such as emails, chats, and files) can now run on either OpenAI’s reasoning models or Anthropic’s Claude Opus 4.1.
  • Users can opt in and switch between model options for specific reasoning tasks.

2) Microsoft Copilot Studio Custom Agent Development

  • Copilot Studio’s agent authoring experience now supports selecting Claude Sonnet and Claude Opus models for custom AI agents.
  • This lets developers and consultants build agents that use Anthropic models for reasoning, orchestration, workflows and automation tasks (a minimal sketch of the underlying model call follows the note below).

Important: Anthropic models are currently preview / external models in Copilot Studio and must be enabled by tenant admins. They aren’t yet the default in production experiences.
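
To give a feel for what “an agent reasoning with Claude” means under the hood, here is a minimal sketch in Python using Anthropic’s Messages API. In Copilot Studio the selection is a dropdown in the authoring experience rather than code, and the model alias shown here is an assumption for illustration only.

```python
# Minimal sketch: the kind of call that sits behind "this agent reasons with Claude".
# Assumes the official `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment;
# the model alias below is an assumption and may differ in your tenant or region.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",   # assumed alias; Copilot Studio surfaces this as a dropdown choice
    max_tokens=1024,
    system="You are a research agent that answers only from the documents provided.",
    messages=[{"role": "user", "content": "Summarise the key risks in the attached market brief."}],
)

print(response.content[0].text)  # the agent's reasoned answer
```

The point is not that you would call the API directly from Copilot Studio (you wouldn’t); it’s that the agent’s reasoning step becomes a swappable model behind a consistent interface.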


🛠 3. Admin Controls & Data Considerations

Because Anthropic’s Claude models are hosted outside Microsoft’s managed environments, your data processing and compliance boundaries can differ from Microsoft’s native models:

  • Admins must opt in to enable Anthropic access at the tenant level.
  • In Copilot Studio, you control access per environment and can manage whether Anthropic models are available to makers.
  • Since Claude models run outside Microsoft’s compliance framework, data handling is subject to Anthropic’s terms of service and data processing agreements, not Microsoft’s default enterprise protections.

This is not inherently negative, but it does change how enterprise risk, compliance, and data governance are framed when you choose external models.


🧩 4. Why This Matters for Business & Consultants

Here’s what your stakeholders should understand:

🌐 Expanded Model Choice = More Flexibility Not all business tasks are best served by a single model. Some models are stronger at certain reasoning types, e.g. long content summarisation, multi-step analysis, or agentic workflows. Anthropic models bring alternative strengths into Copilot that you can choose instead of defaulting to OpenAI only.

📊 Tailored Agent Behaviour In Copilot Studio, choosing a different model isn’t just cosmetic; it affects how an agent reasons, executes workflows, and handles complex queries. That gives consultants a lever to fine-tune outcomes for specific business problems.

⚖️ Compliance & Risk Changes Because external models are handled outside Microsoft’s data governance perimeter, organisations must think about compliance and data residency differently if they want to enable Anthropic models.

🧪 Preview Gap Means Caution Anthropic models are currently in preview for many customers and may not yet be suitable as the primary production model, especially for regulated industries or high-compliance use cases.


📣 5. What to Communicate to Your Teams / Clients

When you’re briefing stakeholders on this shift:

✔️ Choice is now a feature, not just performance. Different tasks may benefit from different models.

✔️ Copilot remains Copilot, but it’s more flexible. The core orchestration still manages how prompts, tools and plugins interact; the model choice is one part of that pipeline (a toy sketch follows this list).

✔️ It’s early days: preview status and governance matter. Plan pilots around use-case performance, compliance impact, and cost, not just novelty.
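
To make the “one part of the pipeline” point concrete, here is a deliberately simplified sketch, not Microsoft’s actual orchestration code, showing model choice as a single swappable parameter while the surrounding orchestration stays the same. Every name in it is hypothetical.

```python
# Illustrative only: a toy view of an agent pipeline where the model id is just
# configuration. AgentConfig and call_model are hypothetical and not part of
# any Microsoft or Anthropic API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentConfig:
    name: str
    model: str          # e.g. an OpenAI or Anthropic model id chosen by the maker
    instructions: str

def answer(config: AgentConfig, question: str,
           call_model: Callable[[str, str], str]) -> str:
    """Orchestration stays fixed: ground the prompt, then delegate to whichever model is configured."""
    prompt = f"{config.instructions}\n\nUser question: {question}"
    return call_model(config.model, prompt)

# Swapping models is a configuration change, not a rebuild:
# answer(AgentConfig("research", "claude-opus-4-1", "Cite sources."), "...", my_model_caller)
```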





🧭 6. Practical Example Scenarios

Use Case: Deep Research Agent Rather than the default reasoning model, an analysis or market research agent could run on Claude Opus 4.1 to see if it produces different reasoning flows on long, structured content.

Use Case: Automated Workflow Agent A custom Copilot Studio agent built for workflow automation might test whether Claude Sonnet handles multi-step orchestration logic more effectively for specific enterprise tasks.

Each choice should be validated with metrics, business outcomes, and governance checkpoints, not assumed from brand reputation or headline benchmarks.
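
One way to do that validation is a small offline harness that sends the same prompt to each candidate model and records basic metrics before anything is enabled in Copilot. The sketch below uses Anthropic’s Python SDK; the model aliases, prompt, and metric choices are assumptions for illustration, and a real pilot would add task-specific quality scoring.

```python
# Minimal A/B sketch: same prompt, two candidate models, simple metrics.
# Assumes the `anthropic` SDK and an ANTHROPIC_API_KEY; model aliases are assumed.
import time
import anthropic

client = anthropic.Anthropic()

CANDIDATE_MODELS = ["claude-sonnet-4-5", "claude-opus-4-1"]   # assumed aliases
PROMPT = "Summarise the three biggest regulatory changes in this market brief: ..."

def run_trial(model: str, prompt: str) -> dict:
    """Send one prompt to one model and capture latency, token usage, and the answer."""
    start = time.perf_counter()
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "model": model,
        "latency_s": round(time.perf_counter() - start, 2),
        "input_tokens": response.usage.input_tokens,
        "output_tokens": response.usage.output_tokens,
        "answer": response.content[0].text,
    }

if __name__ == "__main__":
    for model in CANDIDATE_MODELS:
        result = run_trial(model, PROMPT)
        print(f"{result['model']}: {result['latency_s']}s, "
              f"{result['output_tokens']} output tokens")
```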


TL;DR — In Business Terms

| Aspect         | What Changed                                                                                  |
| -------------- | --------------------------------------------------------------------------------------------- |
| Model Options  | Copilot now supports Anthropic Claude models *in addition* to OpenAI models. ([Microsoft][1]) |
| Where          | Microsoft 365 Copilot Researcher + Copilot Studio agents. ([Microsoft Learn][2])              |
| Requirement    | Admin opt-in, preview status, external data processing. ([M365 Admin][3])                     |
| Why It Matters | Flexibility for building and tailoring AI experiences — with governance awareness.            |

[1]: https://www.microsoft.com/en-us/microsoft-365/blog/2025/09/24/expanding-model-choice-in-microsoft-365-copilot/ "Expanding model choice in Microsoft 365 Copilot | Microsoft 365 Blog"
[2]: https://learn.microsoft.com/en-us/microsoft-copilot-studio/authoring-select-external-response-

