1. Microsoft Copilot Isn't Just GPT-Powered Anymore
Until recently, Microsoft 365 Copilot and Copilot Studio leaned heavily on OpenAI's GPT models (including the latest versions like GPT-5). Now, Anthropic's Claude models (such as Claude Sonnet 4 / 4.5 and Claude Opus 4.1) have been added as optional model choices in key Copilot experiences.
This represents a shift from a single-model focus toward a multi-LLM (large language model) strategy, giving organisations more choice over which model does the work inside Copilot.
Anthropic models are AI language models built by a company called Anthropic.
Just like:
- GPT models are built by OpenAI
- Gemini models are built by Google
…Claude models (Sonnet, Opus, etc.) are built by Anthropic.
So when you see the phrase "Anthropic models", it simply means "AI models created and maintained by Anthropic, rather than by OpenAI or Microsoft."
2. Where Anthropic Models Are Available
As of late 2025, there are two principal places you'll see Anthropic models within Microsoft Copilot tooling:
1) Microsoft 365 Copilot Researcher Agent
- The Researcher experience in Microsoft 365 Copilot (used for deep, multi-step research across your content, such as emails, chats and files) can now run on either OpenAI's reasoning models or Anthropic's Claude Opus 4.1.
- Users can opt in and switch between model options for specific reasoning tasks.
2) Microsoft Copilot Studio Custom Agent Development
- Copilot Studio's agent authoring experience now supports selecting Claude Sonnet and Claude Opus models for custom AI agents.
- This lets developers and consultants build agents that use Anthropic models for reasoning, orchestration, workflow and automation tasks (see the illustrative sketch below).
Important: In Copilot Studio, Anthropic models are currently preview, externally hosted models and must be enabled by tenant admins. They aren't yet the default in production experiences.
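To make the idea of model choice concrete, here is a minimal sketch in Python. It is purely conceptual: the model identifiers, the TASK_MODEL_PREFERENCES table and the route_model helper are hypothetical illustrations of a multi-LLM strategy, not part of any Copilot Studio API (there, model selection happens in the authoring UI).

```python
# Illustrative only: the model names and routing table below are hypothetical
# placeholders for a multi-LLM strategy, not real Copilot Studio settings.

TASK_MODEL_PREFERENCES = {
    "deep_research": "claude-opus-4.1",          # long, multi-step reasoning
    "workflow_automation": "claude-sonnet-4.5",  # orchestration-heavy agents
    "default": "gpt-5",                          # tenant default / fallback
}

def route_model(task_type: str, anthropic_enabled: bool) -> str:
    """Pick a model for a task, falling back to the default when the
    tenant admin has not opted in to Anthropic (external) models."""
    model = TASK_MODEL_PREFERENCES.get(task_type, TASK_MODEL_PREFERENCES["default"])
    if model.startswith("claude") and not anthropic_enabled:
        return TASK_MODEL_PREFERENCES["default"]
    return model

print(route_model("deep_research", anthropic_enabled=True))   # claude-opus-4.1
print(route_model("deep_research", anthropic_enabled=False))  # gpt-5
```

The point of the sketch is the design choice: treat the model as a per-task, per-policy decision with a safe fallback, rather than a hard-coded default.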
3. Admin Controls & Data Considerations
Because Anthropic's Claude models are hosted outside Microsoft's managed environments, your data-processing and compliance boundaries can differ from those that apply to Microsoft's native models:
- Admins must opt in to enable Anthropic access at the tenant level.
- In Copilot Studio, you control access per environment and can manage whether Anthropic models are available to makers.
- Since Claude models run outside Microsoft's compliance framework, data handling is subject to Anthropic's terms of service and data processing agreements, not Microsoft's default enterprise protections.
This is not inherently negative, but it does change how enterprise risk, compliance, and data governance are framed when you choose external models.
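As a rough way to reason about these controls, here is a hypothetical governance sketch. The ENVIRONMENT_POLICY table, EXTERNAL_MODEL_FAMILIES set and is_model_allowed function are invented for illustration; real enforcement lives in the Microsoft 365 admin centre and Power Platform environment settings, not in code you write.

```python
# Hypothetical governance model: which model families each environment may use.
# In practice these switches live in tenant and environment admin settings.
ENVIRONMENT_POLICY = {
    "production": {"allow_external_models": False},
    "sandbox": {"allow_external_models": True},
}

EXTERNAL_MODEL_FAMILIES = {"claude"}  # hosted outside Microsoft's boundary

def is_model_allowed(environment: str, model_name: str) -> bool:
    """Return True if the environment's (hypothetical) policy permits this model."""
    policy = ENVIRONMENT_POLICY.get(environment, {"allow_external_models": False})
    family = model_name.split("-")[0]
    if family in EXTERNAL_MODEL_FAMILIES:
        return policy["allow_external_models"]
    return True  # Microsoft-hosted models stay under the default protections

assert is_model_allowed("sandbox", "claude-sonnet-4.5") is True
assert is_model_allowed("production", "claude-sonnet-4.5") is False
assert is_model_allowed("production", "gpt-5") is True
```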
4. Why This Matters for Business & Consultants
Here's what your stakeholders should understand:
Expanded Model Choice = More Flexibility. Not all business tasks are best served by a single model; some models are stronger at certain reasoning types, e.g. long-content summarisation, multi-step analysis, or agentic workflows. Anthropic models bring alternative strengths into Copilot that you can choose instead of defaulting to OpenAI only.
Tailored Agent Behaviour. In Copilot Studio, choosing a different model isn't just cosmetic: it affects how an agent reasons, executes workflows, and handles complex queries. That gives consultants a lever to fine-tune outcomes for specific business problems.
Compliance & Risk Changes. Because external models are handled outside Microsoft's data governance perimeter, organisations must think about compliance and data residency differently if they want to enable Anthropic models.
Preview Status Means Caution. Anthropic models are currently in preview for many customers and may not yet be suitable as the primary production model, especially for regulated industries or high-compliance use cases.
5. What to Communicate to Your Teams / Clients
When you're briefing stakeholders on this shift:
- Choice is now a feature, not just performance. Different tasks may benefit from different models.
- Copilot remains Copilot, but it's more flexible. The core orchestration still manages how prompts, tools and plugins interact; the model choice is one part of that pipeline.
- It's early days: preview status and governance matter. Plan pilots around use-case performance, compliance impact, and cost (not just novelty).
6. Practical Example Scenarios
Use Case: Deep Research Agent. Rather than the default reasoning model, an analysis or market-research agent could run on Claude Opus 4.1 to see if it produces different reasoning flows on long, structured content.
Use Case: Automated Workflow Agent. A custom Copilot Studio agent built for workflow automation might test whether Claude Sonnet handles multi-step orchestration logic more effectively for specific enterprise tasks.
Each choice should be validated with metrics, business outcomes, and governance checkpoints, not assumed based on brand or performance benchmarks.
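One way to keep that validation honest is a small side-by-side evaluation harness. The sketch below is generic Python and assumes nothing about Copilot itself: the agent_a / agent_b functions stand in for whatever pilot agents you build, and mentions_required_terms is a placeholder for whichever business metric actually matters to you.

```python
from statistics import mean
from typing import Callable

ScoreFn = Callable[[str], float]

def evaluate(run_agent: Callable[[str], str],
             test_cases: list[tuple[str, ScoreFn]]) -> float:
    """Run each prompt through an agent and average the scores (0.0 to 1.0)."""
    return mean(score(run_agent(prompt)) for prompt, score in test_cases)

# Stand-ins for two pilot agents built on different models; in a real pilot
# these would call your Copilot Studio agents or their underlying endpoints.
def agent_a(prompt: str) -> str:
    return "Summary covering revenue, margin and headcount."

def agent_b(prompt: str) -> str:
    return "Summary covering revenue only."

# A scoring function that encodes a concrete business outcome.
def mentions_required_terms(answer: str) -> float:
    required = ["revenue", "margin", "headcount"]
    return sum(term in answer.lower() for term in required) / len(required)

test_cases = [("Summarise the Q3 board pack.", mentions_required_terms)]

print("agent A:", evaluate(agent_a, test_cases))  # 1.0
print("agent B:", evaluate(agent_b, test_cases))  # ~0.33
```

Running the same test cases against both pilot agents gives you a number to discuss at a governance checkpoint, instead of an impression.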

