Microsoft 365 Copilot is a third-party data processor with the access scope of a privileged employee — and most enterprises are not pricing the exposure on the right side of the ledger.
Why M365 Copilot Per-Tenant Policy Matters Now
Copilot is the most quietly privileged software running inside the average enterprise tenant. By design, it inherits the user’s full M365 OAuth scope: every SharePoint document the user can read, every Teams chat they belong to, every Outlook thread they can search. There is no per-tool, per-resource policy gate between Copilot and the user’s identity envelope. The product works because it has that access. That access is the selling point.
The control gap is that Copilot then renders, summarizes, and moves that data into a hosted model pipeline. From a regulatory standpoint, Copilot is a data processor acting on the customer’s data. From a procurement standpoint, it does not appear on the third-party risk register at most enterprises: Microsoft is on the register, but Copilot’s specific processing pattern usually is not. This is the asymmetry.
| Exposure Vector | Magnitude | Source |
|---|---|---|
| SharePoint sites a typical user can technically read in a mature tenant | Hundreds, often by default | Microsoft documentation on tenant-wide sharing defaults |
| Breach cost premium when an AI assistant reaches overshared content | +$670K | Ospiri research, 2026 |
| Window before AI-assistant exposure compounds into a board-level incident | 12–18 months | Ospiri research, 2026 |
| Enterprises with Copilot rolled out faster than their TPRM cycle | Vast majority, in active deployments | Ospiri field observation |
So the question is not whether Copilot is secure. It is whether the existing control plane prices the new exposure at all.
Copilot Is Not Just a Feature. It Is a Per-User Data Processor.
The typical procurement frame treats Copilot as a Microsoft 365 feature — same SLA, same DPA, same risk profile as Word or Outlook. That frame breaks the moment you separate the application binary from what it actually does on the endpoint.
| Dimension | Word / Outlook | M365 Copilot |
|---|---|---|
| Reads user mailbox and files | On explicit user action | Autonomously, on every prompt |
| Sends content to a model | No | Yes, into a hosted inference pipeline |
| Inherits user OAuth scope | N/A | Full scope, by default |
| Has a per-tool, per-resource policy boundary | Yes (Outlook rules, SharePoint perms) | No native action-level policy gate |
| Logged as a data processor on TPRM register | N/A | Usually no |
| Blast radius when an over-permissioned user is breached | The user’s session | The user’s session, plus every document Copilot has retrieved across the tenant in the prompt window |
Read the bottom row twice. The blast radius is materially different.
The Three Failure Modes Most Tenants Already Have
These patterns surface in nearly every tenant audit, and none of them are exotic:
- Oversharing exposure surfaced by Copilot enumeration. A SharePoint site set to “Everyone except external users” (the Microsoft default for years) was readable by an employee with no business reason to see it. Copilot, prompted with “summarize what’s new on the legal site,” returns the entire library. The misconfiguration was always there; the agent is what makes it exploitable at machine speed. A minimal detection sketch follows this list.
- OAuth scope creep without an attestation review. Copilot is licensed per user. Each new license adds a fresh attack surface (full mailbox, full OneDrive, full Teams membership) without an identity governance review. If your IGA tooling does not trigger an attestation when Copilot is enabled for a user, the scope was never priced.
- TPRM register staleness. Copilot’s data flow is not on the register because it was procured as a feature SKU under the existing Microsoft master agreement. Auditors are increasingly flagging this gap under GDPR Article 5 (purpose limitation), CCPA’s “service provider” definitions, and the EU AI Act’s high-risk processing categories. The DPIA your DPO signed in 2022 did not contemplate per-prompt cross-resource retrieval.
The Risk Score for Copilot Exposure
Copilot Exposure = (User OAuth Scope × Sensitivity of Reachable Data) + (Prompt Frequency × Drift Coefficient)
| Factor | What to measure | Why it matters |
|---|---|---|
| User OAuth Scope | Number of distinct resources the user’s token can reach | The breadth term of the blast radius |
| Sensitivity of Reachable Data | Share of reachable content carrying Purview labels above Internal | The severity term of the regulatory exposure |
| Prompt Frequency | Median weekly Copilot prompts per user | How fast the exposure accrues |
| Drift Coefficient | Rate at which the user’s prompts are reading new resources versus baseline | The leading indicator of permission rot |
Two enterprises with the same Copilot license count can have a tenfold difference in this score. The license count is the wrong unit of risk. Mark-to-market the score quarterly and you stop sizing Copilot exposure by headcount.
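As a worked illustration of the formula above, here is a minimal scoring sketch. The 0-to-1 normalization, the caps, and the example numbers are assumptions chosen for readability, not a published standard; swap in your own Purview label counts and prompt telemetry.

```python
from dataclasses import dataclass

@dataclass
class CopilotUser:
    reachable_resources: int   # resources the user's token can reach
    sensitive_share: float     # share of reachable content labeled above Internal (0..1)
    weekly_prompts: int        # median weekly Copilot prompts
    drift: float               # rate of net-new resources read versus baseline (0..1)

def exposure_score(u: CopilotUser, scope_cap: int = 500, prompt_cap: int = 200) -> float:
    """Copilot Exposure = (scope x sensitivity) + (prompt frequency x drift).

    The caps are illustrative assumptions used only to normalize each factor to 0..1.
    """
    scope = min(u.reachable_resources / scope_cap, 1.0)
    frequency = min(u.weekly_prompts / prompt_cap, 1.0)
    return (scope * u.sensitive_share) + (frequency * u.drift)

# Two users on identical license lines, very different exposure:
junior_analyst = CopilotUser(reachable_resources=40, sensitive_share=0.05, weekly_prompts=30, drift=0.1)
deal_team_lead = CopilotUser(reachable_resources=450, sensitive_share=0.60, weekly_prompts=120, drift=0.4)
print(round(exposure_score(junior_analyst), 3))   # 0.019
print(round(exposure_score(deal_team_lead), 3))   # 0.78
```

On a seat-count report those two users are the same line item; on this score they are an order of magnitude apart.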
What the Architectural Answer Looks Like
The honest read is that Microsoft cannot solve this for you at the SaaS layer alone. Purview and SharePoint Advanced Management improve the data side. Conditional Access tightens the identity side. Defender for Cloud Apps inspects in transit. None of those is a per-action, per-resource policy gate that fires after Copilot has resolved a prompt and before it reaches the file. That gate has to live at the kernel scope on the endpoint where the user runs Office.
| Control Point | What It Sees | What It Can Enforce |
|---|---|---|
| Microsoft Purview labels | Data at rest, with metadata | Keep labeled content out of inference flows it can recognize |
| Conditional Access | Identity, device posture | MFA, compliant device, location |
| API-layer DLP (Defender for Cloud Apps, Microsoft Purview, Forcepoint) | API traffic between the tenant and cloud apps | Pattern-based block or redact in transit |
| Prompt guardrails (Lakera, Protect AI) | The prompt itself | Filter unsafe prompt patterns before submission |
| Kernel-scope agent firewall | The actual file read, the actual API call, the actual write | Per-tool, per-resource policy after the prompt has resolved |
The first four are necessary. The fifth is the missing primitive. It is the only layer that prices the action — not the intent.
This is not a critique of Microsoft’s stack. It is an observation that the control plane in the agent era cannot be a SaaS-layer concern alone, because the agent is, by design, executing as the user. The only place to enforce a policy distinct from the user’s identity is below the user — at the kernel.
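To make “per-tool, per-resource policy after the prompt has resolved” concrete, here is a minimal sketch of the decision such an endpoint gate has to make. The rule shape, field names, and labels are illustrative assumptions, not any vendor’s policy format; the point is that the inputs are the resolved action and resource, not the user’s token.

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass(frozen=True)
class AgentAction:
    tool: str        # e.g. "m365_copilot"
    operation: str   # "read", "write", "send"
    resource: str    # the concrete path or URL the agent is about to touch
    label: str       # sensitivity label on the resource, if any

# Illustrative rules, evaluated first-match at the point of action,
# after the prompt has resolved to a specific file read or API call.
RULES = [
    # (tool, operation, resource glob, label, decision)
    ("m365_copilot", "read", "*/sites/legal/*", "*",                   "deny"),
    ("m365_copilot", "read", "*",               "Highly Confidential", "deny"),
    ("m365_copilot", "*",    "*",               "*",                   "allow"),
]

def evaluate(action: AgentAction) -> str:
    """Return the first matching rule's decision; default deny if nothing matches."""
    for tool, op, resource_glob, label, decision in RULES:
        if (fnmatch(action.tool, tool) and fnmatch(action.operation, op)
                and fnmatch(action.resource, resource_glob) and fnmatch(action.label, label)):
            return decision
    return "deny"

print(evaluate(AgentAction(
    "m365_copilot", "read",
    "https://contoso.sharepoint.com/sites/legal/Shared Documents/nda.docx",
    "Confidential")))   # -> deny
```

Note what the decision does not depend on: the user’s OAuth scope. That is the property none of the SaaS-layer controls above can offer, because by the time they see anything, the agent is already acting as the user.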
What CISOs Should Do This Quarter
| Step | Action | Output | Effort |
|---|---|---|---|
| 1 | Add Copilot to the TPRM register as a per-user data processor with its own DPIA | TPRM line item, audit-ready | 1 week |
| 2 | Run a Purview oversharing audit before any further Copilot rollout | Tenant-level remediation backlog | 2 weeks |
| 3 | Compute the Copilot Exposure score for the top 10% riskiest users by scope and sensitivity | Risk-tiered user list, board-ready | 1 sprint |
| 4 | Pilot a kernel-scope agent firewall on those users’ endpoints | Per-action policy enforced before file read | 4 weeks |
The Bottom Line
Copilot is a third-party data processor wearing the colors of a productivity feature, and the TPRM register has not caught up. The OAuth scope inheritance, the absent per-action policy gate, and the procurement-pattern blind spot are not three problems — they are the same architectural problem expressed at three layers of the stack. The control plane has to migrate down to the kernel because that is the only layer that sees what the agent actually does, not what it was asked to do. Treat Copilot as a privileged employee, not a feature, and the rest of the policy follows.
If your team is sizing Copilot governance for the FY27 budget cycle, request a working session. In 90 minutes, we will walk through your tenant, compute the Copilot Exposure score for a sample of your users, and plan a kernel-scope pilot deployment.