Research · AI Agents · AI Strategy

Compliance is a system property, not a checkbox. What three weeks of Copilot news reveal about how enterprises should buy AI.

Article by Dr. Anoj Winston Gladius

Three news items in three weeks tell a story that no single one of them tells alone. GitHub announces that Copilot is moving to usage-based billing because agentic workloads are blowing up the existing pricing. Microsoft enables flex routing on Microsoft 365 Copilot, by default, sending European customer inference to US, Canadian or Australian data centres when EU capacity runs short. And Microsoft's own Terms of Use, updated quietly in October last year, instruct users not to rely on Copilot for important advice — that the product is "for entertainment purposes only." Read together, these are not three unrelated stories. They reveal something structural about what enterprises are actually buying when they buy a hyperscaler-bundled AI assistant. And they sharpen the question European companies should be asking right now: what does enforceable AI compliance actually look like, when the model is only one layer of the system?

This is the fourth piece in a series I have been writing for neuland.ai over the past months. Each one has tried to push the same underlying argument forward: that the value, the risk and the moat in enterprise AI are not located where most of the public discussion places them. They are not in the model. They are in the layer above and around it. [¹]

In February I wrote about the gap between prompt-first automation and system-engineered AI — the missing control plane that turns LLMs from chatbots into governed enterprise capabilities. [²] In April I wrote about what the silent default changes in Claude Code revealed about model drift and the necessity of multi-LLM observability. [³] Last week I wrote about why model topology — where the model runs, who controls it, how it is orchestrated — is now more consequential than which model wins the next benchmark. [⁴]

This piece extends that arc one more step. Compliance is a system property, not a model property. And the three Copilot stories from April are the cleanest available demonstration of why that distinction matters.

Story one: the cost layer is not stable

On 27 April, GitHub announced that Copilot is moving to usage-based billing on 1 June 2026. [⁵] Premium request units are being replaced by GitHub AI Credits, consumed against published API rates per token of input, output and cached context. The base plan prices are unchanged — but the fallback experiences that previously kept heavy users productive are being removed, and admins now need to actively manage credit budgets at enterprise, cost-centre and user levels.

The reason GitHub gives is honest, and worth quoting carefully: agentic Copilot usage has become the default, autonomous coding sessions can consume orders of magnitude more inference than chat queries, and "the current premium request model is no longer sustainable." [⁵] In other words: GitHub has been absorbing cost variance that it can no longer absorb.

This is not a story about a price increase. It is a story about budget predictability evaporating under a workload that did not exist eighteen months ago. CFOs and procurement teams who signed Copilot contracts on a per-seat basis are about to discover that "per seat" was a transitional artefact of a market that has now moved on.
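To see why per-seat breaks, it helps to run the arithmetic once. Here is a back-of-the-envelope sketch in Python; every rate and token count in it is an illustrative assumption, not GitHub's published pricing, and the point is the shape of the result rather than the numbers.

```python
# Back-of-the-envelope credit burn: chat user vs. agentic session.
# All rates and token counts are illustrative assumptions,
# NOT GitHub's published prices.

RATE_PER_1M_INPUT = 3.00    # assumed credits per 1M input tokens
RATE_PER_1M_OUTPUT = 15.00  # assumed credits per 1M output tokens
RATE_PER_1M_CACHED = 0.30   # assumed credits per 1M cached-context tokens

def session_cost(input_toks: int, output_toks: int, cached_toks: int = 0) -> float:
    """Credits consumed by one session under the assumed rates."""
    return (input_toks / 1e6 * RATE_PER_1M_INPUT
            + output_toks / 1e6 * RATE_PER_1M_OUTPUT
            + cached_toks / 1e6 * RATE_PER_1M_CACHED)

# A chat-style interaction: one prompt, one answer.
chat = session_cost(input_toks=2_000, output_toks=1_000)

# An agentic coding session: dozens of tool-calling turns, each one
# re-sending the accumulated context to the model.
agentic = session_cost(input_toks=4_000_000, output_toks=200_000,
                       cached_toks=10_000_000)

print(f"chat query:      {chat:.4f} credits")
print(f"agentic session: {agentic:.2f} credits "
      f"({agentic / chat:.0f}x the chat query)")
```

Even under these invented rates, one autonomous coding session consumes several hundred times the credits of one chat query. That is the variance a flat per-seat price was quietly absorbing.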

Story two: the data flow is not stable

On 17 April, Proton documented that Microsoft enabled flex routing on Microsoft 365 Copilot for new customer accounts created after 25 March 2026 — and is enabling it by default for existing customers unless they opt out. [⁶] Under flex routing, when European data-centre capacity runs short, LLM inference for EU customers may be processed in the US, Canada or Australia.

Microsoft's position is that data remains encrypted in transit and at rest, which is true. But inference is the moment the data is actually decrypted into model context to be processed, and where that processing happens is where the regulatory exposure sits: under GDPR, under NIS2, under DORA, under the EU AI Act. [⁷] The Proton article also includes step-by-step instructions for disabling the setting; I would encourage every Microsoft 365 administrator reading this to follow them as a matter of basic compliance hygiene.

What makes this story structurally important is not the existence of flex routing; capacity routing is a normal cloud-engineering reality. It is that the default was set to "on", and the burden of opting out was placed on the customer. The compliance department of every European Microsoft 365 tenant is now formally responsible for monitoring the configuration drift of a vendor's default settings on a product the vendor controls. That is a fundamentally different procurement posture from the one most organisations realise they are operating in.
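What does monitoring a vendor's configuration drift look like as a scheduled job? A minimal sketch follows, assuming a hypothetical fetch_tenant_settings() source; in practice that would be an export from the Microsoft 365 admin centre or whatever tenant inventory tooling you run, and the setting key is an illustrative name, not Microsoft's real identifier.

```python
# Minimal drift monitor for vendor-controlled defaults.
# Setting keys and the fetch function are illustrative placeholders.
import json
from datetime import datetime, timezone

# The posture your compliance team actually signed off on.
APPROVED_BASELINE = {
    "copilot.flexible_inferencing": "disabled",  # hypothetical key name
}

def fetch_tenant_settings() -> dict:
    """Hypothetical stand-in: in practice, export this from the
    Microsoft 365 admin centre or your tenant inventory tooling."""
    return {"copilot.flexible_inferencing": "enabled"}  # the vendor default

def check_drift(current: dict) -> list[str]:
    """Return one finding per setting that has left the approved baseline."""
    findings = []
    for key, approved in APPROVED_BASELINE.items():
        actual = current.get(key, "<unset: vendor default applies>")
        if actual != approved:
            findings.append(f"{key}: approved={approved!r}, actual={actual!r}")
    return findings

if __name__ == "__main__":
    findings = check_drift(fetch_tenant_settings())
    print(json.dumps({
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "drift": findings,
    }, indent=2))
    if findings:
        raise SystemExit(1)  # fail the scheduled job so a human looks at it
```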

Story three: the vendor's own terms are the strongest disclaimer of all

The third story is the one that ought to settle internal debates more decisively than it has. Microsoft's own Terms of Use for Copilot, updated in October 2025, state plainly: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk." [⁸]

It is true that disclaimers like this are partly boilerplate, drafted to limit liability. It is also true that they exist for a reason. When a vendor instructs users in writing not to rely on a tool for important advice, while simultaneously embedding it in Word, Excel, Teams, Outlook and Windows 11, what we have is not a contradiction. It is a statement of where liability actually sits. The vendor is telling you exactly what they will warrant — which is nothing — and what they will not warrant — which is everything else. The product is being sold as a productivity layer for serious business and disclaimed as entertainment in the same year, by the same company, for the same software.

What ties the three stories together

It would be tempting to read these as three separate Microsoft Copilot stories. They are not. They are three views of the same underlying architecture.

Microsoft is operating a product whose economics are not stable enough to keep the original pricing model intact, whose data flows are not stable enough to honour data-residency commitments under load unless customers actively opt out of the new default, and whose legal posture is not stable enough to warrant the use cases the marketing material recommends. Each of those properties is a property of the system Microsoft built, not a property of the underlying language model. None of them is fixed by switching from GPT-4.x to Claude or back. None of them is fixed by the EU Data Boundary alone. They are systemic.

And critically: this pattern is not specific to Microsoft. It is the natural consequence of buying agentic AI from a US-incorporated hyperscaler operating under the CLOUD Act, governed by US contract law, optimising globally for capacity allocation, and absorbing financial variance that turns out to be unabsorbable at scale. Schrems II remains the binding case law on this for European enterprises, and the European Data Protection Supervisor has already formally reprimanded the European Commission itself for non-compliant Microsoft 365 use. [⁹] If the EU Commission cannot operate Microsoft 365 in compliance with its own regulations, the path for everyone else is not magically easier.

The point that matters more than Copilot

Here is the part that I have not seen written down clearly enough in the public discussion, and that follows directly from the architectural arguments in my earlier pieces.

When people debate AI compliance, they almost always debate the model layer. Where is the model hosted? Whose servers does inference run on? Is the foundation model trained on European data? These are real questions, and they matter. But they are no longer the binding constraint. Modern AI workloads are agentic, which means the model does not just generate text — it calls tools. Tools query SAP, hit SharePoint, send emails, fetch web pages, write to databases, trigger workflows. Every tool call is an additional data flow. Every external API call is an additional jurisdiction.

A model hosted in Frankfurt that calls a tool which exfiltrates context to a US-hosted SaaS is not GDPR-compliant. It is a model in Frankfurt that calls a tool which exfiltrates context to a US-hosted SaaS. The legal status of the model layer does not retroactively launder the legal status of the tool layer. And Microsoft itself, in its own published documentation, makes this explicit: Bing search queries from Copilot are out of scope of the EU Data Boundary, and Anthropic models accessed through Copilot are excluded from EU Data Boundary commitments and from in-country processing commitments. [¹⁰] The vendor is not hiding this. The vendor is documenting it. The customer is just not always reading it.
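This is precisely the check an orchestration layer has to make before context leaves the perimeter, not after. Below is a minimal sketch of such a policy gate; the tool registry, jurisdiction tags and endpoints are illustrative assumptions, not any real product's API.

```python
# Sketch of an orchestration-layer policy gate: every tool call is
# checked against the jurisdictions the workload may touch before any
# context is sent. Registry entries below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    name: str
    jurisdiction: str   # where the tool's backend actually processes data
    endpoint: str

TOOL_REGISTRY = {
    "sap_query":  Tool("sap_query", "EU", "https://erp.example.internal"),
    "crm_enrich": Tool("crm_enrich", "US", "https://saas.example.com"),
}

class JurisdictionViolation(Exception):
    pass

def gate_tool_call(tool_name: str, allowed: frozenset[str]) -> Tool:
    """Refuse the call before context is serialised, not after it leaks."""
    tool = TOOL_REGISTRY[tool_name]
    if tool.jurisdiction not in allowed:
        raise JurisdictionViolation(
            f"tool {tool.name!r} processes data in {tool.jurisdiction}, "
            f"but this workload permits only {sorted(allowed)}")
    return tool

# A GDPR-scoped workload: the model may well run in Frankfurt, but it
# is this gate that keeps the tool layer inside the same perimeter.
ALLOWED = frozenset({"EU"})
gate_tool_call("sap_query", ALLOWED)   # passes
try:
    gate_tool_call("crm_enrich", ALLOWED)
except JurisdictionViolation as err:
    print(f"blocked: {err}")
```

The design point is where the check runs: at the moment of the tool call, on every call, rather than in a compliance document describing where the model happens to be hosted.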

This is what I mean when I say compliance is a system property. The whole flow — user identity, input data, retrieval, model inference, tool calls, external API hits, audit logging, output storage — has to be enforceable as a sovereign chain. And that is not something you bolt onto a hyperscaler-controlled AI product after the fact. It is something the orchestration layer does by construction or it does not happen at all. The three Copilot security incidents in twelve months — EchoLeak (CVE-2025-32711, June 2025), the Reprompt attack (January 2026) and the sensitivity-label bypass disclosed in February 2026 — are not three accidents. They are three independent demonstrations that even within Microsoft's own service boundary, the trust boundary between Copilot and the data it can access is not always enforceable in practice. [¹¹]

Where neuland.ai stands on this

The neuland.ai HUB is built as a sovereign AI management and orchestration platform. We are deliberately hyperscaler-independent: clients run the HUB on the infrastructure their compliance posture requires — on-premises, in EU-located private cloud, or in a hyperscaler region where the workload genuinely allows it. The model layer underneath is Multi-LLM by design: open-weight models we host (GLM-5.1, Mistral Large 3, DeepSeek, Qwen and others) for the workloads where capability is now competitive, proprietary endpoints (Claude, GPT) where the capability gap still justifies the dependency, and the option to swap any of them without rewriting the enterprise integration surface. [¹²]

Crucially, the orchestration layer is where compliance is enforced — not the model layer. RBAC, audit, retention, output policies, tool-call governance, capability abstraction, context discipline: these are properties of the HUB, applied uniformly across whatever model is being routed to and whatever tool is being called. That is what allows neuland.ai to make a guarantee that a hyperscaler-bundled AI assistant cannot make: that the entire data flow, model + tools + retrieval + audit, sits inside a perimeter the customer controls.
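To make the architectural claim concrete without reproducing the HUB's actual API, here is a minimal illustration of why governance enforced in the orchestration layer is model-independent: RBAC runs before inference, an audit record is written after it, and swapping the backend model changes one argument without moving any of the controls. All names here are hypothetical.

```python
# Illustrative only, not the HUB's API: one governance envelope
# wrapped around every model route, whatever the backend is.
import hashlib
import json
import time
from typing import Callable

AUDIT_LOG: list[dict] = []

def audited_route(backend: Callable[[str], str], model_id: str,
                  user_roles: frozenset[str], required_role: str,
                  prompt: str) -> str:
    """Apply the same policy envelope regardless of which model runs."""
    if required_role not in user_roles:   # RBAC, enforced before inference
        raise PermissionError(f"role {required_role!r} required")
    output = backend(prompt)              # any model behind the same gate
    AUDIT_LOG.append({                    # audit record, written every time
        "ts": time.time(),
        "model": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

# Swapping the model changes one argument; the governance does not move.
stub = lambda p: f"[stub completion for: {p}]"   # stands in for a real model
roles = frozenset({"analyst"})
audited_route(stub, "glm-5.1-selfhosted", roles, "analyst",
              "summarise the Q2 risk report")
audited_route(stub, "claude-endpoint", roles, "analyst",
              "summarise the Q2 risk report")
print(json.dumps(AUDIT_LOG, indent=2))
```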

This is also the answer to a question I sometimes get from clients: will neuland.ai always be at the absolute model frontier? Honestly, no. If a new frontier model lands on Tuesday, it will not necessarily be available in our HUB on Wednesday. We need time to evaluate it on the workloads that actually matter to our clients, to validate its behaviour against our governance constraints, and to integrate it into the routing layer. Two weeks of due diligence is not a delay we apologise for. It is the diligence that makes the product trustworthy. Speed without sovereignty is not speed; it is risk on a faster timeline.

Personal take

I want to be careful with the framing here. This piece is not anti-Microsoft. It is anti-blind-trust. Copilot is a real productivity tool, M365 is the substrate of European enterprise IT, and most of the clients we work with have a Microsoft estate that is not going anywhere. [¹³] The HUB sits alongside that estate, not against it.

What I am pushing back against is a specific pattern that has become widespread in the European market: vendors who copy the architecture of a US hyperscaler product, run a thin EU-hosted inference endpoint, and call the result "DSGVO-compliant." Compliance is not a sticker. It is a property of the entire data flow, end to end, including the tool layer that enterprises tend to forget exists until something leaks. The day a tool call inside one of those products exfiltrates regulated data to a US-hosted dependency, the compliance argument collapses — and it collapses retroactively, across every interaction the system has ever processed.

The August 2026 EU AI Act enforcement deadline is now just over three months away. [¹⁴] The question every European CIO and CISO should be asking is not "is our model EU-hosted?" but "is our entire AI system — model, tools, retrieval, audit — enforceable as a sovereign chain, and do we control the topology end-to-end?" If the honest answer is no, the architecture decisions of Q2 2026 are the most consequential ones their organisation will make this year.

We at neuland.ai would rather be safe than sorry. If that means a model lands in our HUB two weeks after it lands somewhere else, fine. We have done the due diligence. The customer can sleep at night.

¹ Series articles available at neuland.ai/insights.

² "Control Panels, Execution Surfaces und das Ende der Prompt-First-Automatisierung", neuland.ai, 19 February 2026.

³ "Wenn KI-Systeme plötzlich schlechter werden: Was Unternehmen aus der Claude-Code-Debatte wirklich lernen sollten", neuland.ai, 17 April 2026.

⁴ "Open weights took the top spot. Meta walked away. The real question is where these models actually run.", neuland.ai, April 2026.

⁵ Mario Rodriguez (Chief Product Officer, GitHub), "GitHub Copilot is moving to usage-based billing", GitHub Blog, 27 April 2026. Effective 1 June 2026; premium request units replaced by GitHub AI Credits; fallback experiences removed; admin budget controls introduced.

⁶ Alanna Alexander, "What Microsoft 365 Copilot flex routing means for EU businesses — an explainer", Proton Business Blog, 17 April 2026. Flex routing enabled by default for new accounts created after 25 March 2026; opt-out via Microsoft 365 admin centre → Copilot → Settings → "Flexible inferencing during peak load periods".

⁷ Regulation (EU) 2024/1689 (AI Act), Regulation (EU) 2016/679 (GDPR), Directive (EU) 2022/2555 (NIS2), Regulation (EU) 2022/2554 (DORA).

⁸ Microsoft Copilot Terms of Use, updated October 2025, as reported by Jowi Morales, "Microsoft says Copilot is for entertainment purposes only, not serious use", Tom's Hardware, 3 April 2026.

⁹ European Data Protection Supervisor decision, March 2024: European Commission ordered to suspend non-compliant Microsoft 365 data flows and demonstrate compliance by 9 December 2024. Schrems II (CJEU C-311/18, 16 July 2020) remains the binding case law on EU-to-US transfers; CLOUD Act (US, 2018) creates extraterritorial access obligations for US-incorporated providers regardless of physical data location.

¹⁰ Microsoft Learn: "Enterprise data protection in Microsoft 365 Copilot and Microsoft 365 Copilot Chat" — explicit footnote: "The EU Data Boundary doesn't apply to web search queries. In addition, Anthropic models are currently excluded from the EU Data Boundary and when applicable, in-country processing commitments." Microsoft Learn: "Microsoft 365 Copilot Chat Privacy and Protections" — "calls to the LLM are routed to the closest data centers in the region, but can also call into other regions where greater capacity is available when utilization is especially high."

¹¹ Aim Security, "EchoLeak" (CVE-2025-32711, CVSS 9.3), zero-click prompt injection vulnerability in Microsoft 365 Copilot, disclosed June 2025, patched. Varonis Threat Labs, "Reprompt" attack on Microsoft Copilot, January 2026 — silent exfiltration after chat session closes. Sensitivity-label bypass (CW1226324), February 2026 — Copilot summarised emails carrying confidential labels despite Microsoft Purview controls; reported by VentureBeat.

¹² Self-hosted open-weight model options as of April 2026: GLM-5.1 (Z.ai, MIT), DeepSeek-V3.2 (MIT), Qwen 3.6 (Apache 2.0), Mistral Large 3 (Apache 2.0). Proprietary endpoints integrated where capability gap justifies dependency.

¹³ Microsoft 365 reported deployed across approximately 400 million paid commercial seats globally as of early 2026.

¹⁴ See footnote 7. EU AI Act enforcement powers and high-risk obligations enter into application 2 August 2026.