The EU AI Act and Microsoft 365: What Compliance Teams Need to Know
The EU AI Act entered into force in August 2024, with phased enforcement beginning February 2025 for prohibited practices and obligations for high-risk systems applying from August 2026. For organisations using Microsoft 365 AI features, particularly Microsoft Copilot, the compliance implications are significant and frequently misunderstood.
Understanding the Risk Classification System
The AI Act establishes four risk tiers: unacceptable risk (prohibited), high risk, limited risk (transparency obligations), and minimal risk. The classification depends not on the technology itself but on its intended purpose and the context of deployment.
Unacceptable risk (prohibited): AI systems that manipulate human behaviour to circumvent free will, social scoring by public authorities, and real-time biometric identification in public spaces (with narrow exceptions). No standard M365 feature falls into this category, but custom Copilot agents built using Copilot Studio could theoretically cross this line if designed to manipulate employee behaviour through subliminal techniques.
High risk: AI systems used in employment contexts - specifically recruitment, performance evaluation, task allocation, and termination decisions. This is where M365 becomes directly relevant. If your organisation uses Viva Insights data to inform performance reviews, or uses Copilot-generated summaries to evaluate employee output, you are operating a high-risk AI system under Article 6(2) and Annex III. Microsoft Viva's productivity scores, if used for individual performance assessment, would similarly qualify.
Limited risk (transparency obligations): AI systems that interact with natural persons must disclose that the person is interacting with AI. Microsoft Copilot in Teams, Outlook, and Word falls squarely here. Users must be informed when content has been generated or substantially modified by Copilot.
Minimal risk: Most standard M365 automation - mail flow rules, spam filtering, auto-complete in Outlook - falls into minimal risk and requires no specific compliance action under the AI Act.
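Because the tier turns on intended purpose rather than the technology itself, the classification logic above can be captured in a short decision function. The sketch below is illustrative only, assuming a simplified three-question model; the names and parameters are hypothetical, and a real classification would require legal review of each deployment.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


def classify_deployment(feature: str,
                        used_for_employment_decisions: bool,
                        interacts_with_persons: bool,
                        uses_prohibited_techniques: bool = False) -> RiskTier:
    """Classify an M365 AI deployment by context, not technology.

    Mirrors the AI Act's purpose-based logic: the same feature can
    land in different tiers depending on how it is used.
    """
    if uses_prohibited_techniques:          # e.g. subliminal manipulation
        return RiskTier.UNACCEPTABLE
    if used_for_employment_decisions:       # Annex III employment context
        return RiskTier.HIGH
    if interacts_with_persons:              # Article 50 transparency
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# The same feature classifies differently depending on context:
print(classify_deployment("Viva Insights", used_for_employment_decisions=True,
                          interacts_with_persons=False).value)   # high
print(classify_deployment("Viva Insights", used_for_employment_decisions=False,
                          interacts_with_persons=False).value)   # minimal
```

Note that the Viva Insights example classifies as high risk only when its outputs feed employment decisions, exactly the distinction the Act draws.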
Copilot Transparency Requirements
Article 50 of the AI Act imposes transparency obligations on deployers of AI systems that interact with natural persons. For Microsoft Copilot, this means:
- Disclosure of AI interaction: When Copilot generates or substantially modifies content that is shared externally (emails, documents, presentations), the recipient should be made aware. Microsoft provides Copilot attribution badges in some contexts, but these are not universally applied. Organisations should implement a policy requiring staff to disclose AI-assisted content in client-facing communications.
- AI-generated content labelling: Under Article 50(2), synthetic content must be marked as artificially generated. Copilot-generated images in PowerPoint or Designer should carry metadata indicating AI generation. Microsoft is progressively adding Content Credentials (C2PA) metadata, but organisations must verify this is active in their tenant.
- Deep fake protections: If Copilot or any M365 tool is used to generate synthetic audio or video (e.g., via Clipchamp AI narration), Article 50(4) requires clear labelling. This applies even for internal training content.
Data Governance Obligations for High-Risk Deployments
If your use of M365 AI features qualifies as high-risk (employment decisions, as discussed above), Articles 9-15 impose substantial requirements:
Data governance (Article 10): Training, validation, and testing datasets must be relevant, representative, and free from bias. For organisations using Copilot with Semantic Index, this means the underlying SharePoint and Exchange data that Copilot draws upon must be governed. Stale, inaccurate, or biased data in your tenant directly affects Copilot outputs. Deploy Purview Data Lifecycle Management to ensure data currency and accuracy.
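Data currency is the most tractable part of this obligation to automate. As one illustrative sketch (not a Purview feature, and assuming you can export a list of items with last-modified dates from SharePoint or Exchange), a simple staleness check can surface candidates for review or exclusion from Copilot's grounding data:

```python
from datetime import datetime, timedelta, timezone


def flag_stale_items(items, max_age_days=730, now=None):
    """Return items whose last-modified date exceeds the currency threshold.

    `items` is a list of (path, last_modified) tuples, e.g. exported
    from a SharePoint inventory. Stale items are candidates for review,
    archival, or exclusion from Copilot's Semantic Index scope.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [(path, modified) for path, modified in items if modified < cutoff]
```

The two-year default is an assumption for illustration; the appropriate threshold depends on the content type and your retention schedule.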
Technical documentation (Article 11): You must maintain documentation describing the AI system's intended purpose, design, development, and performance metrics. For Copilot, this means documenting:
- Which Copilot features are enabled and for which user groups
- The scope of data accessible to Copilot (Semantic Index boundaries)
- Any custom Copilot agents built in Copilot Studio, including their data sources and intended use
- Performance monitoring metrics and known limitations
Record-keeping (Article 12): High-risk AI systems must generate logs. Microsoft provides Copilot interaction logs via the Microsoft 365 audit log (unified audit log in Purview). Ensure these are retained for the period your regulatory framework specifies: the AI Act requires logs to be kept for a period appropriate to the intended purpose, which for employment decisions should align with employment tribunal limitation periods (typically six months to six years depending on jurisdiction).
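Translating a limitation period into a Purview retention setting is a small but error-prone step. The sketch below assumes the audit retention durations available at the time of writing (3, 6, 9, and 12 months, plus 10 years with the relevant add-on licence); verify the options available in your tenant before relying on this mapping:

```python
# Purview audit retention policy durations, in months. The 120-month
# option requires the 10-year audit log retention add-on licence.
# These values are an assumption for illustration; confirm in-tenant.
PURVIEW_DURATIONS = [3, 6, 9, 12, 120]


def purview_retention_for(limitation_months: int) -> int:
    """Smallest available Purview audit-retention duration that fully
    covers the employment-claim limitation period."""
    for duration in sorted(PURVIEW_DURATIONS):
        if duration >= limitation_months:
            return duration
    raise ValueError("limitation period exceeds available retention options")
```

For example, a six-year civil limitation period (72 months) forces the 10-year option, because nothing between 12 and 120 months is available; that licensing consequence is worth flagging to procurement early.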
Human oversight (Article 14): High-risk AI systems must be designed to allow effective human oversight. For Copilot in HR contexts, this means:
- Copilot outputs must never be the sole basis for employment decisions
- A qualified human must review and validate any AI-generated assessment
- Staff must be trained to understand Copilot's limitations and potential for hallucination
- An escalation path must exist for employees to challenge AI-influenced decisions
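The oversight requirements above can be enforced structurally rather than by policy alone. A minimal sketch, assuming a hypothetical internal HR tool (the `Assessment` class and its fields are illustrative, not part of any Microsoft product): an AI-drafted assessment simply cannot be finalised until a named human records substantive validation notes.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Assessment:
    """An AI-assisted employee assessment that cannot be finalised
    without documented human review (a hypothetical Article 14 control)."""
    employee_id: str
    copilot_draft: str
    reviewer: Optional[str] = None
    reviewer_notes: str = ""
    finalised: bool = False

    def sign_off(self, reviewer: str, notes: str) -> None:
        """Record human validation; required before any employment
        decision may rely on the AI-generated draft."""
        if not notes.strip():
            raise ValueError("reviewer must record substantive validation notes")
        self.reviewer = reviewer
        self.reviewer_notes = notes
        self.finalised = True
```

Requiring non-empty reviewer notes is a deliberate design choice: it produces the documentary evidence of "effective human oversight" that a regulator or tribunal would expect to see, rather than a silent checkbox.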
Mapping M365 Features to AI Act Obligations
| M365 Feature | AI Act Risk Level | Key Obligation |
|---|---|---|
| Copilot in Word/Outlook/PPT | Limited risk | Transparency disclosure |
| Copilot in Teams (meeting summaries) | Limited risk | Transparency, data accuracy |
| Viva Insights (individual) | High risk if used for performance | Full Article 9-15 compliance |
| Copilot Studio agents | Depends on purpose | Risk assessment required |
| Defender AI-based threat detection | Minimal risk | None specific |
| Purview auto-classification | Minimal risk | None specific |
Timeline and Practical Steps
The phased enforcement means organisations must act now:
- February 2025: Prohibited practices enforcement began. Audit any custom AI agents for prohibited use cases.
- August 2025: Governance and general provisions apply. Establish an AI governance framework, appoint responsible persons, begin documentation.
- August 2026: Full enforcement for high-risk systems. All technical documentation, monitoring, and human oversight mechanisms must be operational.
Immediate actions for M365 tenants:
- Conduct an AI inventory across your M365 tenant - identify every feature using AI, including Copilot, Viva, Defender, and Purview
- Classify each against the AI Act risk tiers
- For high-risk deployments, begin Article 9-15 documentation immediately
- Configure Purview audit log retention for Copilot interactions to meet record-keeping requirements
- Implement transparency policies for AI-generated content in external communications
- Brief your Data Protection Officer: the AI Act intersects heavily with GDPR, and your DPO must be involved in AI governance
The AI Act is not a distant regulatory concern. For any organisation operating M365 Copilot within the EU or processing EU citizens' data, compliance planning must begin immediately.