
EU Artificial Intelligence Act

The EU AI Act imposes fines up to 7% of global turnover for non-compliant AI deployment—making ungoverned Copilot rollout a material commercial risk.

Mapped to Microsoft controls
Effective Date: 1 August 2024 (phased enforcement through 2027)
Enforcement Body: European AI Office and national market surveillance authorities
Penalty Framework: Tiered penalty structure based on severity: up to EUR 35 million or 7% of global annual turnover for prohibited AI practices; up to EUR 15 million or 3% of turnover for violations of AI system requirements; up to EUR 7.5 million or 1.5% of turnover for supplying incorrect information to authorities. SMEs and startups receive proportionally lower fines. The European AI Office has direct enforcement power over general-purpose AI models, while national authorities enforce requirements for deployers and providers of AI systems.
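The tiered caps follow a "greater of fixed amount or turnover percentage" rule for undertakings, with SMEs capped at the lower of the two amounts. A minimal sketch of that arithmetic, assuming those two rules; the figures are ceilings, not the fines actually levied:

```python
# Sketch of the AI Act's tiered penalty ceilings. Assumes the
# "whichever is higher" rule for undertakings and "whichever is
# lower" for SMEs; amounts in EUR.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # prohibited AI practices
    "system_requirements": (15_000_000, 0.03),    # AI system requirement violations
    "incorrect_information": (7_500_000, 0.015),  # misleading authorities
}

def max_fine(tier: str, annual_turnover: float, is_sme: bool = False) -> float:
    """Return the maximum fine ceiling for a violation tier."""
    fixed, pct = TIERS[tier]
    turnover_based = annual_turnover * pct
    # Undertakings: the higher amount applies; SMEs: the lower.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)
```

For an undertaking with EUR 1 billion turnover, a prohibited-practice violation is capped at EUR 70 million (7% exceeds the EUR 35 million floor).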

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) imposes a comprehensive, risk-based legal framework on enterprise AI deployment. Systems are classified into prohibited, high-risk, limited-risk, and minimal-risk tiers, with enforcement phased in between 2025 and 2027.

For enterprises leveraging Microsoft 365 Copilot, the Act mandates rigorous data governance, algorithmic transparency, and documented human oversight. Without proper containment, Copilot's deep integration with enterprise data can inadvertently pull a deployment into a higher risk classification.

To safely enable generative AI, we establish strict semantic boundaries within your tenant. We implement Purview Information Barriers, enforce Semantic Index exclusions, and apply restrictive Sensitivity Labels, so that Copilot cannot access or expose restricted corporate IP and human oversight remains fully auditable.

Why This Matters Now

The EU AI Act is the world's most aggressive and comprehensive AI legislation, fundamentally altering how enterprises deploy tools like Microsoft 365 Copilot. Because Copilot indexes the Microsoft Graph, its deployment in HR, legal, or essential service contexts triggers strict governance requirements. Deploying Copilot without engineering rigid semantic boundaries, data access controls, and transparent human-oversight mechanisms exposes firms to fines that surpass even the GDPR (up to 7% of global turnover).

Scope & Applicability

The EU AI Act applies to: (1) providers of AI systems placed on the EU market or put into service in the EU; (2) deployers of AI systems located within the EU; and (3) providers and deployers outside the EU where the output of the AI system is used in the EU.

For M365 environments, Microsoft is the provider of Copilot, but the deploying organisation bears the deployer responsibilities: conducting fundamental rights impact assessments for high-risk uses, implementing human oversight measures, ensuring transparency to affected individuals, maintaining logs and documentation, and governing the data access permissions that Copilot inherits. Any organisation using Copilot for HR decisions, performance management, or customer-facing services must assess whether those uses fall into the high-risk classification.

Core Obligations

01
Articles 5–7, Annex III

AI System Classification

Classify AI systems by risk level. Prohibited practices include social scoring and real-time remote biometric identification in publicly accessible spaces. High-risk systems in the Annex III areas require conformity assessments.
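The four tiers can be maintained as a simple use-case register; a minimal sketch, where the use-case names are hypothetical examples rather than an official taxonomy:

```python
# Illustrative risk-tier register for AI use cases, using the tiers
# named in Articles 5-7 / Annex III. Use-case keys are hypothetical.
RISK_REGISTER = {
    "social_scoring": "prohibited",        # Article 5 practice
    "cv_screening_for_hiring": "high",     # Annex III: employment
    "credit_scoring": "high",              # Annex III: essential services
    "customer_chatbot": "limited",         # transparency duties only
    "spam_filtering": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unknown cases need assessment."""
    return RISK_REGISTER.get(use_case, "unclassified: assess before deployment")
```

The point of the register is the default: anything not yet assessed is blocked from deployment rather than assumed minimal-risk.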

02
Articles 50, 53

Transparency Requirements

Ensure AI-generated content is identifiable. Inform users when they interact with AI systems. Disclose training data summaries for general-purpose AI models.

03
Article 14

Human Oversight

High-risk AI systems must be designed to allow effective human oversight. Users must be able to understand, monitor, and intervene in AI system outputs.

04
Article 10

Data Governance

Training, validation, and testing datasets must meet quality criteria. Implement data governance practices addressing relevance, representativeness, accuracy, and completeness.

05
Articles 11–12

Documentation and Record-Keeping

Maintain technical documentation demonstrating compliance. Implement automatic logging capabilities for high-risk AI systems to ensure traceability.
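Traceability is strongest when logs are tamper-evident. A minimal hash-chained sketch; a real deployment would rely on the platform's native audit log rather than application code:

```python
# Tamper-evident logging sketch: each entry hashes the previous one,
# so any retroactive edit breaks the chain on verification.
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining its hash to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; return False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```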

06
Article 4

AI Literacy

Providers and deployers must ensure staff have sufficient AI literacy, taking into account their technical knowledge, experience, education, and context of AI use.

Microsoft 365 Control Mapping

How each obligation maps to enforceable Microsoft 365 controls and the evidence they produce.

Obligation: Article 10 - Data Governance
M365 Control: Purview Sensitivity Labels preventing labelled data from being processed by AI features. Semantic Index exclusions removing sensitive SharePoint sites from Copilot's data scope. Information Barriers isolating data domains.
Evidence: Sensitivity Label exclusion configuration, Semantic Index scope report, Information Barrier policy status.

Obligation: Article 14 - Human Oversight
M365 Control: Copilot response attribution showing source documents. Restricted SharePoint site access controlling Copilot's knowledge base. Admin controls for Copilot feature enablement per user group.
Evidence: Copilot feature assignment report, SharePoint permissions audit, Copilot interaction logs.

Obligation: Article 50 - Transparency
M365 Control: Purview Communication Compliance monitoring AI-assisted communications. Copilot usage analytics tracked via Microsoft 365 usage reports. AI system inventory maintained in compliance documentation.
Evidence: Copilot adoption reports, communication compliance review logs, AI system register.

Obligation: Article 5 - Prohibited Practices
M365 Control: Defender for Cloud Apps policies blocking access to AI tools that perform prohibited practices (social scoring, emotion recognition in the workplace). App governance policies preventing OAuth consent to unvetted AI services.
Evidence: Blocked AI app inventory, OAuth consent policy logs, Defender for Cloud Apps discovery report.

Implementation Timeline

April 2021
European Commission publishes AI Act legislative proposal
March 2024
EU AI Act approved by European Parliament
August 2024
AI Act enters into force
February 2025
Prohibited AI practices provisions become applicable
August 2025
General-purpose AI model obligations become applicable
August 2026
Full applicability of most AI Act provisions, including high-risk AI system requirements
August 2027
End of the extended transition for high-risk AI embedded in products regulated under Annex I
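The phased dates above can be expressed as a small applicability check, using the 6-, 12-, and 24-month offsets from the August 2024 entry into force; the milestone list below is illustrative, not exhaustive:

```python
# Date-based sketch of the AI Act's phased applicability milestones.
from datetime import date

MILESTONES = [
    (date(2025, 2, 2), "prohibited-practice provisions"),
    (date(2025, 8, 2), "general-purpose AI model obligations"),
    (date(2026, 8, 2), "full applicability incl. high-risk requirements"),
]

def applicable_on(d: date) -> list[str]:
    """Return the provisions already applicable on a given date."""
    return [name for start, name in MILESTONES if d >= start]
```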


Ready to get EU AI Act-ready?

Start with a fixed-scope sprint. We assess your Microsoft 365 controls against EU AI Act requirements, close gaps, and produce audit-ready evidence.