Context: I gave a talk on this topic at CloudBrew 2025 in Belgium, and I'm happy to share the key points from that presentation here.
The arrival of the EU AI Act has sparked a wide range of reactions across the tech industry – from cautious excitement to outright skepticism. But for those of us operating in the Azure ecosystem, the conversation is shifting from “What is it?” to “How do we actually secure it?”
If you’re managing AI services today, you likely fall into the category of a Deployer – someone using AI systems within their operations. While Providers (such as Microsoft and OpenAI) have their own heavy lifting to do by August 2025, the responsibility for safe and compliant deployment rests squarely on our shoulders.
Here is how you can move past the “boring stuff” and build a robust, compliant AI security posture in Azure.
1. The Risk-based mindset
The Act uses a risk-based framework that categorizes AI systems into four distinct levels (a minimal way to encode this for your own workload inventory is sketched after the list):
- Unacceptable Risk: These systems violate fundamental EU values and are strictly prohibited.
- High Risk: This category includes systems impacting health, safety, or fundamental rights, requiring rigorous conformity assessments and continuous monitoring.
- Limited Risk: This covers systems like chatbots and GenAI tools, which carry specific transparency and information obligations.
- Minimal Risk: This includes common applications such as spam filters, where no specific new regulations apply.
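To make the classification actionable inside your own environment, it helps to record a tier for every AI workload you run. The sketch below is purely illustrative: the tier names follow the Act, but the obligation summaries are simplified notes for an internal inventory, not legal text.

```python
from enum import Enum


class AIRiskTier(Enum):
    """EU AI Act risk tiers, used here only to tag systems in an internal inventory."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # conformity assessment + continuous monitoring
    LIMITED = "limited"            # transparency / information obligations
    MINIMAL = "minimal"            # no specific new obligations


# Simplified, illustrative obligation summaries per tier (not legal advice).
OBLIGATIONS = {
    AIRiskTier.UNACCEPTABLE: "Do not deploy; the practice is prohibited.",
    AIRiskTier.HIGH: "Conformity assessment, risk management, logging, human oversight.",
    AIRiskTier.LIMITED: "Inform users they are interacting with AI; label generated content.",
    AIRiskTier.MINIMAL: "No specific new obligations; follow normal security hygiene.",
}

# Example: tag an internal chatbot as Limited Risk and look up its obligations.
workload = {"name": "internal-support-chatbot", "risk_tier": AIRiskTier.LIMITED}
print(OBLIGATIONS[workload["risk_tier"]])
```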
2. Tightening the “Plumbing” (Network & Identity)
Security starts with the fundamentals. For any AI service, you should move away from public access:
- Disable Public Network Access: Use Private Endpoints (PEs) to ensure your AI traffic stays off the open web.
- Managed Identities: Stop using keys where possible. Leverage Managed Identities for secure, credential-less authentication (see the sketch below).
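As a concrete example, here is a minimal sketch of keyless (Entra ID) authentication against Azure OpenAI using `DefaultAzureCredential` from the `azure-identity` package; the endpoint, deployment name, and API version are placeholders you would swap for your own.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Obtain Entra ID tokens instead of API keys (uses the managed identity when run in Azure).
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",  # placeholder; use the version you have validated
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Run from an Azure resource with a system-assigned managed identity, this removes stored keys entirely and pairs naturally with a Private Endpoint that keeps the traffic on your VNet.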
3. Making governance “continuous”
Article 9 of the Act emphasizes that risk management isn’t a “one-and-done” task – it’s an iterative lifecycle.
- Safety Evaluations: Regularly run Azure AI Safety Evaluations to identify potential jailbreak attempts or vulnerabilities (a sketch follows this list).
- Red Teaming: Document your Red Team exercises (you can even use the AI Red Teaming Agent in Preview) before and after major changes to your system.
- Data Lineage: Use Microsoft Purview to register data sources, manage lineage, and apply sensitivity labels to prevent sensitive training data exposure.
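To illustrate what a recurring safety evaluation could look like in code, the sketch below uses the `azure-ai-evaluation` package; the project details are placeholders, and the evaluator set you actually run should follow your own risk assessment.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator, SelfHarmEvaluator

# Placeholder project details; replace with your Azure AI Foundry project.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<ai-foundry-project>",
}

credential = DefaultAzureCredential()
evaluators = [
    ViolenceEvaluator(credential=credential, azure_ai_project=azure_ai_project),
    SelfHarmEvaluator(credential=credential, azure_ai_project=azure_ai_project),
]

# Evaluate a single query/response pair; in practice you would loop over a test dataset
# and store the results as evidence for your risk management records.
for evaluator in evaluators:
    result = evaluator(
        query="How do I reset my password?",
        response="You can reset it from the account settings page.",
    )
    print(result)
```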
4. Technical Documentation & Record Keeping
Articles 11 and 12 of the Act emphasize the importance of up-to-date technical documentation, maintained in a way that it can serve as evidence of compliance.
- Model cards & Transparency notes: Use Azure AI Foundry to keep model cards and transparency notes alongside your system documentation.
- Prompt flow versioning: Version your Prompt Flow definitions so that changes to orchestration logic remain traceable.
- Logging & Monitoring: Enable Diagnostic Settings plus tracing to App Insights and Log Analytics (see the sketch after this list).
- Compliance Management: Use Defender for Cloud to export compliance reports against the EU AI Act standard.
- Data Ownership: Make sure that dataset owners and classifications are kept up to date in Purview.
- Audit log retention: Retain audit logs in Purview as well.
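For the logging and monitoring item, a minimal sketch of enabling Diagnostic Settings programmatically with the `azure-mgmt-monitor` package could look like this; the resource IDs are placeholders, and the exact log categories or category groups available depend on the target service and SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)

subscription_id = "<subscription-id>"  # placeholder

# Placeholder resource ID of the Azure OpenAI / Cognitive Services account.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.CognitiveServices/accounts/<account>"
)
# Placeholder resource ID of the Log Analytics workspace.
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Route resource logs and metrics to Log Analytics as record-keeping evidence.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="eu-ai-act-logging",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[LogSettings(category_group="allLogs", enabled=True)],
        metrics=[MetricSettings(category="AllMetrics", enabled=True)],
    ),
)
```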
5. Transparency is your best defense
Articles 13 and 14 focus on making AI systems interpretable and ensuring human oversight.
- Groundedness Detection: Use AI Content Safety to detect “hallucinations” and back up user-facing disclaimers about source limitations.
- The “Human in the Loop”: Ensure content safety flags are routed to human reviewers (a routing sketch follows below) and that there is clear, documented ownership of AI development practices.
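Here is a minimal sketch of that routing using the `azure-ai-contentsafety` package: it scores a model response and pushes flagged items into a hypothetical review queue. The `review_queue_add` hook and the severity threshold of 4 are illustrative assumptions, not recommendations.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholder Content Safety endpoint; keyless auth via Entra ID.
client = ContentSafetyClient(
    endpoint="https://<your-content-safety>.cognitiveservices.azure.com",
    credential=DefaultAzureCredential(),
)


def review_queue_add(text: str, category: str, severity: int) -> None:
    """Hypothetical hook; replace with your ticketing or queue integration."""
    print(f"Flagged for human review: {category} (severity {severity})")


def check_response(model_response: str) -> None:
    # Analyze the model output against the standard harm categories.
    result = client.analyze_text(AnalyzeTextOptions(text=model_response))
    for item in result.categories_analysis:
        # Example threshold only; tune it to your own risk assessment.
        if item.severity is not None and item.severity >= 4:
            review_queue_add(model_response, item.category, item.severity)


check_response("Example model output to screen before it reaches the user.")
```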
6. Accuracy & Robustness
Article 15 focuses on the accuracy of AI systems and on how consistently they perform throughout their operating lifecycle.
- Security Posture Management: Analyze your security posture with Defender for Cloud CSPM.
- Security Testing: Implement regular safety evaluations and adversarial tests.
- Virtual Machine hardening: Implement optional hardening with Confidential VMs (AMD SEV-SNP / Intel TDX) for sensitive scenarios.
- Stay Updated: Create and update Azure Policy assignments and communicate compliance trends to stakeholders (see the sketch after this list).
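For the policy assignment item, a sketch using the `azure-mgmt-resource` package might look like the following; the policy definition ID is a placeholder for whichever built-in or custom definition (for example, one denying public network access on Cognitive Services accounts) you decide to enforce.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"          # placeholder
scope = f"/subscriptions/{subscription_id}"    # assign at subscription scope

client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Placeholder: ID of the built-in or custom policy definition you want to enforce.
policy_definition_id = (
    "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"
)

assignment = client.policy_assignments.create(
    scope=scope,
    policy_assignment_name="ai-no-public-network",
    parameters=PolicyAssignment(
        display_name="AI services must not allow public network access",
        policy_definition_id=policy_definition_id,
    ),
)
print(assignment.id)
```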
7. AI Compliance at the speed of cloud
Compliance doesn’t have to be a manual spreadsheet nightmare. Microsoft Defender for Cloud now includes a specific EU AI Act regulatory standard. This allows you to track your posture in real time and export compliance reports directly for auditors or leadership.
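If you prefer to pull that posture programmatically rather than from the portal, a sketch with the `azure-mgmt-security` package could look like this; the exact name under which the EU AI Act standard surfaces is something to verify in your tenant, so the code simply lists the onboarded standards and their overall state.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter

subscription_id = "<subscription-id>"  # placeholder
client = SecurityCenter(DefaultAzureCredential(), subscription_id)

# List the regulatory compliance standards onboarded in Defender for Cloud;
# look for the EU AI Act entry by name and check its compliance state.
for standard in client.regulatory_compliance_standards.list():
    print(standard.name, standard.state)
```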
Final Thoughts
The EU AI Act is more than just a hurdle; it’s a framework for building trust. By integrating tools like Azure AI Foundry, Purview, and Defender, we can ensure our AI initiatives are not only innovative but also resilient and respectful of fundamental rights.
The deadline for many of these obligations is approaching quickly – the transition period ends on 2 August 2027, and it will be here before we know it. Now is the time to start your “90-second tour” of Azure controls and turn regulation into a competitive security advantage.
Security is no longer a secondary consideration; it is the primary enabler of AI innovation. By aligning your technical architecture with these legal requirements now, you ensure that your organization remains both competitive and compliant in the years to come.