AI agents have moved beyond experimentation. According to Microsoft, organizations are already running an average of 14 custom AI applications in production. Azure AI Foundry alone processed 100 trillion tokens last quarter and manages 2 billion enterprise search queries daily. At this scale, security vulnerabilities can grow rapidly: Gartner forecasts that by the end of 2025, over 70% of malicious attacks on enterprise AI will originate from compromised supply chains. * Detecting issues early significantly reduces costs—from approximately USD 4.9 million to just USD 80.
(*: Source)
Copilot Agents: Trusted tools for technical teams
The adoption of low-code tools like Copilot Studio and pro-code platforms such as Azure AI Foundry has led to “agent sprawl” in many organizations—numerous non-human entities lacking centralized governance. Microsoft addresses this challenge with Entra Agent ID, which assigns a managed identity to every agent at the time of publication. This allows security teams to track agents, assess their permissions, and enforce policies like conditional access and least privilege through the same console used for managing human identities. Integrations with ServiceNow and Workday will further extend this governance into HR and IT workflows.
Securing the agent runtime environment
Two Preview features now embed robust safeguards directly into projects in AI Foundry and Copilot Studio:
- Spotlighting identifies hidden prompt injections in emails, SharePoint files, or other content sources before agents respond to them.
- Task Adherence Evaluation + Mitigation analyzes every intended AI tool invocation in real time; if an agent strays from expected behavior, Foundry can automatically halt execution, pause the session, or escalate for manual review.
Azure Monitor’s continuous evaluation dashboards track groundedness, safety, and cost efficiency post-deployment, offering a unified view akin to traditional microservices observability.
DevSecOps-driven integration
Security and development processes come together within the Foundry portal via Microsoft Defender for Cloud. Engineers can now view security recommendations (e.g., “apply Private Link for sensitive data channels”) and real-time threat alerts (e.g., jailbreak attempts, data leaks) directly in their development environment—reducing delays and speeding up incident response. Full availability is scheduled for June 2025, in line with Microsoft’s roadmap.
Persistent data protection for AI agents
Purview DSPM for AI extends enterprise-grade data loss prevention and audit capabilities to all agents built in Foundry or Copilot Studio, including those using third-party models. Automated sensitivity labeling across Dataverse ensures consistent policy enforcement from end to end, while new oversight tools give compliance teams visibility into unauthenticated customer conversations with public-facing assistants.
AI Foundry risk evaluation via Microsoft Purview Compliance Manager
AI regulations are evolving to emphasize transparency, thorough documentation, and robust risk management—particularly for high-risk AI systems. Developers creating AI agents may need tools and guidance to evaluate compliance risks and clearly share assessments with their governance teams.
For example, developers in Europe might be required to complete a Data Protection Impact Assessment (DPIA) and an Algorithmic Impact Assessment (AIA) to meet internal risk management requirements and align with emerging AI governance standards.
Purview Compliance Manager provides structured, step-by-step guidance for implementing and validating security controls. This enables compliance teams to identify risks—such as bias, cyber threats, or unclear model behavior—early in the development lifecycle.

Picture 1. EU AI Assessment report for Azure AI Foundry in Compliance Manager (Image credit: Microsoft)
After running an evaluation in Azure AI Foundry, developers can generate reports outlining identified risks, mitigation strategies, and any residual risks. These reports can be uploaded to Compliance Manager to support audit processes and demonstrate due diligence to regulators or external reviewers.
Security Copilot: The SecOps companion
All these integrated signals feed into Microsoft Security Copilot, which is quickly becoming the command hub for AI-powered security operations:
- Copilot Studio Connector (GA in May 2025) enables security analysts to initiate investigations directly from Studio workflows using natural language and receive the findings inline.
- An expanding ecosystem of plugins—including Censys threat intelligence, HP Workforce telemetry, Splunk queries, and Azure Firewall diagnostics—allows Security Copilot to analyze diverse data sources and generate actionable summaries using plain language.
By consolidating signals from Entra, Defender, and Purview, Security Copilot empowers security teams with the same efficiency gains Copilot brings to business users.
Summary
Enterprises are eager to harness autonomous agents—without inheriting excessive risk. Microsoft’s end-to-end strategy—managed identities for bots, built-in runtime protections, unified monitoring, and generative security tools—helps embed security throughout the development and deployment lifecycle. Development stays fast, security remains informed, and compliance becomes demonstrable.
Next steps for practitioners
- Inventory agents: Activate Entra Agent ID and review all existing non-human identities.
- Pilot protections: Enable Spotlighting and Task Adherence in a test environment; monitor for false positives before wider rollout.
- Integrate Defender and Purview: Connect these tools to Foundry projects to identify misconfigurations early.
- Try Security Copilot Studio connector: Create prompt templates to trigger investigations when Foundry logs potential prompt injection attempts.
Share this post:
Leave a Comment