Implement security for AI

AI workloads introduce new attack surfaces across identity, data, and runtime layers that traditional security controls don't fully address. In this learning path, you implement layered AI security controls across the Microsoft security platform.

You start by discovering and assessing AI data risks using Microsoft Purview Data Security Posture Management (DSPM). Next you secure agent identities using Microsoft Entra Agent ID and Conditional Access, and analyze AI identity blast radius and attack paths in Microsoft Defender XDR. From there, you configure real-time runtime protection for Copilot Studio agents using Microsoft Defender for Cloud Apps, and secure AI model traffic using AI Gateway in Microsoft Foundry. Finally, you configure guardrails in Microsoft Foundry, protect AI workloads using Microsoft Defender for Cloud, and govern deployed agents using Microsoft Agent 365.

Prerequisites

  • Working knowledge of Microsoft Entra ID and Azure identity concepts
  • Familiarity with Microsoft Defender portal and Microsoft Purview portal navigation
  • Experience managing cloud security configurations in Azure
  • Awareness of AI agent concepts and Microsoft Copilot technologies


Modules in this learning path

Apply Conditional Access controls to AI agent identities using Microsoft Entra Agent ID. Map how agents authenticate, configure policies that enforce access conditions, and manage the agent identity lifecycle to reduce risk from compromised or over-privileged agents.
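Conditional Access policies can also be managed programmatically through Microsoft Graph. The sketch below builds a policy payload that targets an agent's workload identity (its service principal) and blocks sign-ins outside a trusted named location. The service principal ID and location ID are placeholders, and deploying such a policy requires appropriate Graph permissions and workload-identity licensing; treat this as an illustration of the policy shape, not a ready-to-run deployment.

```python
# Sketch: a Conditional Access policy payload for an agent's workload identity,
# shaped for the Microsoft Graph conditionalAccess/policies endpoint.
# The two GUIDs below are placeholders, not real identifiers.
import json

AGENT_SERVICE_PRINCIPAL_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
TRUSTED_LOCATION_ID = "11111111-1111-1111-1111-111111111111"          # placeholder


def build_agent_ca_policy(sp_id: str, trusted_location_id: str) -> dict:
    """Block the agent identity everywhere except one trusted named location."""
    return {
        "displayName": "Block agent outside trusted locations",
        # Start in report-only mode so you can observe impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            # Workload-identity policies target service principals, not users.
            "clientApplications": {"includeServicePrincipals": [sp_id]},
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": [trusted_location_id],
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }


policy = build_agent_ca_policy(AGENT_SERVICE_PRINCIPAL_ID, TRUSTED_LOCATION_ID)
print(json.dumps(policy, indent=2))
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) is a common rollout pattern: it surfaces which agent sign-ins the policy would have blocked before you flip it to `enabled`.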

Use Microsoft Defender XDR to discover AI agents operating in your environment, assess the blast radius of each agent identity, and analyze attack paths that could lead to unauthorized data or resource access.

Configure Microsoft Defender for Cloud Apps to provide runtime protection for Copilot Studio agents. Enable protection under Settings for AI in the Microsoft Defender portal, coordinate with Power Platform admins on App ID configuration, and verify that agent inventory, alerts, and advanced hunting data appear in Microsoft Defender XDR.

Use AI Gateway in Microsoft Foundry to secure and govern AI model traffic. Examine the gateway architecture, create and configure a gateway instance with security controls, and apply access restrictions and monitoring to enforce policy and detect misuse.

Microsoft Foundry guardrails help secure AI workloads by applying configurable safety controls that evaluate both prompts and responses. You'll learn how to understand built-in safety models, test and refine guardrails, create blocklists, configure content filters, and validate that protections work as intended. These capabilities help organizations prevent unsafe or policy-violating interactions, protect sensitive data, and maintain trust in AI-assisted applications.

Microsoft Defender for Cloud helps secure AI workloads by combining discovery, posture management, and runtime protection in one platform. You'll learn how to enable the AI workloads plan, review insights in the Data and AI security dashboard, assess posture using Cloud Security Posture Management (CSPM), detect runtime threats with Cloud Workload Protection (CWP), and investigate incidents in Microsoft Defender XDR. These capabilities work together to identify configuration gaps, detect suspicious behavior, and provide end-to-end visibility across your AI environments.

Enable and configure the Defender for AI Services plan in Microsoft Defender for Cloud to detect threats targeting Azure AI services workloads. Then configure plan components and monitor AI security posture using the Data and AI security dashboard.
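Defender for Cloud plans are toggled at the subscription scope through the `Microsoft.Security/pricings` ARM resource. The sketch below builds the request URL and body for enabling a plan; the plan name `AI` and the `api-version` are assumptions to verify against the current REST reference, and the subscription ID is a placeholder.

```python
# Sketch of the ARM request that enables a Defender for Cloud plan at
# subscription scope. PLAN_NAME and API_VERSION are assumptions; confirm
# both against the Microsoft.Security/pricings REST API reference.
SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
PLAN_NAME = "AI"                       # assumed name for the AI services plan
API_VERSION = "2024-01-01"             # assumed; verify before use


def pricing_request(subscription_id: str, plan: str) -> tuple[str, dict]:
    """Build the PUT URL and body that set a plan to the Standard tier."""
    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/pricings/{plan}"
        f"?api-version={API_VERSION}"
    )
    # "Standard" enables the plan; "Free" returns it to the free tier.
    body = {"properties": {"pricingTier": "Standard"}}
    return url, body


url, body = pricing_request(SUBSCRIPTION_ID, PLAN_NAME)
print(url)
print(body)
```

Sending this as an authenticated PUT (for example with the Azure SDK or `az rest`) requires Security Admin or Owner rights on the subscription; the portal flow described in the module performs the same operation behind the scenes.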

Use Microsoft Agent 365 to govern AI agents in your Microsoft 365 environment. Enable the Agent 365 management interface, register agents and apply access controls, and monitor agent activity and usage to enforce your organization's AI governance policies.