Enable threat protection for AI services

Threat protection for AI services in Microsoft Defender for Cloud protects AI services on an Azure subscription by providing insights into threats that might affect your generative AI applications.

Prerequisites

Enable threat protection for AI services

To enable threat protection for AI services on your subscription, follow these steps (a programmatic sketch appears after the list):

  1. Sign in to the Azure portal.

  2. Search for and select Microsoft Defender for Cloud.

  3. In the Defender for Cloud menu, select Environment settings.

  4. Select the relevant Azure subscription.

  5. On the Defender plans page, toggle AI services to On.

    Screenshot that shows you how to toggle threat protection for AI services to on.
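
If you manage plans at scale, you can also enable the plan outside the portal. The following is a minimal sketch that calls the Microsoft.Security pricings REST API, assuming the AI services plan uses the pricing name AI and the 2023-01-01 API version; the subscription ID is a placeholder.

```python
# Minimal sketch: enable the Defender for AI services plan by setting its pricing tier
# to Standard. The pricing name "AI" and the API version are assumptions; verify them
# in the Microsoft.Security/pricings REST API reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/AI"  # assumed pricing name for the plan
)
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# "Standard" turns the plan on; "Free" turns it off.
body = {"properties": {"pricingTier": "Standard"}}

response = requests.put(url, params={"api-version": "2023-01-01"}, headers=headers, json=body)
response.raise_for_status()
print("Plan tier:", response.json()["properties"]["pricingTier"])
```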

Enable user prompt evidence

With the AI services threat protection plan enabled, you can control whether alerts include suspicious segments taken directly from your users' prompts or from the model responses of your AI applications or resources. Enabling user prompt evidence helps you triage and classify alerts and understand your users' intentions.

User prompt evidence consists of prompts and model responses; both are considered your data. Evidence is available through the Azure portal, the Defender portal, and any attached partner integrations. To enable user prompt evidence in the Azure portal, follow these steps (a scripted sketch appears after the list):

  1. Sign in to the Azure portal.

  2. Search for and select Microsoft Defender for Cloud.

  3. In the Defender for Cloud menu, select Environment settings.

  4. Select the relevant Azure subscription.

  5. Locate AI services and select Settings.

    Screenshot that shows where the settings button is located on the Plans screen.

  6. Toggle Enable user prompt evidence to On.

    Screenshot that shows you how to toggle user prompt evidence to on.

  7. Select Continue.
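
If you prefer to script this setting, the sketch below updates the plan's extensions through the same pricings REST API. The extension name AIPromptEvidence, like the pricing name AI, is an assumption; confirm both in the current Microsoft.Security/pricings reference before relying on them.

```python
# Minimal sketch: turn on user prompt evidence by enabling an extension on the AI plan.
# "AIPromptEvidence" and the pricing name "AI" are assumed names; confirm them in the
# Microsoft.Security/pricings REST API reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/AI"
)
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

body = {
    "properties": {
        "pricingTier": "Standard",  # the plan itself must remain enabled
        "extensions": [
            # isEnabled is a string enum in this API, not a boolean.
            {"name": "AIPromptEvidence", "isEnabled": "True"}  # assumed extension name
        ],
    }
}

response = requests.put(url, params={"api-version": "2023-01-01"}, headers=headers, json=body)
response.raise_for_status()
```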

Enable data security for Azure AI with Microsoft Purview

Note

This feature requires a Microsoft Purview license, which isn't included with Microsoft Defender for Cloud's Defender for AI Services plan.

To get started with Microsoft Purview DSPM for AI, see Set up Microsoft Purview DSPM for AI.

Enable Microsoft Purview to access, process, and store prompt and response data—including associated metadata—from Azure AI Services. This integration supports key data security and compliance scenarios such as:

  • Sensitive information type (SIT) classification

  • Analytics and Reporting through Microsoft Purview DSPM for AI

  • Insider Risk Management

  • Communication Compliance

  • Microsoft Purview Audit

  • Data Lifecycle Management

  • eDiscovery

This capability helps your organization manage and monitor AI-generated data in alignment with enterprise policies and regulatory requirements.

Note

Data security for Azure AI Services interactions is supported only for API calls that use Microsoft Entra ID authentication with a user-context token, or for API calls that explicitly include user context. To learn more, see Gain end-user context for Azure AI API calls.
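
As a rough illustration of explicitly including user context, the sketch below uses the openai Python package's extra_body hook to attach a user security context to an Azure OpenAI chat completion. The user_security_context field and its keys are assumptions; the linked article defines the exact shape expected by Defender for Cloud.

```python
# Minimal sketch: include end-user context in an Azure OpenAI call so alerts and data
# security signals can be tied to a specific user. The "user_security_context" field
# and its keys are assumed; confirm them in "Gain end-user context for Azure AI API calls".
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder endpoint
    api_version="2024-02-01",
    api_key="<your-api-key>",  # or authenticate with Microsoft Entra ID instead
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
    # extra_body injects additional JSON properties into the request payload.
    extra_body={
        "user_security_context": {  # assumed field name
            "end_user_id": "user-1234",
            "source_ip": "203.0.113.7",
            "application_name": "contoso-chat",
        }
    },
)
print(response.choices[0].message.content)
```

To turn on the Purview integration in the Azure portal: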

  1. Sign in to the Azure portal.

  2. Search for and select Microsoft Defender for Cloud.

  3. In the Defender for Cloud menu, select Environment settings.

  4. Select the relevant Azure subscription.

  5. Locate AI services and select Settings.

  6. Toggle Enable data security for AI interactions to On.  

    Screenshot that shows where the toggle for Enable data security for AI interactions is located.

  7. Select Continue.
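
To confirm that the plan and the settings you toggled took effect, you can read the plan configuration back. The following is a minimal sketch, again assuming the pricing name AI.

```python
# Minimal sketch: read the AI services plan configuration and list its extensions.
# The pricing name "AI" is an assumption; adjust it if your environment differs.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<your-subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/AI"
)

response = requests.get(
    url,
    params={"api-version": "2023-01-01"},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()

props = response.json()["properties"]
print("Tier:", props.get("pricingTier"))
for ext in props.get("extensions", []):
    print("Extension:", ext.get("name"), "enabled:", ext.get("isEnabled"))
```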