Detect and manage risky interactions in AI apps


Communication Compliance in Microsoft Purview can help your organization detect and respond to inappropriate or risky interactions with generative AI tools. This includes prompts and responses entered into Microsoft 365 Copilot, Copilots built with Microsoft Copilot Studio, and other connected AI apps. If you oversee responsible AI use or protect sensitive information, you can use policies to identify potential risks and review flagged interactions.

What can be detected

Communication Compliance supports detection across several types of AI experiences:

  • Microsoft Copilot experiences: Includes prompts and responses in Microsoft 365 Copilot, a built-in AI assistant integrated into Microsoft 365 apps like Teams, Outlook, and Word. These interactions can occur across the following apps:

    • Copilot in Excel
    • Copilot in Forms
    • Copilot in Loop
    • Microsoft 365 Chat (in Bing and Teams)
    • Copilot in OneNote
    • Copilot in Outlook
    • Copilot in Planner
    • Copilot in PowerPoint
    • Copilot in Stream
    • Copilot in Teams (chats, channels, and meetings)
    • Copilot in Whiteboard
    • Copilot in Word
  • Enterprise AI apps: Covers Copilot Studio, Microsoft Security Copilot, Copilot in Fabric, and other AI tools connected to your organization through Microsoft Entra or Microsoft Purview Data Map.

  • Other AI apps: Includes browser-based or non-Microsoft AI tools accessed by users in your organization.

To detect non-Microsoft 365 Copilot interactions (Enterprise AI apps and Other AI apps), your organization must enable pay-as-you-go billing.

When a policy detects a prompt or response entered into a generative AI tool, it appears in the Pending tab just like any other communication compliance policy match. Here's how you can recognize AI-related items:

  • The Subject column shows [Copilot] for Microsoft Copilot matches or [AI app] for others.
  • The Sender is listed as Copilot, Connected AI app, or Cloud AI app, depending on the source.
  • The Recipient is the user who entered the prompt or received the response.
  • The full message text appears, showing what triggered the match.
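As a sketch, the Subject conventions above could be applied programmatically if you work with exported match records. The record fields below are hypothetical and for illustration only; they don't reflect an actual Communication Compliance export schema:

```python
# Hypothetical flagged-item records; the "subject" and "sender" field names
# are illustrative, not an actual Communication Compliance export format.
def classify_ai_match(item: dict) -> str:
    """Classify a policy match using the Subject tag conventions above."""
    subject = item.get("subject", "")
    if subject.startswith("[Copilot]"):
        return "Microsoft Copilot"
    if subject.startswith("[AI app]"):
        return "Other AI source"
    return "Non-AI communication"

matches = [
    {"subject": "[Copilot] Summarize Q3 report", "sender": "Copilot"},
    {"subject": "[AI app] Draft a contract clause", "sender": "Cloud AI app"},
    {"subject": "RE: Budget review", "sender": "alex@contoso.com"},
]
labels = [classify_ai_match(m) for m in matches]
```

A filter like this could help you separate AI-related matches from ordinary communications when triaging a large export.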

Configure policies to detect AI interactions

You can configure communication compliance policies to detect prompts and responses in both Copilot and non-Copilot AI apps. Start with a built-in template, or add AI locations to any policy.

Use the Microsoft Copilot policy template

To get started quickly, use the built-in Detect Microsoft Copilot interactions template:

  1. Go to the Communication Compliance solution in the Microsoft Purview portal.

  2. Select Policies, then Create policy.

  3. Choose Detect Microsoft Copilot interactions.

  4. Add users and reviewers, then choose Create policy or Customize policy to adjust the settings.

    Screenshot showing the Detect Microsoft Copilot interactions policy template.

Add AI locations to a policy

You can include AI activity in any communication compliance policy, whether you're creating a new one or editing an existing one.

To add AI coverage:

  1. In the policy creation or edit workflow, go to the Choose locations to detect communications step.

  2. Under the Generative AI section, select one or more AI sources:

    • Microsoft Copilot experiences – Built-in and custom Copilot experiences
    • Enterprise AI apps – Non-Copilot AI apps connected to your organization using Microsoft Entra or data connectors
    • Other AI apps – Apps detected through browser activity, categorized as "Generative AI" in Microsoft Defender for Cloud Apps
  3. Continue through the workflow and select Save when you're done.

    Screenshot showing Generative AI options in a Communication Compliance policy.

Note

Pay-as-you-go billing is required to detect activity from Enterprise AI apps and Other AI apps. Microsoft Copilot experiences are excluded from billing.
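The billing rule in this note can be expressed as a simple check. This is a minimal sketch for planning purposes, using the AI source names from the policy locations above:

```python
# AI sources that require pay-as-you-go billing, per the note above.
# Microsoft Copilot experiences are excluded from billing.
PAYG_SOURCES = {"Enterprise AI apps", "Other AI apps"}

def requires_payg_billing(source: str) -> bool:
    """Return True if detecting this AI source requires pay-as-you-go billing."""
    return source in PAYG_SOURCES

billable = [s for s in ["Microsoft Copilot experiences",
                        "Enterprise AI apps",
                        "Other AI apps"] if requires_payg_billing(s)]
```

A check like this can be useful when deciding which locations to include in a policy before your organization has enabled pay-as-you-go billing.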

Review all AI interactions

To understand how people in your organization are using generative AI tools, you can start with a broad policy:

  • Include all AI locations (Copilot experiences, Enterprise AI apps, Other AI apps)
  • Set the Review percentage to 100%
  • Leave all conditions blank

Note

This approach might result in a high number of detected messages, especially in large organizations. You can refine the policy later by adding conditions or reducing the review percentage.
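To gauge reviewer workload before settling on a review percentage, you can estimate how many detected messages a policy would surface. The interaction volumes below are made-up numbers for illustration:

```python
def expected_review_items(daily_interactions: int, review_percentage: float) -> int:
    """Estimate how many detected messages reviewers see per day."""
    return round(daily_interactions * review_percentage / 100)

# Hypothetical volume: 5,000 detected AI interactions per day.
broad = expected_review_items(5_000, 100)   # broad policy: every detected message
refined = expected_review_items(5_000, 10)  # refined policy: sampled subset
```

Running numbers like these can help you decide when to move from the broad, 100% starting point to a refined policy with conditions or a lower review percentage.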

Investigate and remediate AI matches

You can review and take action on AI-related messages the same way you would for any policy match:

  • Tag as compliant, noncompliant, or questionable
  • Notify the user
  • Escalate for further review
  • Resolve or mark as misclassified
  • Download or export message details
  • Create an eDiscovery case if further investigation is needed

Tip

AI matches are also included in communication compliance reports and audits. For more, see Use communication compliance reports and audits.

Permissions required

To view and manage AI matches, users need:

  • Membership in the Communication Compliance, Communication Compliance Investigators, or Communication Compliance Analysts role group
  • Reviewer permissions in the policy

Including AI interactions in your communication compliance policies can help you identify and address potential risks associated with generative AI tools. Whether you're concerned about sensitive data being entered into prompts or evaluating how AI tools are used in regulated environments, these policy options give you a way to bring AI activity into scope. You can start broad and refine your approach as you learn more about the types of interactions that might require attention.