Secure data for AI apps using Data Security Posture Management (DSPM) for AI
As organizations adopt AI, data oversharing, where sensitive information is shared beyond its intended audience, and data leakage are top concerns for security leaders. Microsoft Purview Data Security Posture Management (DSPM) for AI, formerly known as Microsoft Purview AI Hub, is designed to help address these concerns.
DSPM for AI offers clear visibility into where and how data oversharing occurs by helping IT and security teams proactively discover and manage data risks, such as sensitive data in user prompts. It also provides recommended actions and insights during incidents, helping foster a more secure and compliant AI environment.
The following screenshot displays the Data Security Posture Management for AI dashboard within the Microsoft Purview portal. Within this dashboard, admins can discover data risks in Microsoft 365 Copilot and other AI applications, view recommendations to prevent oversharing, and track Microsoft 365 Copilot interactions over time.

Strengthening data security and minimizing exposure to data risks for sensitive information is often challenging for organizations in the modern workplace. The increasing complexity of data, the variety of data sources and platforms, limited visibility into sensitive data, and fragmented security solutions can pose significant challenges for administrators and data security professionals. Adding multicloud platforms and generative AI applications to these areas makes it even more difficult to assess data security coverage and to correlate insights from user data points.
To help discover and mitigate data risks, organizations must address the following questions:
- What is my sensitive data?
- Where is it located?
- What data is currently unprotected?
- How is unprotected sensitive data being handled and accessed?
- How can I lower the risk and help secure unprotected sensitive data?
Microsoft Purview Data Security Posture Management (DSPM) enables organizations to quickly and easily monitor cross-cloud data and user risk through dynamic reports and trend analysis. DSPM for AI processes and correlates signals across other Microsoft Purview data security, risk, and compliance solutions. In doing so, it helps you identify vulnerabilities involving unprotected data and quickly take action to improve your data security posture and minimize risk. DSPM for AI provides:
- Data security recommendations. Gain insights into your data security posture and get recommendations for creating insider risk management and data loss prevention (DLP) policies to help protect sensitive data and to close data security gaps. For example, some recommendations may include creating policies to prevent users from printing sensitive files or to prevent users from copying sensitive files to other network locations.
- Data security analytic trends and reports. Track your organization's data security posture over time with reports summarizing sensitivity label usage, DLP policy coverage, changes in risky user behavior, and more.
- Microsoft Security Copilot. Use Security Copilot to help you investigate alerts, identify risk patterns, and pinpoint the top data security risks in your organization.
Integration with Microsoft Purview solutions
DSPM processes and correlates data state, signals, and user activities based on the current configuration of other data security and risk and compliance solutions in Microsoft Purview. For optimal coverage and deeper insights, consider using the features and capabilities of the following solutions:
- Data loss prevention (DLP). Microsoft Purview Data Loss Prevention policies help you prevent users from inappropriately sharing sensitive data. DLP detects sensitive information using content analysis that includes keyword matching, regular expression evaluation, internal function validation, machine learning algorithms, and more. A conceptual sketch of this pattern-plus-validation approach follows this list.
Depending on specific recommendations identified for the data, you can choose to quickly create an applicable DLP policy directly in the DSPM (preview) workflow.
- Information protection. Microsoft Purview Information Protection provides a framework, process, and capabilities you can use to protect sensitive data across clouds, apps, and devices. Organizations that use sensitivity labels, trainable classifiers, and sensitive information types can define and apply protection policies to sensitive data.
- Insider risk management. Microsoft Purview Insider Risk Management uses the full breadth of built-in service and third-party indicators to help you quickly identify, triage, and act on potentially risky activity by users in your organization. By using logs from Microsoft 365 and Microsoft Graph, insider risk management allows you to define specific policies to identify risk indicators. After identifying the risks, you can take action to mitigate them and, if necessary, open investigation cases and take appropriate legal action. A sketch for listing recent security alerts through Microsoft Graph also follows this list.
Depending on specific recommendations identified for the data, you can choose to quickly create an applicable insider risk management policy directly in the DSPM workflow.
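To make the DLP detection techniques described above more concrete, here's a minimal Python sketch of the general pattern-plus-validation approach: a regular expression finds candidate card-like numbers, and a Luhn checksum acts as the internal function validation step that filters out false positives. This is a conceptual illustration only; it isn't how Purview DLP or its built-in sensitive information types are implemented.

```python
import re

# Matches 13-16 digit sequences that look like payment card numbers,
# allowing spaces or dashes between digit groups.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Internal function validation: the Luhn checksum used by payment cards."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return len(digits) >= 13 and checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Two-stage filter: regex candidates first, checksum validation second."""
    return [c.strip() for c in CARD_PATTERN.findall(text) if luhn_valid(c)]

if __name__ == "__main__":
    prompt = "Customer paid with 4111 1111 1111 1111, order #1234567890123."
    print(find_card_numbers(prompt))  # ['4111 1111 1111 1111']
```

In Purview itself, you define or reuse sensitive information types rather than writing detection code; the sketch only illustrates why a checksum or other validation step reduces false positives from pattern matching alone.

Both DLP and insider risk management surface their findings as alerts that you triage in the portal. If you also want to review recent security alerts programmatically, one hedged starting point is the Microsoft Graph security alerts API (alerts_v2), sketched below. It assumes an Entra ID app registration with the SecurityAlert.Read.All application permission and placeholder tenant, client, and secret values; whether Purview DLP or insider risk alerts appear in this feed depends on your tenant's configuration, so verify against current Microsoft Graph documentation.

```python
import requests
from msal import ConfidentialClientApplication

# Placeholder values -- replace with your own Entra ID app registration.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
GRAPH = "https://graph.microsoft.com/v1.0"

def get_token() -> str:
    """Acquire an app-only Microsoft Graph token (client credentials flow)."""
    app = ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "Token acquisition failed"))
    return result["access_token"]

def list_recent_alerts(top: int = 25) -> list[dict]:
    """List recent security alerts; requires the SecurityAlert.Read.All permission."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    resp = requests.get(f"{GRAPH}/security/alerts_v2", headers=headers, params={"$top": top})
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    for alert in list_recent_alerts():
        # serviceSource indicates which product raised the alert.
        print(alert.get("createdDateTime"), alert.get("serviceSource"), alert.get("title"))
```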
DSPM for AI workflow
The Microsoft Purview DSPM for AI workflow helps you investigate and take action to address potential security concerns with unprotected data across your organization.

The following tasks are outlined in the prior graphic:
- Opt in to analytics processing. To get started with DSPM for AI, you must enable and opt in to:
  - Insider risk management analytics
  - DLP analytics
  - Analytics processing in DSPM for AI to scan for unprotected data in your organization
- Evaluate insights and take action. After the automated analytics processing is completed, you can evaluate the insights created by DSPM to help mitigate risks for unprotected data.
- Actions. Perform the following actions to help mitigate risks for unprotected data:
  - Investigate with Security Copilot. Use built-in and custom prompts with Security Copilot to help identify specific areas of risk.
  - Create policies with recommendations. Use recommendations to quickly create insider risk management and DLP policies to help mitigate data security risks for unprotected data assets.
  - Track posture with analytic trends and reports. Use analytic trends and reports to view your posture over time and for data locations across your organization.
Simulation - Discover potential data security risks in Microsoft 365 Copilot interactions using DSPM for AI
This simulation displays the Microsoft Purview portal in a browser tab for a fictitious company named Woodgrove Bank. The goal of this simulation is to show you how DSPM for AI can proactively discover data risks and provide recommended actions, which help foster a more secure and compliant AI environment.
Start the simulation: The previous simulation exercises used the Microsoft 365 admin center. This simulation uses the new Microsoft Purview portal, so you must run a different simulation application to complete this exercise. Right-click the following link and select Open link in new tab (so that the simulation doesn't replace this unit in your current browser tab): Start the Microsoft Purview simulation.
Note
This exercise utilizes a simulated Microsoft 365 tenant for Woodgrove Bank. For this exercise, you work in the DSPM for AI solution. While other solutions appear in the simulation, if you select a page, setting, or feature that isn't programmed, the following message is displayed at the top of the screen: This feature is not available within the simulation.
In the Microsoft Purview portal, select Solutions in the navigation pane.
In the Solutions menu that appears, select DSPM for AI.
Note how DSPM for AI appears in the navigation pane, and a new DSPM for AI navigation pane appears next to the Microsoft Purview navigation pane. By default, the Overview tab is selected.
On the Data Security Posture Management for AI Overview page, begin by deploying DSPM for AI to secure your AI apps. In the Get Started section, note the activities required to deploy DSPM for AI. Start by selecting Activate Microsoft Purview Audit.
In the Activate Microsoft Purview Audit pane that appears, review the description of this activity and then select the Activate Purview Audit button.
Once the Microsoft Purview Audit feature is activated (see the Activated response and check mark below the detail pane heading), select the X at the top of the pane to close.
Repeat this process for each of the three remaining tasks in the Get Started section.
In the DSPM for AI navigation pane, select Recommendations.
On the Recommendations page, review the recommendations that are under the Not Started section.
For the sake of time, we selected only a few features that you can enable under the Not Started section for this simulation. Create the following data discovery policies to discover sensitive information in AI interactions. To do so, select each of the following features to open its policy detail pane, review the policies associated with this feature, select the Create policies button, and then select X to close the pane:
- Fortify your data security. Set up protection policies to manage your data security risks with AI apps.
- Detect unethical behavior in AI apps. Set up a policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot and ChatGPT for Enterprise across your organization.
Note how these two features appear in the Completed section on the Recommendations page. Feel free to select any of the other features in the Not Started section to see what they entail. However, they aren't programmed in this simulation, so an error occurs if you attempt to complete the action associated with their detail pane. Some of these features create policies, just as you did in the prior step. Others suggest tasks you should complete to strengthen your security controls. Once you complete the tasks in that feature, you can mark it as Completed to show that it's deployed.
In the DSPM for AI navigation pane, select Policies.
Once you create the policies from the Recommendations page, you can navigate to this Policies page to review and manage all the policies you created across your organization to discover and safeguard AI activity in one centralized place. This Policies page also enables you to edit the policies and investigate alerts associated with those policies.
Warning
Policies that didn't originate from the Recommendations tab can also appear in the Policies tab. This situation occurs when DSPM for AI identifies them as policies to secure and govern all AI apps.
On the Policies page, review the policies that you created back on the Recommendations page. When you select a policy, its detail pane appears. Some policies allow you to edit them and investigate their alerts. Open some of the policies and experiment with the edit and investigate alert buttons.
Once you finish reviewing the features on the Policies page, select Reports in the DSPM for AI navigation pane.
The Reports page allows you to run reports that display overall AI activity, sensitive AI activity, unethical AI activity, and so on. You can filter the report by the options that appear at the top of the page, where the default filter is Microsoft Copilot Experiences.
For each report, you can select its View details button to see the detailed activities in the Activity Explorer tool. The features in the Activity Explorer tool aren't programmed in this simulation, but in a live tenant, you can use the available filters to view activities from Microsoft Copilot experiences based on Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. You can then drill down into each activity to view its details, including the ability to view prompts and responses with the right permissions.
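Activity Explorer surfaces Microsoft Copilot interaction records that are captured in the Microsoft Purview audit log. If you want to retrieve similar records outside the portal, one option is the Microsoft Graph Audit Log Query API, which was in beta at the time of writing. The Python sketch below assumes that beta endpoint, an app registration with the AuditLogsQuery.Read.All application permission, and a copilotInteraction record-type filter; treat the endpoint, payload fields, and status values as assumptions to confirm against current Graph documentation before relying on them.

```python
import time
from datetime import datetime, timedelta, timezone

import requests
from msal import ConfidentialClientApplication

# Placeholder values -- replace with your own Entra ID app registration.
TENANT_ID, CLIENT_ID, CLIENT_SECRET = "<tenant-id>", "<client-id>", "<client-secret>"
GRAPH_BETA = "https://graph.microsoft.com/beta"

def get_token() -> str:
    """App-only Microsoft Graph token via the client credentials flow."""
    app = ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"]
    )["access_token"]

def query_copilot_interactions(days: int = 7) -> list[dict]:
    """Create an audit log query for Copilot interaction records and poll for results."""
    headers = {"Authorization": f"Bearer {get_token()}"}
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)

    # Submit the query; the record-type value is an assumption to verify.
    body = {
        "displayName": f"Copilot interactions - last {days} days",
        "filterStartDateTime": start.isoformat(timespec="seconds"),
        "filterEndDateTime": end.isoformat(timespec="seconds"),
        "recordTypeFilters": ["copilotInteraction"],
    }
    resp = requests.post(f"{GRAPH_BETA}/security/auditLog/queries", headers=headers, json=body)
    resp.raise_for_status()
    query_id = resp.json()["id"]

    # Audit log queries run asynchronously; poll until the query completes.
    while True:
        status = requests.get(
            f"{GRAPH_BETA}/security/auditLog/queries/{query_id}", headers=headers
        ).json().get("status")
        if status in ("succeeded", "failed", "cancelled"):
            break
        time.sleep(30)

    records = requests.get(
        f"{GRAPH_BETA}/security/auditLog/queries/{query_id}/records", headers=headers
    )
    records.raise_for_status()
    return records.json().get("value", [])

if __name__ == "__main__":
    for record in query_copilot_interactions():
        print(record.get("createdDateTime"), record.get("userPrincipalName"), record.get("operation"))
```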
Review as many of the reports as you wish. After reviewing a report in Activity Explorer, select the Reports tab in the DSPM for AI navigation pane to return to the Reports page.
You can finish this simulation by selecting the Data risk assessments tab in the DSPM for AI navigation pane. Review the assessment information provided.
On the Data risk assessments page, under the Custom assessments (preview) section, select one of the assessments that has already run. On the detail page that appears for that assessment, review the results from the assessment. When you're done, select Data risk assessments in the DSPM for AI navigation pane to return to the Data risk assessments page.
If you want, select +Create custom assessment. Doing so initiates the Create custom assessment wizard.
On the Basic details page, enter a name for the assessment and select Next.
On the Add users page, the All option is selected by default, so select Next.
On the Add data sources to assess page, all sites within SharePoint are selected by default, so select Next.
On the Review and run the data assessment scan page, select Save and run.
On the Data assessment successfully created page, select Done.
Back on the Data risk assessments page, under the Custom assessments (preview) section, note the new assessment that you created, which should be in progress.