Shadow AI presents a significant challenge for organizations: employees use AI tools without proper oversight. This can lead to risks including data breaches, compliance violations, and misuse of sensitive information.
While these tools can enhance productivity, their unregulated use can undermine security protocols. Organizations need clear guidelines and governance frameworks to manage the adoption of AI technologies effectively. With these in place, you can harness the benefits of AI while mitigating the associated risks.
This deployment blueprint provides a recommended approach to preventing data leakage to shadow AI using Microsoft Purview, Microsoft Defender for Cloud Apps, Microsoft Entra, and Microsoft Intune.
The blueprint breaks the deployment into four phases:

1. Discover AI apps
2. Block user access to unsanctioned AI apps
3. Block sensitive data going to sanctioned AI apps
4. Govern data sent to AI apps in Microsoft Edge
The blueprint provides:

- An overview of what shadow AI is and its risks
- A story of data leakage via AI
- A recommended, staged approach to protect your organization from shadow AI
- Detailed guidance for using the tools in Microsoft Defender for Cloud Apps, Microsoft Entra, Microsoft Intune, and Microsoft Purview to prevent data leakage to shadow AI
Download the blueprint and documentation
| Deployment model | Description |
|---|---|
| | Use this deployment model to assist organizations in identifying and preventing data leakage to shadow AI. This model includes |