Responsible AI app innovation can't proliferate without effective data security and governance built seamlessly into how developers work. Enterprise customers demand AI applications that are secure by design and compliant with internal and external regulations. Failure to meet these expectations can result in:
Rejection during security and compliance reviews
Risk of data leaks or unethical AI behavior
Loss of customer trust and adoption
As AI applications increasingly handle sensitive enterprise data, developers need to shift left in their app design and development process and embed security and compliance considerations throughout the application lifecycle. Microsoft Purview APIs (in Public Preview) enable Azure AI Foundry developers, and AI developers on other platforms, to integrate enterprise-grade data security and compliance controls into custom AI apps and agents across any platform and model, for the following outcomes:
- Govern AI runtime data through:
  - Real-time analytics for sensitive data usage, risky behaviors, and unethical AI interactions
  - Auditing for traceability
  - Communication Compliance to detect harmful or unauthorized content
  - Data Lifecycle Management and eDiscovery for legal and regulatory needs
- Protect against data leaks and insider risks
- Prevent data oversharing by honoring sensitivity labels and supporting label inheritance from grounding data sources
Govern AI runtime data
Azure AI Foundry apps can integrate with Microsoft Purview in two ways:
SDK-Based Integration: Foundry developers can use Microsoft Purview APIs to programmatically send prompt and response data from their AI apps to Microsoft Purview.
Native Integration with Azure AI Foundry: Microsoft Purview is embedded directly into Azure AI Foundry to support audit and related governance outcomes. Azure admins can turn on this setting for any given Azure subscription; data from all Azure AI-based applications running in that subscription is then sent to Microsoft Purview to support governance and compliance outcomes.
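The SDK-based path above amounts to packaging each prompt/response exchange and posting it to a Purview ingestion endpoint. The sketch below shows the general shape only; the endpoint URL, payload fields, and bearer-token auth scheme are illustrative assumptions, not the actual Public Preview API contract, so consult the API reference for the real schema.

```python
import json
import urllib.request

# Placeholder endpoint -- NOT the real Microsoft Purview ingestion URL.
PURVIEW_INGEST_URL = "https://example.contoso.com/purview/ai-interactions"

def build_interaction_record(app_name, user_id, prompt, response):
    """Package one prompt/response exchange for governance ingestion.
    Field names here are illustrative assumptions."""
    return {
        "application": app_name,
        "userId": user_id,
        "interaction": {"prompt": prompt, "response": response},
    }

def send_to_purview(record, access_token):
    """POST the record; assumes a bearer-token auth scheme (an assumption)."""
    req = urllib.request.Request(
        PURVIEW_INGEST_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

In a real app, `send_to_purview` would be called once per LLM exchange, after the response is generated but before (or alongside) returning it to the user.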
APIs
Protect against data leaks and insider risks
SDK-Based Integration: Today, Foundry developers can use Microsoft Purview APIs to enforce Microsoft Purview's data loss prevention (DLP) policies in their applications. The app can then adjust its behavior according to the policies set within Microsoft Purview (for example, when sensitive information is shared with large language models (LLMs), or when risky users interact with sensitive information in the AI app), preventing data loss.
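In practice this means gating each LLM call on a policy verdict. The sketch below illustrates the control flow only: `evaluate_dlp_policy` is a stand-in (its name, toy credit-card rule, and verdict shape are assumptions for illustration) for a real call to the Purview DLP evaluation API.

```python
import re

def evaluate_dlp_policy(user_id, content):
    """Stand-in for a Microsoft Purview DLP policy evaluation call.
    Toy rule for this sketch: block anything resembling a credit card number."""
    if re.search(r"\b\d{4}([ -]?\d{4}){3}\b", content):
        return {"action": "block", "reason": "Credit card number detected"}
    return {"action": "allow"}

def guarded_llm_response(user_id, prompt, call_llm):
    """Enforce the DLP verdict before the prompt ever reaches the LLM."""
    verdict = evaluate_dlp_policy(user_id, prompt)
    if verdict["action"] == "block":
        return f"Request blocked by policy: {verdict['reason']}"
    return call_llm(prompt)
```

The same gate can be applied symmetrically to the model's response before it is shown to the user.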
Native Integration with Azure AI Foundry: <Upcoming>
References
APIs
Prevent data oversharing
SDK-Based Integration: Foundry developers can integrate their AI app with Microsoft Purview APIs to honor sensitivity labels on the grounding data the LLM uses to generate responses. This prevents data oversharing: in the context of AI, users can access only the content they already have access to outside of the AI.
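Honoring labels on grounding data boils down to filtering retrieved documents by the requesting user's rights before they reach the LLM. The sketch below shows that pattern; the label names, ranking, and `user_can_access` check are illustrative assumptions, where a real app would resolve rights through the Purview label APIs.

```python
# Illustrative label ordering -- not Purview's actual label taxonomy.
LABEL_RANK = {"Public": 0, "General": 1, "Confidential": 2, "Highly Confidential": 3}

def user_can_access(user_clearance, doc_label):
    """Toy rights check: the user's clearance must meet or exceed the
    document's sensitivity label. A real app would call the Purview
    rights APIs instead."""
    return LABEL_RANK[user_clearance] >= LABEL_RANK[doc_label]

def filter_grounding_docs(docs, user_clearance):
    """Drop retrieved documents the user could not open outside the AI app,
    so the LLM never sees (and cannot leak) them."""
    return [d for d in docs if user_can_access(user_clearance, d["label"])]
```

Applied at retrieval time, this keeps the AI's effective permissions identical to the user's own.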
Native Integration with Azure AI Foundry: <Upcoming>
APIs
- List sensitivity labels
- Get a sensitivity label
- List rights
- Compute inheritance
- Compute rights and inheritance
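Label inheritance means a generated response carries the most restrictive label among its grounding sources. The sketch below mirrors that intent; the label priority list is an illustrative assumption, not Purview's actual priority scheme, and a real app would call the compute-inheritance API rather than rank labels locally.

```python
# Illustrative priority order, lowest to highest -- an assumption for this sketch.
LABEL_PRIORITY = ["Public", "General", "Confidential", "Highly Confidential"]

def compute_inherited_label(source_labels):
    """Return the highest-priority label among the grounding sources,
    or None if the response had no labeled sources."""
    if not source_labels:
        return None
    return max(source_labels, key=LABEL_PRIORITY.index)
```

The inherited label can then be attached to the response so downstream protections (encryption, DLP, retention) follow the content.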
Conclusion
Security and compliance aren't optional; they're foundational to enterprise AI adoption. By integrating Microsoft Purview across the application lifecycle, developers can build AI solutions that are secure, compliant, and enterprise-ready, earning long-term customer trust.
Start integrating with Microsoft Purview today to build secure AI apps.