When you use an Azure OpenAI model deployment with a content filter, you might want to check the results of the filtering activity. You can use that information to further adjust your filter configuration to serve your specific business needs and Responsible AI principles.
Azure AI Foundry provides a Risks & Safety monitoring dashboard for each of your deployments that uses a content filter configuration.
To access Risks & Safety monitoring, you need an Azure OpenAI resource in one of the supported Azure regions: East US, Switzerland North, France Central, Sweden Central, Canada East. You also need a model deployment that uses a content filter configuration.
To access Risks & Safety monitoring:

1. Go to Azure AI Foundry and sign in with the credentials associated with your Azure OpenAI resource, and select a project.
2. Select the Models + endpoints tab on the left, and then select your model deployment from the list.
3. On the deployment's page, select the Metrics tab at the top.
4. Select Open in Azure Monitor to view the full report in the Azure portal.
The dashboard shows content filtering data for your deployment, such as how often requests and responses are blocked and in which harm categories. Use this information to adjust your content filter configuration to further align with your business needs and Responsible AI principles.
The Potentially abusive user detection pane uses user-level abuse reporting to show information about users whose behavior has resulted in blocked content. The goal is to give you a view of the sources of harmful content so you can take responsive action and ensure the model is used responsibly.
To use Potentially abusive user detection, you must send user information (a GUID in the user field) with your Azure OpenAI API requests so that activity can be aggregated per user.
Caution
Use GUID strings to identify individual users. Do not include sensitive personal information in the user field.
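As a minimal sketch of this practice, the helper below derives a stable GUID from your application's internal user ID and attaches it as the `user` field of a chat completions request. The function names and the deployment name are hypothetical; the `user` parameter itself is part of the chat completions API.

```python
import uuid

def stable_user_guid(app_user_id: str) -> str:
    # Derive a deterministic GUID from your app's internal user ID, so the
    # `user` field never carries an email address, name, or other PII.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, app_user_id))

def build_chat_request(deployment: str, prompt: str, app_user_id: str) -> dict:
    # Keyword arguments for client.chat.completions.create(...) in the
    # openai Python SDK; the `user` field feeds abuse monitoring.
    return {
        "model": deployment,  # your Azure OpenAI deployment name
        "messages": [{"role": "user", "content": prompt}],
        "user": stable_user_guid(app_user_id),
    }
```

Because the GUID is derived deterministically, the same application user always maps to the same identifier, which lets the service aggregate that user's activity across requests.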
To protect the privacy of user information and keep you in control of data permissions, you can bring your own storage. The detailed potentially abusive user detection insights (including user GUIDs and statistics on harmful requests by category) are then stored in a compliant way and under your full control.
Potentially abusive user detection relies on the user information that customers send with their Azure OpenAI API calls, together with the request content. The report shows insights such as the detected user GUIDs and per-user statistics on harmful requests by category.
Combine this data with enriched signals to validate whether the detected users are truly abusive or not. If they are, then take responsive action such as throttling or suspending the user to ensure the responsible use of your application.
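One way to act on validated findings is a per-user gate in your application. The sketch below (the class, thresholds, and flagged/suspended sets are assumptions, not part of the Azure service; in practice you would populate them from the dashboard's insights) suspends confirmed abusers outright and rate-limits flagged users:

```python
from __future__ import annotations

import time
from collections import defaultdict

class UserAbuseGate:
    """Block suspended users and rate-limit flagged ones (a sketch;
    thresholds and user sets are assumptions, not part of the service)."""

    def __init__(self, max_requests_per_minute: int = 10):
        self.max_rpm = max_requests_per_minute
        self.suspended: set[str] = set()   # users you decided to suspend
        self.flagged: set[str] = set()     # users flagged for throttling
        self._window: dict[str, list[float]] = defaultdict(list)

    def allow(self, user_guid: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        if user_guid in self.suspended:
            return False                   # suspended: reject outright
        if user_guid not in self.flagged:
            return True                    # unflagged users pass through
        window = self._window[user_guid]
        window[:] = [t for t in window if now - t < 60]  # last 60 seconds
        if len(window) >= self.max_rpm:
            return False                   # flagged user over the limit
        window.append(now)
        return True
```

Checking `gate.allow(user_guid)` before forwarding a request to your model deployment keeps the enforcement decision in your application, where you can also log it for later review.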
Next, create or edit a content filter configuration in Azure AI Foundry.