Events
Take the Microsoft Learn Challenge
Nov 19, 11 PM - Jan 10, 11 PM
Ignite Edition - Build skills in Microsoft Azure and earn a digital badge by January 10!
Register nowThis browser is no longer supported.
Upgrade to Microsoft Edge to take advantage of the latest features, security updates, and technical support.
This article explains how you can get started with the Azure AI Content Safety service using Content Safety Studio in your browser.
Caution
Some of the sample content provided by Content Safety Studio might be offensive. Sample images are blurred by default. User discretion is advised.
Important
You must assign the Cognitive Services User role to your Azure account to use the studio experience. Go to the Azure portal, navigate to your Content Safety resource or Azure AI Services resource, and select Access Control in the left navigation bar, then select + Add role assignment, choose the Cognitive Services User role, and select the member of your account that you need to assign this role to, then review and assign. It might take few minutes for the assignment to take effect.
The Moderate text content page provides the capability for you to quickly try out text moderation.
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary Accepted/Rejected result, based on the filters you configure. Use the matrix in the Configure filters tab to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
The Use blocklist tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a Blocklist detection panel under Results. It reports any matches with the blocklist.
The Prompt Shields panel lets you try out user input risk detection. Detect User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or break the rules set in the System Message. These attacks can vary from intricate role-play to subtle subversion of the safety objective.
The service returns the risk flag and type for each sample.
For more information, see the Prompt Shields conceptual guide.
The Moderate image content page provides capability for you to quickly try out image moderation.
The service returns all the categories that were detected, with the severity level for each: 0-Safe, 2-Low, 4-Medium, 6-High. It also returns a binary Accepted/Rejected result, based on the filters you configure. Use the matrix in the Configure filters tab on the right to set your allowed/prohibited severity levels for each category. Then you can run the text again to see how the filter works.
You can use the View Code feature in either the Analyze text content or Analyze image content pages to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
The Monitor online activity panel lets you view your API usage and trends.
You can choose which Media type to monitor. You can also specify the time range that you want to check by selecting Show data for the last __.
In the Reject rate per category chart, you can also adjust the severity thresholds for each category.
You can also edit blocklists if you want to change some terms, based on the Top 10 blocked terms chart.
To view resource details such as name and pricing tier, select the Settings icon in the top-right corner of the Content Safety Studio home page and select the Resource tab. If you have other resources, you can switch resources here as well.
If you want to clean up and remove an Azure AI services resource, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Next, get started using Azure AI Content Safety through the REST APIs or a client SDK, so you can seamlessly integrate the service into your application.
Events
Take the Microsoft Learn Challenge
Nov 19, 11 PM - Jan 10, 11 PM
Ignite Edition - Build skills in Microsoft Azure and earn a digital badge by January 10!
Register nowTraining
Module
Moderate Content and Detect Harm with Azure AI Content Safety - Training
Learn how to choose and build a content moderation system with Azure AI Content Safety.
Certification
Microsoft Certified: Azure AI Engineer Associate - Certifications
Design and implement an Azure AI solution using Azure AI services, Azure AI Search, and Azure Open AI.
Documentation
Quickstart: Analyze text content - Azure AI services
Get started using Azure AI Content Safety to analyze text content for objectionable material.
What is Azure AI Content Safety? - Azure AI services
Learn how to use Content Safety to track, flag, assess, and filter inappropriate material in user-generated content.
Azure AI Content Safety documentation - Quickstarts, Tutorials, API Reference - Azure AI services
The cloud-based Azure AI Content Safety API provides developers with access to advanced algorithms for processing images and text and flagging content that is potentially offensive, risky, or otherwise undesirable.