Important
Azure Content Moderator is deprecated as of February 2024 and will be retired by February 2027. It is replaced by Azure AI Content Safety, which offers advanced AI features and enhanced performance.
Azure AI Content Safety is a comprehensive solution designed to detect harmful user-generated and AI-generated content in applications and services. It is suitable for many scenarios, such as online marketplaces, gaming companies, social messaging platforms, enterprise media companies, and K-12 education solution providers.
Azure AI Content Safety provides a robust and flexible solution for your content moderation needs. By switching from Content Moderator to Azure AI Content Safety, you can take advantage of the latest tools and technologies to ensure that your content is always moderated to your exact specifications.
Learn more about Azure AI Content Safety and explore how it can elevate your content moderation strategy.
You can use Azure Content Moderator's text moderation models to analyze text content, such as chat rooms, discussion boards, chatbots, e-commerce catalogs, and documents.
The service response includes the following information:
If the API detects any profane terms in any of the supported languages, those terms are included in the response. The response also contains their location (Index) in the original text. The ListId in the following sample JSON refers to terms found in custom term lists, if available.
"Terms": [
  {
    "Index": 118,
    "OriginalIndex": 118,
    "ListId": 0,
    "Term": "<offensive word>"
  }
]
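As an illustration, here is a minimal Python sketch (the helper name is hypothetical; the field names follow the sample JSON above) that pulls the flagged terms and their positions out of a parsed response:

```python
def extract_profanity(screen_response):
    """Return (term, index) pairs from a parsed text screening response.

    Uses the "Terms" structure shown above; returns an empty list when
    no profane terms were detected (the key is absent or null).
    """
    terms = screen_response.get("Terms") or []
    return [(t["Term"], t["Index"]) for t in terms]

# Example using the sample response shape from this article:
sample = {"Terms": [{"Index": 118, "OriginalIndex": 118,
                     "ListId": 0, "Term": "<offensive word>"}]}
print(extract_profanity(sample))  # [('<offensive word>', 118)]
```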
Note
For the language parameter, assign eng or leave it empty to see the machine-assisted classification response (preview feature). This feature supports English only.
For profanity terms detection, use the ISO 639-3 code of the supported languages listed in this article, or leave it empty.
Content Moderator's machine-assisted text classification feature supports English only and helps detect potentially undesired content. The flagged content might be assessed as inappropriate depending on context. The feature conveys the likelihood of each category and uses a trained model to identify possible abusive, derogatory, or discriminatory language. This includes slang, abbreviated words, offensive terms, and intentionally misspelled words.
The following JSON extract shows an example output:
"Classification": {
  "ReviewRecommended": true,
  "Category1": {
    "Score": 1.5113095059859916E-06
  },
  "Category2": {
    "Score": 0.12747249007225037
  },
  "Category3": {
    "Score": 0.98799997568130493
  }
}
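Because the service reports probabilities rather than verdicts, callers often apply their own cutoffs. A minimal Python sketch of that idea, using the sample scores above (the helper name and threshold values are hypothetical policy choices, not service defaults):

```python
def review_decision(classification, thresholds):
    """Apply custom per-category cutoffs to a parsed Classification block.

    Returns True when any category score meets or exceeds its cutoff.
    The cutoff values are chosen by the caller; they are not defaults
    of the service.
    """
    return any(
        classification[cat]["Score"] >= cutoff
        for cat, cutoff in thresholds.items()
        if cat in classification
    )

classification = {
    "ReviewRecommended": True,
    "Category1": {"Score": 1.5113095059859916e-06},
    "Category2": {"Score": 0.12747249007225037},
    "Category3": {"Score": 0.98799997568130493},
}
print(review_decision(classification, {"Category3": 0.5}))  # True
```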
Category1 refers to the potential presence of language that might be considered sexually explicit or adult in certain situations.
Category2 refers to the potential presence of language that might be considered sexually suggestive or mature in certain situations.
Category3 refers to the potential presence of language that might be considered offensive in certain situations.
Score is between 0 and 1. The higher the score, the higher the probability that the category might be applicable. This feature relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns to your requirements.
ReviewRecommended is either true or false depending on the internal score thresholds. Customers should assess whether to use this value or decide on custom thresholds based on their content policies.
The personal data feature detects the potential presence of this information:
The following example shows a sample response:
"pii": {
  "email": [
    {
      "detected": "abcdef@abcd.com",
      "sub_type": "Regular",
      "text": "abcdef@abcd.com",
      "index": 32
    }
  ],
  "ssn": [],
  "ipa": [
    {
      "sub_type": "IPV4",
      "text": "255.255.255.255",
      "index": 72
    }
  ],
  "phone": [
    {
      "country_code": "US",
      "text": "6657789887",
      "index": 56
    }
  ],
  "address": [
    {
      "text": "1 Microsoft Way, Redmond, WA 98052",
      "index": 89
    }
  ]
}
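A short Python sketch (the helper name is hypothetical; the field names follow the sample response above) that flattens a parsed pii block into a list of detected values, for example as input to a redaction step:

```python
def detected_pii_values(pii):
    """Collect the detected text of every PII match, across all types.

    Each type ("email", "ssn", "ipa", "phone", "address") maps to a list
    of matches; each match carries at least a "text" field, as in the
    sample response above.
    """
    return [match["text"] for matches in pii.values() for match in matches]

pii = {
    "email": [{"detected": "abcdef@abcd.com", "sub_type": "Regular",
               "text": "abcdef@abcd.com", "index": 32}],
    "ssn": [],
    "ipa": [{"sub_type": "IPV4", "text": "255.255.255.255", "index": 72}],
    "phone": [{"country_code": "US", "text": "6657789887", "index": 56}],
    "address": [{"text": "1 Microsoft Way, Redmond, WA 98052", "index": 89}],
}
print(detected_pii_values(pii))
```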
The text moderation response can optionally return the text with basic autocorrection applied.
For example, the following input text has a misspelling.
The quick brown fox jumps over the lazzy dog.
If you specify autocorrection, the response contains the corrected version of the text:
The quick brown fox jumps over the lazy dog.
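Autocorrection, PII detection, and classification are opt-in switches on the text screening request. As a rough sketch of assembling such a request URL (the endpoint placeholder and helper name are hypothetical; verify the query parameter names against the current Content Moderator API reference):

```python
from urllib.parse import urlencode

def build_screen_url(endpoint, *, autocorrect=False, pii=False,
                     classify=False, language=None, list_id=None):
    """Build a text screening request URL.

    A sketch under the assumption that the operation accepts
    autocorrect, PII, classify, language, and listId query parameters;
    check the current API reference before relying on these names.
    """
    params = {
        "autocorrect": str(autocorrect),
        "PII": str(pii),
        "classify": str(classify),
    }
    if language:
        params["language"] = language
    if list_id is not None:
        params["listId"] = list_id
    return (f"{endpoint}/contentmoderator/moderate/v1.0/"
            f"ProcessText/Screen?{urlencode(params)}")

# <your-resource> is a placeholder for your own resource name.
url = build_screen_url("https://<your-resource>.cognitiveservices.azure.com",
                       autocorrect=True, classify=True, language="eng")
print(url)
```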
While the default, global list of terms works great for most cases, you might want to screen against terms that are specific to your business needs. For example, you might want to filter out any competitive brand names from posts by users.
Note
There is a maximum limit of five term lists, and each list can contain up to 10,000 terms.
The following example shows the matching List ID:
"Terms": [
  {
    "Index": 118,
    "OriginalIndex": 118,
    "ListId": 231,
    "Term": "<offensive word>"
  }
]
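As a rough Python sketch (the helper name is hypothetical), matches from a specific custom term list can be filtered out of a parsed response by their ListId:

```python
def terms_from_list(screen_response, list_id):
    """Return the terms that matched a specific custom term list.

    Uses the "Terms" structure shown above, where ListId identifies
    the custom term list a match came from.
    """
    terms = screen_response.get("Terms") or []
    return [t["Term"] for t in terms if t.get("ListId") == list_id]

sample = {"Terms": [{"Index": 118, "OriginalIndex": 118,
                     "ListId": 231, "Term": "<offensive word>"}]}
print(terms_from_list(sample, 231))  # ['<offensive word>']
```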
Content Moderator provides a Term List API with operations for managing custom term lists. Check out the Term Lists .NET quickstart if you're familiar with Visual Studio and C#.