What is Azure Content Moderator?

Try out Azure Content Safety for the latest in AI-powered content moderation.

The Content Moderator Review APIs and online Review Tool were retired on December 31, 2021. The Moderation APIs remain available.

Azure Content Moderator is an AI service that lets you handle content that is potentially offensive, risky, or otherwise undesirable. It includes the AI-powered content moderation service, which scans text, images, and videos and applies content flags automatically.

You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.

This documentation contains the following article types:

  • Quickstarts are getting-started instructions to guide you through making requests to the service.
  • How-to guides contain instructions for using the service in more specific or customized ways.
  • Concepts provide in-depth explanations of the service functionality and features.
  • Tutorials are longer guides that show you how to use the service as a component in broader business solutions.

For a more structured approach, follow a Training module for Content Moderator.

Where it's used

The following are a few scenarios in which a software developer or team would require a content moderation service:

  • Online marketplaces that moderate product catalogs and other user-generated content.
  • Gaming companies that moderate user-generated game artifacts and chat rooms.
  • Social messaging platforms that moderate images, text, and videos added by their users.
  • Enterprise media companies that implement centralized moderation for their content.
  • K-12 education solution providers that filter out content that is inappropriate for students and educators.


You cannot use Content Moderator to detect illegal child exploitation images. However, qualified organizations can use the PhotoDNA Cloud Service to screen for this type of content.

What it includes

The Content Moderator service consists of several web service APIs available through both REST calls and a .NET SDK.

Moderation APIs

The Content Moderator service includes Moderation APIs, which check content for material that is potentially inappropriate or objectionable.

(Block diagram of the Content Moderator moderation APIs.)

The following list describes the different types of moderation APIs.

  • Text moderation: Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data.
  • Custom term lists: Scans text against a custom list of terms along with the built-in terms. Use custom lists to block or allow content according to your own content policies.
  • Image moderation: Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.
  • Custom image lists: Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.
  • Video moderation: Scans videos for adult or racy content and returns time markers for that content.
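As a concrete illustration of how the Moderation APIs are called over REST, the Python sketch below assembles (but does not send) a text moderation Screen request. The regional endpoint, the subscription key placeholder, and the helper name `build_screen_request` are assumptions for illustration only; check the Moderation API reference for the exact endpoint and query parameters your resource supports.

```python
import urllib.parse
import urllib.request

# Assumed placeholder values -- substitute your own resource endpoint and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"


def build_screen_request(text, language="eng"):
    """Assemble a Text Moderation Screen request without sending it.

    The query string asks the service to classify the text and detect
    personal data (PII) in addition to the default term screening.
    """
    params = urllib.parse.urlencode(
        {"language": language, "classify": "True", "PII": "True"}
    )
    url = f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen?{params}"
    return urllib.request.Request(
        url,
        data=text.encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "text/plain",
        },
        method="POST",
    )


req = build_screen_request("Is this a crazy email address, abcdef@abcd.com")
print(req.full_url)
```

To actually call the service, pass the prepared request to `urllib.request.urlopen(req)` with a valid key and parse the JSON response, which contains the flags the table above describes.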

Data privacy and security

As with all of the Cognitive Services, developers using the Content Moderator service should be aware of Microsoft's policies on customer data. See the Cognitive Services page on the Microsoft Trust Center to learn more.

Next steps

To get started using Content Moderator on the web portal, follow Try Content Moderator on the web. Or, complete a client library or REST API quickstart to implement the basic scenarios in code.