Cognitive Services: Azure Content Moderator API

Introduction

Azure Content Moderator enhances your ability to detect potentially offensive or unwanted images through machine-learning-based classifiers, custom blacklists, and optical character recognition (OCR).

Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your mobile application or website can then handle the flagged content in order to comply with regulations or to maintain the environment you intend for your users.

  

Users across the globe generate billions of kilobytes of content and publish it on the internet in many forms, such as text, images, videos, blog posts, reviews, and feedback. How can you filter all of this content in your application? Azure Content Moderator helps companies across different domains, for example:

  • Online selling: online shopping companies that moderate product catalogs, reviews, and other user-generated content.
  • Gaming: gaming companies that moderate user-generated game artifacts and chat rooms.
  • Social messaging: social messaging platforms that moderate images, text, and videos added by their users.
  • Media: enterprise media companies that implement centralized moderation for their content.
  • Education solutions: K-12 education solution providers that filter content that is inappropriate for students and educators.

Azure Moderation APIs 

Content Moderator also checks for possible personally identifiable information (PII). Each Text API call can contain up to 1,024 characters. The service scans images (minimum 128 pixels, maximum 4 MB in size) for adult and racy content, performs optical character recognition (OCR), and detects faces. You can also match against custom image lists and custom text lists. Each API call counts as one transaction. The web service APIs are available through both REST calls and a .NET SDK.
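
Since each Text API call is limited to 1,024 characters and every call is billed as a separate transaction, longer documents need to be split before they are submitted. The snippet below is a minimal sketch (plain Python, not part of any Azure SDK) of one way to break a long string into chunks that respect that limit; the function name and the word-boundary handling are illustrative assumptions.

```python
# Minimal sketch: split a long text into chunks of at most 1,024 characters,
# the per-call limit of the Content Moderator Text API. The chunk size and
# word-boundary handling here are illustrative, not part of any SDK.
MAX_CHARS = 1024

def split_for_moderation(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split `text` into pieces no longer than `max_chars`, preferring
    to break on whitespace so that words are not cut in half."""
    chunks = []
    while len(text) > max_chars:
        # Look for the last space inside the allowed window.
        cut = text.rfind(" ", 0, max_chars)
        if cut <= 0:               # no space found: hard cut at the limit
            cut = max_chars
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

# Each returned chunk can then be sent as one Text API call (one transaction).
```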

It also includes the human review tool, which allows human reviewers to aid the service and improve or fine-tune its moderation function.

The Azure Content Moderator APIs are published as follows.

Text Moderation API

  • Scans text for offensive content, sexually explicit or suggestive content, profanity, and personal data. 
  • Scans text against a custom list of terms in addition to the built-in terms. Use custom lists to block or allow content according to your own or your company's content policies.
  • Important: text can be at most 1,024 characters long; if it is longer, the Text API returns an error code indicating that the text exceeds the permitted length. A minimal call sketch follows this list.
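
To make the call shape concrete, here is a minimal sketch of a Text API request over REST using Python's requests library. The ProcessText/Screen path and the classify/PII query parameters follow the public Content Moderator REST reference; the endpoint host, subscription key, and sample text are placeholders you would replace with your own resource's values.

```python
import requests

# Placeholders: use your own Content Moderator resource endpoint and key.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def screen_text(text: str) -> dict:
    """Screen a piece of text (<= 1,024 characters) for profanity,
    classification scores, and PII using the Text Moderation API."""
    if len(text) > 1024:
        raise ValueError("Text API calls are limited to 1,024 characters; split the text first.")

    url = f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessText/Screen"
    params = {"classify": "True", "PII": "True", "language": "eng"}
    headers = {
        "Content-Type": "text/plain",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
    response = requests.post(url, params=params, headers=headers,
                             data=text.encode("utf-8"))
    response.raise_for_status()
    return response.json()   # contains Classification, PII, and Terms fields

# Example (illustrative):
# result = screen_text("Contact me at someone@example.com right now.")
# print(result["Classification"]["ReviewRecommended"], result.get("PII"))
```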

Image Moderation API

  • Scans images for adult or racy content, detects text in images with the Optical Character Recognition (OCR) capability, and detects faces.
  • Scans images against a custom list of images. Use custom image lists to filter out instances of commonly recurring content that you don't want to classify again.
  • Important: when using the API, images must have a minimum of 128 pixels and a maximum file size of 4 MB. If an image does not meet these requirements, the Image API returns an error code indicating that the image does not meet the size requirements. A minimal call sketch follows this list.
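
As a companion to the text sketch above, here is a minimal sketch of an Evaluate request against the Image Moderation API for an image referenced by URL. The ProcessImage/Evaluate path and the DataRepresentation body shape follow the public REST reference; the endpoint host, subscription key, and sample image URL are placeholders.

```python
import requests

# Placeholders: use your own Content Moderator resource endpoint and key.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def evaluate_image(image_url: str) -> dict:
    """Ask the Image Moderation API whether an image (referenced by URL)
    is adult or racy. The image must be at least 128 pixels and no larger
    than 4 MB, otherwise the service returns an error."""
    url = f"{ENDPOINT}/contentmoderator/moderate/v1.0/ProcessImage/Evaluate"
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    }
    body = {"DataRepresentation": "URL", "Value": image_url}
    response = requests.post(url, headers=headers, json=body)
    response.raise_for_status()
    return response.json()   # includes IsImageAdultClassified / IsImageRacyClassified

# Example (illustrative):
# scores = evaluate_image("https://example.com/sample.jpg")
# print(scores["AdultClassificationScore"], scores["RacyClassificationScore"])
```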

Video Moderation API 

  • Scans videos for adult or racy content and returns time markers for said content.

Summary

I hope you now understand what Azure Content Moderator is and the list of moderation APIs it provides. The next step is to create the Azure API resource and implement it in an application using C#. Please leave your feedback or queries in the comments box, and if you like this article, please share it with your friends.
