Content Moderator documentation
The Azure AI Content Moderator API checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable.

Important: Consider Azure AI Content Safety instead, which offers advanced AI features and enhanced performance. Azure AI Content Safety is a comprehensive solution designed to detect harmful user-generated and AI-generated content in applications and services. It suits many scenarios, including online marketplaces, gaming companies, social messaging platforms, enterprise media companies, and K-12 education solution providers.
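As a rough sketch of how the text-checking API is called, the snippet below builds a request against the Content Moderator text screening endpoint (`/contentmoderator/moderate/v1.0/ProcessText/Screen`). The resource endpoint and subscription key shown are placeholders you would replace with your own values; the request is constructed but not sent here, since sending it requires a live Azure resource.

```python
import urllib.request

def build_screen_request(endpoint: str, subscription_key: str,
                         text: str) -> urllib.request.Request:
    """Build (but do not send) a Text Moderation 'Screen' request.

    The host and key are placeholders; the path and header names follow
    the Content Moderator REST API's ProcessText/Screen operation.
    """
    url = (f"{endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen"
           "?classify=True&PII=True")  # also classify text and detect PII
    return urllib.request.Request(
        url,
        data=text.encode("utf-8"),  # raw text goes in the request body
        headers={
            "Content-Type": "text/plain",
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_screen_request(
    "https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    "<your-subscription-key>",                              # placeholder
    "Is this a crass sentence?",
)
print(req.get_method(), req.full_url)
# To actually call the service: urllib.request.urlopen(req)
```

The JSON response from a real call includes term matches, classification scores, and any detected personal data, which your application can then use to accept, reject, or route the content for human review.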