Moderate content and detect harm with Azure AI Content Safety Studio

Beginner
Developer
Azure

Learn how to choose and build a content moderation system in the Azure AI Content Safety Studio.

Note

This is a guided project module where you’ll complete an end-to-end project by following step-by-step instructions.

Learning objectives

By the end of this module, you're able to:

  • Configure filters and threshold levels to block harmful content.
  • Perform text and image moderation for harmful content.
  • Analyze and improve Precision, Recall, and F1 Score metrics.
  • Detect groundedness in a model’s output.
  • Identify and block AI-generated copyrighted content.
  • Mitigate direct and indirect prompt injections.
  • Output filter configurations to code.

Prerequisites

  • An Azure subscription - Create one for free
  • Familiarity with Azure and the Azure portal
  • Access to Azure OpenAI Services