Moderate content and detect harm in Azure AI Studio with Content Safety
Learn how to choose and build a content moderation system in Azure AI Studio.
Note
This module is a guided project module where you complete an end-to-end project by following step-by-step instructions.
Learning objectives
By the end of this module, you'll be able to:
- Configure filters and threshold levels to block harmful content.
- Perform text and image moderation for harmful content.
- Analyze and improve precision, recall, and F1 score metrics.
- Detect groundedness in a model’s output.
- Identify and block AI-generated copyrighted content.
- Mitigate direct and indirect prompt injections.
- Export filter configurations to code.
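The first objective above centers on severity thresholds. Azure AI Content Safety reports a severity level per harm category (Hate, SelfHarm, Sexual, Violence) on an even scale of 0, 2, 4, and 6, and a filter blocks content whose severity meets or exceeds the configured threshold. The sketch below illustrates that decision logic locally; the `is_blocked` helper and the example threshold values are hypothetical, not part of the service SDK.

```python
# Illustrative sketch of threshold-based blocking. The category names and
# the 0/2/4/6 severity scale match Azure AI Content Safety; the helper
# function and these particular threshold choices are assumptions.

THRESHOLDS = {"Hate": 4, "SelfHarm": 2, "Sexual": 4, "Violence": 4}

def is_blocked(severities: dict[str, int],
               thresholds: dict[str, int] = THRESHOLDS) -> bool:
    """Return True if any category's severity meets or exceeds its threshold."""
    return any(severities.get(cat, 0) >= level
               for cat, level in thresholds.items())

print(is_blocked({"Hate": 2, "Violence": 0}))  # below all thresholds -> False
print(is_blocked({"SelfHarm": 2}))             # meets SelfHarm threshold -> True
```

Lowering a threshold (for example, SelfHarm to 2 as shown) makes the filter stricter for that category, which is the trade-off you tune when balancing safety against over-blocking.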
Prerequisites
- An Azure subscription - Create one for free
- Familiarity with Azure and the Azure portal
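One objective in this module is analyzing precision, recall, and F1 score. As a refresher before you start, these metrics follow from confusion-matrix counts; the minimal sketch below (not part of the module's materials) frames them in moderation terms, where a true positive is harmful content correctly blocked.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts.

    In content moderation terms: tp = harmful content correctly blocked,
    fp = safe content wrongly blocked, fn = harmful content that slipped
    through the filter.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 8 harmful items blocked, 2 safe items wrongly blocked,
# 2 harmful items missed.
p, r, f = precision_recall_f1(tp=8, fp=2, fn=2)
print(round(p, 3), round(r, 3), round(f, 3))
```

Raising a filter's thresholds tends to raise precision (fewer false positives) at the cost of recall (more harmful content slips through); F1 balances the two.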