Moderate Content and Detect Harm with Azure AI Content Safety

Beginner
Developer
Azure

Learn how to moderate content and detect harm with Azure AI Content Safety.

Learning objectives

By the end of this module, you'll be able to:

  • Perform text and image moderation for harmful content.
  • Detect groundedness in a model's output.
  • Identify and block AI-generated copyrighted content.
  • Mitigate direct and indirect prompt injections.
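As a preview of the first objective, text moderation is exposed through a REST operation that scores input against the harm categories Hate, SelfHarm, Sexual, and Violence. The sketch below builds such a request using only the Python standard library; the endpoint and key environment-variable names and the API version are assumptions for illustration, so check the Azure AI Content Safety REST reference for current values before using it.

```python
import json
import os
import urllib.request

# Assumed configuration names (not prescribed by this module):
# CONTENT_SAFETY_ENDPOINT, e.g. https://<resource>.cognitiveservices.azure.com
# CONTENT_SAFETY_KEY, the resource's subscription key
ENDPOINT = os.environ.get(
    "CONTENT_SAFETY_ENDPOINT",
    "https://<resource>.cognitiveservices.azure.com",
)
KEY = os.environ.get("CONTENT_SAFETY_KEY", "")


def build_text_analyze_request(text: str) -> urllib.request.Request:
    """Build a text:analyze request covering the four harm categories."""
    # API version is an assumption; newer versions may exist.
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01"
    body = json.dumps({
        "text": text,
        "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_text_analyze_request("Sample text to screen for harmful content.")
# A real caller would send this request (urllib.request.urlopen) and
# block content whose returned severity exceeds a chosen threshold.
```

The response assigns each category a severity score, which your application compares against its own thresholds to decide whether to allow, flag, or block the content.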

Prerequisites