
Azure AI Content Safety Text Moderation

Azure AI Studio lets you quickly try out text moderation. The moderate text content tool considers several factors, such as the type of content, the platform's policies, and the potential effect on users. You can run moderation tests on sample content, configure filters to rerun and further refine the test results, and add specific terms you want to detect and act on to the blocklist.

In this demo, we'll show the basic functionality of running a simple test and how you can configure the filters and adjust the harm-category threshold levels to suit your needs.
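To make the threshold idea concrete, here is a minimal illustrative sketch (not the Azure SDK or service itself) of how per-category severity thresholds might decide whether content is flagged. The category names match Azure AI Content Safety's four harm categories; the `filter_result` function, the severity values, and the threshold defaults are assumptions made for illustration.

```python
# Azure AI Content Safety reports a severity score per harm category;
# this sketch models flagging content when a category's severity meets
# or exceeds its configured threshold.

HARM_CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

def filter_result(severities: dict, thresholds: dict) -> dict:
    """Return per-category flags given severity scores and thresholds.

    A category is flagged when its severity meets or exceeds the
    threshold configured for that category (default threshold: 2).
    """
    return {
        cat: severities.get(cat, 0) >= thresholds.get(cat, 2)
        for cat in HARM_CATEGORIES
    }

# Example: text with a moderate Violence score but a lenient Violence threshold.
severities = {"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4}
thresholds = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 6}

flags = filter_result(severities, thresholds)
print(flags)  # Violence severity 4 is below threshold 6, so nothing is flagged
```

Raising or lowering a category's threshold is exactly the kind of adjustment the demo walks through: a stricter platform would lower the Violence threshold so the same score gets flagged.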

Disclosure: This demo contains an AI-generated voice.

Chapters

  • 00:00 - Introduction
  • 00:44 - Test safe content
  • 01:27 - Test harmful content
  • 02:44 - Multiple risk categories
  • 03:59 - Content with misspelling
