Quickstart: Moderate text and images with Content Safety in Azure AI Studio

Note

Azure AI Studio is currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

In this quickstart, get started with the Azure AI Content Safety service in Azure AI Studio. Content Safety detects harmful user-generated and AI-generated content in applications and services.

Caution

Some of the sample content provided by Azure AI Studio might be offensive. Sample images are blurred by default. User discretion is advised.

Prerequisites

Note

This feature isn't available if you created an Azure AI hub resource together with an existing Azure OpenAI Service resource. You must create an Azure AI hub resource with Azure AI services as its provider. We're gradually rolling out this feature to all customers. If you don't see it yet, check back later.

Moderate text or images

Select one of the following tabs to get started with content safety in Azure AI Studio.

Azure AI Studio lets you quickly try out text moderation. The Moderate text content tool takes into account various factors such as the type of content, the platform's policies, and the potential effect on users. Run moderation tests on sample content. Use the Configure filters tab to rerun and further fine-tune the test results. Add specific terms to the blocklist that you want to detect and act on.

  1. Sign in to Azure AI Studio and select Explore from the top menu.

  2. Select the Content safety panel under Responsible AI.

  3. Select Try it out in the Moderate text content panel.

    Screenshot of the moderate text content tool in the Azure AI Studio explore tab.

  4. Enter text in the Test field, or select sample text from the panels on the page.

    Screenshot of the Moderate text content page.

  5. Optionally, use the sliders on the Configure filters tab to adjust the allowed or prohibited severity levels for each category.

  6. Select Run test.

The service returns all the categories that were detected, the severity level for each (0-Safe, 2-Low, 4-Medium, 6-High), and a binary Accept or Reject judgment. The result is based in part on the filters you configure.
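If you want to reproduce the same test outside the studio, the following sketch shows one way to call the Content Safety service from Python with the azure-ai-contentsafety package. The endpoint, key, and sample text are placeholders, and attribute names such as categories_analysis follow the 1.x SDK; check your installed version if they differ.

```python
# Minimal sketch: analyze a text sample with the Azure AI Content Safety Python SDK.
# Endpoint, key, and sample text are placeholders; attribute names follow the 1.x SDK.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
key = "<your-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# The service scores each detected category on the 0 (safe) to 6 (high) severity scale.
result = client.analyze_text(AnalyzeTextOptions(text="Sample text to moderate."))

for item in result.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```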

The Use blocklist tab lets you create, edit, and add a blocklist to the moderation workflow. If you have a blocklist enabled when you run the test, you get a Blocklist detection panel under Results. It reports any matches with the blocklist.
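To include a blocklist in a programmatic call, reference it by name. The sketch below assumes a blocklist named MyBlocklist already exists in your resource; the blocklist_names option and blocklists_match result field are named as in the 1.x SDK and might differ in older preview versions.

```python
# Sketch: run text analysis against an existing blocklist ("MyBlocklist" is hypothetical).
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-content-safety-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

options = AnalyzeTextOptions(
    text="Sample text to moderate.",
    blocklist_names=["MyBlocklist"],   # blocklists to check against
    halt_on_blocklist_hit=False,       # keep scoring categories even when a term matches
)
result = client.analyze_text(options)

# Matches mirror the Blocklist detection panel shown under Results in the studio.
for match in result.blocklists_match or []:
    print(f"Blocklist '{match.blocklist_name}' matched item: {match.blocklist_item_text}")
```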

View and export code

You can use the View code feature on either the Moderate text content or Moderate image content page to view and copy the sample code, which includes configuration for severity filtering, blocklists, and moderation functions. You can then deploy the code on your end.
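The exported sample typically turns the raw category scores into an accept or reject decision based on the severity thresholds you configured. The following sketch shows the general shape of such a decision function; the threshold values here are illustrative placeholders, not the exported defaults.

```python
# Illustrative accept/reject decision based on per-category severity thresholds.
# Thresholds are placeholders; substitute the values you set on the Configure filters tab.
REJECT_THRESHOLDS = {
    "Hate": 4,
    "SelfHarm": 4,
    "Sexual": 4,
    "Violence": 4,
}

def decide(categories_analysis) -> str:
    """Return 'Reject' if any detected category meets or exceeds its threshold."""
    for item in categories_analysis:
        threshold = REJECT_THRESHOLDS.get(item.category)
        if threshold is not None and item.severity is not None and item.severity >= threshold:
            return "Reject"
    return "Accept"

# Example usage after calling analyze_text:
#   print(decide(result.categories_analysis))
```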

Screenshot of viewing the code in the moderate text content page.

Clean up resources

To avoid incurring unnecessary Azure costs, you should delete the resources you created in this quickstart if they're no longer needed. To manage resources, you can use the Azure portal.

Next steps