Prompt Shields (preview)

Important

Some of the features described in this article might only be available in preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

Learn how to use Azure AI Content Safety Prompt Shields to check large language model (LLM) inputs for both User Prompt attacks and Document attacks.

Prerequisites

  • An Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US or West Europe), and supported pricing tier. Then select Create.
  • An AI Studio hub in Azure AI Studio.

Setting up

  1. Sign in to Azure AI Studio.
  2. Select the hub you'd like to work in.
  3. On the left nav menu, select AI Services. Select the Content Safety panel. Screenshot of the Azure AI Studio Content Safety panel selected.
  4. Then, select Prompt Shields.
  5. On the next page, in the drop-down menu under Try it out, select the Azure AI Services connection you want to use.

Analyze prompt attacks

Either select a sample scenario or write your own inputs in the text boxes provided. Prompt Shields analyzes both the user prompt and any documents included with the prompt for potential attacks.

Select Run test to get the result.

Next steps

Configure content filters for each provided category to match your use case.