How to detect and redact Personally Identifying Information (PII) in conversations

The Conversational PII feature evaluates conversations to detect sensitive information (PII) in the content across several predefined categories and redact it. This API operates on both transcribed text (referred to as transcripts) and chats. For transcripts, the API also enables redaction of the audio segments that contain PII by providing the audio timing information for those segments.

Determine how to process the data (optional)

Specify the PII detection model

By default, this feature uses the latest available AI model on your input. You can also configure your API requests to use a specific model version.
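For example, the model version can be pinned through the task parameters of an analyze-conversations request. The following fragment is a minimal sketch; the task shape follows the Language service's conversation analysis request format, and the pinned value is illustrative.

```python
# Sketch of the PII task portion of an analyze-conversations request.
# "modelVersion" selects the PII detection model; "latest" is the default.
pii_task = {
    "taskName": "Conversation PII task",
    "kind": "ConversationalPIITask",
    "parameters": {
        # Replace "latest" with a specific version string (illustrative)
        # if you need reproducible model behavior.
        "modelVersion": "latest",
    },
}
```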

Language support

See the PII Language Support page for more details. Currently, the conversational PII GA model supports only English. The preview model and API support the same list of languages as the other Language services.

Region support

The conversational PII API supports all Azure regions supported by the Language service.

Submitting data

Note

See the Language Studio article for information on formatting conversational text to submit using Language Studio.

You can submit the input to the API as a list of conversation items. Analysis is performed upon receipt of the request. Because the API is asynchronous, there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.

When you use the async feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this period, the results are purged and are no longer available for retrieval.

When you submit data to conversational PII, you can send one conversation (chat or spoken) per request.

The API attempts to detect all the defined entity categories for a given conversation input. If you want to specify which entities are detected and returned, use the optional piiCategories parameter with the appropriate entity categories.
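As a sketch of what a submission could look like, the following Python snippet sends one chat conversation to the asynchronous REST endpoint and limits detection with piiCategories. The endpoint placeholder, api-version value, and category names are assumptions; adapt them to your resource and the API version you target.

```python
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # from Keys and Endpoint
key = "<your-key>"

# One conversation (chat or spoken) per request, expressed as a list of conversation items.
body = {
    "displayName": "Redact PII from a chat",
    "analysisInput": {
        "conversations": [
            {
                "id": "1",
                "language": "en",
                "modality": "text",
                "conversationItems": [
                    {"id": "1", "participantId": "agent", "text": "Can I have your phone number?"},
                    {"id": "2", "participantId": "customer", "text": "Sure, it's 555-0123."},
                ],
            }
        ]
    },
    "tasks": [
        {
            "kind": "ConversationalPIITask",
            "parameters": {
                # Optional: limit detection to specific entity categories (names illustrative).
                "piiCategories": ["Person", "PhoneNumber"]
            },
        }
    ],
}

# The api-version shown is an assumption; use the version documented for your resource.
response = requests.post(
    f"{endpoint}/language/analyze-conversations/jobs?api-version=2023-04-01",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
# The job is asynchronous: poll the URL returned in the operation-location header.
job_url = response.headers["operation-location"]
```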

For spoken transcripts, the entities detected are returned based on the redactionSource parameter value you provide. Currently, the supported values for redactionSource are text, lexical, itn, and maskedItn (which map to the Speech to text REST API's display/displayText, lexical, itn, and maskedItn formats, respectively). Additionally, for spoken transcript input, this API provides audio timing information to enable audio redaction. To use the audio redaction feature, set the optional includeAudioRedaction flag to true. The audio redaction is performed based on the lexical input format.
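The fragment below sketches how a transcript conversation and its task parameters might look, with the four Speech to text formats on each conversation item and redaction applied to the lexical form. Field values are placeholders, and the exact property names for word-level audio timing should be taken from the transcript produced by your speech pipeline.

```python
# Sketch of one transcript conversation, carrying the four Speech to text formats.
transcript_conversation = {
    "id": "call-1",
    "language": "en",
    "modality": "transcript",
    "conversationItems": [
        {
            "id": "1",
            "participantId": "speaker1",
            "text": "My phone number is 555-0123.",  # display form
            "lexical": "my phone number is five five five zero one two three",
            "itn": "my phone number is 5550123",
            "maskedItn": "my phone number is 5550123",
            # Word-level audio timing from Speech to text can also be supplied on
            # each item so the service can return audio redaction timings.
        }
    ],
}

transcript_pii_task = {
    "kind": "ConversationalPIITask",
    "parameters": {
        "redactionSource": "lexical",    # text | lexical | itn | maskedItn
        "includeAudioRedaction": True,   # request audio timing information for redaction
    },
}
```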

Note

Conversation PII now supports documents of up to 40,000 characters.

Getting PII results

When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response includes recognized entities, including their categories and subcategories, and confidence scores. The text string with the PII entities redacted is also returned.
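A minimal sketch of consuming the asynchronous job output is shown below, assuming the job was submitted to the REST endpoint as in the earlier example. The traversal mirrors the general response shape (tasks, conversations, and conversation items with redacted content and entities); verify the exact property names against the response of your API version.

```python
import time
import requests

key = "<your-key>"
job_url = "<operation-location URL returned when the job was submitted>"

# Poll until the asynchronous job reaches a terminal state.
while True:
    job = requests.get(job_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

# Walk the completed results; property names are illustrative of the general
# response shape (redacted text plus entities with category and confidence score).
for task in job["tasks"]["items"]:
    for conversation in task["results"]["conversations"]:
        for item in conversation["conversationItems"]:
            print("Redacted:", item["redactedContent"]["text"])
            for entity in item["entities"]:
                print(f"  {entity['category']}: {entity['text']} "
                      f"(confidence {entity['confidenceScore']})")
```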

Examples

  1. Go to your resource overview page in the Azure portal.

  2. From the menu on the left side, select Keys and Endpoint. You'll need one of the keys and the endpoint to authenticate your API requests.

  3. Download and install the client library package for your language of choice:

    Language | Package version
    .NET     | 1.0.0
    Python   | 1.0.0
  4. See the following reference documentation for more information on the client and return object (a minimal Python sketch using the client library follows these steps):
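The following is a minimal end-to-end sketch using the Python client library (azure-ai-language-conversations). It assumes the package exposes ConversationAnalysisClient.begin_conversation_analysis as in the 1.x releases listed above; check the reference documentation for the exact request and response shapes of your package version.

```python
# pip install azure-ai-language-conversations
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # from Keys and Endpoint
key = "<your-key>"

client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))

# Submit one chat conversation with a Conversational PII task.
poller = client.begin_conversation_analysis(
    task={
        "displayName": "Redact PII from a chat",
        "analysisInput": {
            "conversations": [
                {
                    "id": "1",
                    "language": "en",
                    "modality": "text",
                    "conversationItems": [
                        {"id": "1", "participantId": "agent",
                         "text": "Can I get your name and phone number?"},
                        {"id": "2", "participantId": "customer",
                         "text": "I'm John Doe, 555-0123."},
                    ],
                }
            ]
        },
        "tasks": [{"kind": "ConversationalPIITask", "parameters": {"modelVersion": "latest"}}],
    }
)

result = poller.result()
# Property names below reflect the general result shape and may differ by version.
for task in result["tasks"]["items"]:
    for conversation in task["results"]["conversations"]:
        for item in conversation["conversationItems"]:
            print(item["redactedContent"]["text"])
```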

Service and data limits

For information on the size and number of requests you can send per minute and second, see the service limits article.