Content filtering severity levels

Content filtering ensures that AI-generated outputs align with ethical guidelines and safety standards. Azure OpenAI provides content filtering capabilities to help identify and mitigate risks associated with various categories of harmful or inappropriate content. This document outlines the key risk categories and their descriptions to help you better understand the built-in content filtering system.
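To see how these categories surface in practice, the following is a minimal sketch of inspecting content filter results on a chat completion, assuming an Azure OpenAI resource, a deployed chat model, and the openai Python SDK (v1.x). The field names (content_filter_results, hate, sexual, violence, self_harm, filtered, severity) follow the Azure OpenAI content filtering documentation; the endpoint, key, API version, and deployment name are placeholders.

```python
# Sketch: read per-category content filter results from an Azure OpenAI response.
# Endpoint, key, API version, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-chat-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Write a short poem about the sea."}],
)

# Azure-specific fields such as content_filter_results are not part of the
# standard response model, so read them from the serialized payload.
choice = response.choices[0].model_dump()
filter_results = choice.get("content_filter_results", {})

for category in ("hate", "sexual", "violence", "self_harm"):
    result = filter_results.get(category, {})
    print(
        f"{category}: filtered={result.get('filtered')}, "
        f"severity={result.get('severity')}"
    )
```

Each category entry reports whether the content was filtered and the severity level the service assigned, which is what the severity definitions in this document describe.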

Note

The text content filtering models for the hate, sexual, violence, and self-harm categories have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.

Text content

Warning

The Severity definitions tab in this document contains examples of harmful content that may be disturbing to some readers.

Image content

Warning

The Severity definitions tab in this document contains examples of harmful content that may be disturbing to some readers.