QuickStart: Analyze text content
Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
For more information on text moderation, see the Harm categories concept page. For API input limits, see the Input requirements section of the Overview.
Note
The sample data and code may contain offensive content. User discretion is advised.
Prerequisites
- An Azure subscription - Create one for free
- Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see Region availability), and supported pricing tier. Then select Create.
- The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Subscription Key and Endpoint. The endpoint and either of the keys are used to call APIs.
- cURL installed
Analyze text content
The following section walks through a sample request with cURL. Paste the command below into a text editor, and make the following changes.
- Replace `<endpoint>` with the endpoint URL associated with your resource.
- Replace `<your_subscription_key>` with one of the keys that come with your resource.
- Optionally, replace the `"text"` field in the body with your own text you'd like to analyze.
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2024-09-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
"text": "I hate you",
"categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
"blocklistNames": ["string"],
"haltOnBlocklistHit": true,
"outputType": "FourSeverityLevels"
}'
The following fields must be included in the URL:

Name | Required | Description | Type
---|---|---|---
API Version | Required | The API version to use. The current version is `api-version=2024-09-01`. Example: `<endpoint>/contentsafety/text:analyze?api-version=2024-09-01` | String
The parameters in the request body are defined in this table:
Name | Required | Description | Type
---|---|---|---
text | Required | The raw text to be checked. Non-ASCII characters can be included. | String
categories | Optional | An array of category names. See the Harm categories guide for the list of available category names. If no categories are specified, all four categories are used. Multiple categories are scored in a single request. | String array
blocklistNames | Optional | Text blocklist names. Only the following characters are supported: `0-9 A-Z a-z - . _ ~`. You can attach multiple list names here. | Array
haltOnBlocklistHit | Optional | When set to `true`, further analyses of harmful content won't be performed in cases where blocklists are hit. When set to `false`, all analyses of harmful content are performed, whether or not blocklists are hit. | Boolean
outputType | Optional | `"FourSeverityLevels"` or `"EightSeverityLevels"`. Output severities in four or eight levels; the possible values are `0,2,4,6` or `0,1,2,3,4,5,6,7`. | String
See the following sample request body:
{
"text": "I hate you",
"categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
"blocklistNames": ["array"],
"haltOnBlocklistHit": false,
"outputType": "FourSeverityLevels"
}
Open a command prompt window, paste in the edited cURL command, and run it.
Output
You should see the text moderation results displayed as JSON data in the console output. For example:
{
"blocklistsMatch": [
{
"blocklistName": "string",
"blocklistItemId": "string",
"blocklistItemText": "string"
}
],
"categoriesAnalysis": [
{
"category": "Hate",
"severity": 2
},
{
"category": "SelfHarm",
"severity": 0
},
{
"category": "Sexual",
"severity": 0
},
{
"category": "Violence",
"severity": 0
}
]
}
The JSON fields in the output are defined here:
Name | Description | Type
---|---|---
categoriesAnalysis | Each output class that the API predicts. Classification can be multi-labeled. For example, when a text sample is run through the text moderation model, it could be classified as both sexual content and violence. See Harm categories. | String
severity | The higher the severity of the input content, the larger this value is. | Integer
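If you're scripting against the REST API, you can filter the response for categories that cross a severity threshold. The following is a minimal sketch, assuming the jq JSON processor is installed and the API response has been saved to a file named response.json (both are assumptions, not part of the service):

jq '.categoriesAnalysis[] | select(.severity >= 4)' response.json

This prints only the category objects whose severity is 4 or higher; the threshold of 4 is an arbitrary example cutoff, not a service default.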
Reference documentation | Library source code | Package (NuGet) | Samples
Prerequisites
- An Azure subscription - Create one for free
- The Visual Studio IDE with the .NET desktop development workload enabled. Or, if you don't plan on using the Visual Studio IDE, you need the current version of .NET Core.
- Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see Region availability), and supported pricing tier. Then select Create.
- The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Subscription Key and Endpoint. The endpoint and either of the keys are used to call APIs.
Set up application
Create a new C# application.
Open Visual Studio, and under Get started select Create a new project. Set the template filters to C#/All Platforms/Console. Select Console App (command-line application that can run on .NET on Windows, Linux and macOS) and choose Next. Update the project name to ContentSafetyQuickstart and choose Next. Select .NET 6.0 or above, and choose Create to create the project.
Install the client SDK
Once you've created a new project, install the client SDK by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse and search for `Azure.AI.ContentSafety`. Select Install.
Create environment variables
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
- To set the `CONTENT_SAFETY_KEY` environment variable, replace `YOUR_CONTENT_SAFETY_KEY` with one of the keys for your resource.
- To set the `CONTENT_SAFETY_ENDPOINT` environment variable, replace `YOUR_CONTENT_SAFETY_ENDPOINT` with the endpoint for your resource.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx CONTENT_SAFETY_KEY 'YOUR_CONTENT_SAFETY_KEY'
setx CONTENT_SAFETY_ENDPOINT 'YOUR_CONTENT_SAFETY_ENDPOINT'
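The setx commands above apply to Windows. On Linux or macOS, a rough equivalent is to add export lines to your shell profile; this is a Bash sketch, so adjust it for your shell:

export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'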
After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.
Analyze text content
From the project directory, open the Program.cs file that was created previously. Paste in the following code:
using System;
using System.Linq;
using Azure;
using Azure.AI.ContentSafety;
namespace Azure.AI.ContentSafety.Dotnet.Sample
{
class ContentSafetySampleAnalyzeText
{
public static void AnalyzeText()
{
// retrieve the endpoint and key from the environment variables created earlier
string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT");
string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
string text = "Your input text";
var request = new AnalyzeTextOptions(text);
Response<AnalyzeTextResult> response;
try
{
response = client.AnalyzeText(request);
}
catch (RequestFailedException ex)
{
Console.WriteLine("Analyze text failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message);
throw;
}
Console.WriteLine("\nAnalyze text succeeded:");
Console.WriteLine("Hate severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Hate)?.Severity ?? 0);
Console.WriteLine("SelfHarm severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.SelfHarm)?.Severity ?? 0);
Console.WriteLine("Sexual severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Sexual)?.Severity ?? 0);
Console.WriteLine("Violence severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Violence)?.Severity ?? 0);
}
static void Main()
{
AnalyzeText();
}
}
}
Replace "Your input text"
with the text content you'd like to use.
Build and run the application by selecting Start Debugging from the Debug menu at the top of the IDE window (or press F5).
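If the request succeeds, the console prints a severity score for each category, similar to the following (the values depend on your input text):

Analyze text succeeded:
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0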
Reference documentation | Library source code | Package (PyPI) | Samples
Prerequisites
- An Azure subscription - Create one for free
- Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see Region availability), and supported pricing tier. Then select Create.
- The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Subscription Key and Endpoint. The endpoint and either of the keys are used to call APIs.
- Python 3.x
  - Your Python installation should include pip. You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python.
Create environment variables
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
- To set the `CONTENT_SAFETY_KEY` environment variable, replace `YOUR_CONTENT_SAFETY_KEY` with one of the keys for your resource.
- To set the `CONTENT_SAFETY_ENDPOINT` environment variable, replace `YOUR_CONTENT_SAFETY_ENDPOINT` with the endpoint for your resource.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx CONTENT_SAFETY_KEY 'YOUR_CONTENT_SAFETY_KEY'
setx CONTENT_SAFETY_ENDPOINT 'YOUR_CONTENT_SAFETY_ENDPOINT'
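The setx commands above apply to Windows. On Linux or macOS, a rough equivalent is to add export lines to your shell profile; this is a Bash sketch, so adjust it for your shell:

export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'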
After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.
Analyze text content
The following section walks through a sample request with the Python SDK.
Open a command prompt, navigate to your project folder, and create a new file named quickstart.py.
Run this command to install the Azure AI Content Safety library:
pip install azure-ai-contentsafety
Copy the following code into quickstart.py:
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory

def analyze_text():
    # Retrieve the key and endpoint from the environment variables created earlier
    key = os.environ["CONTENT_SAFETY_KEY"]
    endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]

    # Create an Azure AI Content Safety client
    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

    # Construct the request
    request = AnalyzeTextOptions(text="Your input text")

    # Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    hate_result = next(item for item in response.categories_analysis if item.category == TextCategory.HATE)
    self_harm_result = next(item for item in response.categories_analysis if item.category == TextCategory.SELF_HARM)
    sexual_result = next(item for item in response.categories_analysis if item.category == TextCategory.SEXUAL)
    violence_result = next(item for item in response.categories_analysis if item.category == TextCategory.VIOLENCE)

    if hate_result:
        print(f"Hate severity: {hate_result.severity}")
    if self_harm_result:
        print(f"SelfHarm severity: {self_harm_result.severity}")
    if sexual_result:
        print(f"Sexual severity: {sexual_result.severity}")
    if violence_result:
        print(f"Violence severity: {violence_result.severity}")

if __name__ == "__main__":
    analyze_text()
Replace `"Your input text"` with the text content you'd like to use.
Then run the application with the `python` command on your quickstart file:
python quickstart.py
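If the request succeeds, the script prints a severity score for each category, similar to the following (the values depend on your input text):

Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0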
Reference documentation | Library source code | Artifact (Maven) | Samples
Prerequisites
- An Azure subscription - Create one for free
- The current version of the Java Development Kit (JDK)
- The Gradle build tool, or another dependency manager.
- Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see Region availability), and supported pricing tier. Then select Create.
- The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Subscription Key and Endpoint. The endpoint and either of the keys are used to call APIs.
Set up application
Create a new Gradle project.
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the `gradle init` command from your working directory. This command will create essential build files for Gradle, including build.gradle.kts, which is used at runtime to create and configure your application.
gradle init --type basic
When prompted to choose a DSL, select Kotlin.
From your working directory, run the following command to create a project source folder:
mkdir -p src/main/java
Navigate to the new folder and create a file called ContentSafetyQuickstart.java.
Install the client SDK
This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the Maven Central Repository.
Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in the following build configuration. This configuration defines the project as a Java application whose entry point is the class ContentSafetyQuickstart. It imports the Azure AI Content Safety library.
plugins {
java
application
}
application {
mainClass.set("ContentSafetyQuickstart")
}
repositories {
mavenCentral()
}
dependencies {
implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0")
}
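If you use Maven rather than Gradle, the equivalent dependency declaration should look something like the following; this is a sketch that reuses the same coordinates as the Gradle configuration above:

<dependency>
  <groupId>com.azure</groupId>
  <artifactId>azure-ai-contentsafety</artifactId>
  <version>1.0.0</version>
</dependency>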
Create environment variables
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
- To set the `CONTENT_SAFETY_KEY` environment variable, replace `YOUR_CONTENT_SAFETY_KEY` with one of the keys for your resource.
- To set the `CONTENT_SAFETY_ENDPOINT` environment variable, replace `YOUR_CONTENT_SAFETY_ENDPOINT` with the endpoint for your resource.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx CONTENT_SAFETY_KEY 'YOUR_CONTENT_SAFETY_KEY'
setx CONTENT_SAFETY_ENDPOINT 'YOUR_CONTENT_SAFETY_ENDPOINT'
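The setx commands above apply to Windows. On Linux or macOS, a rough equivalent is to add export lines to your shell profile; this is a Bash sketch, so adjust it for your shell:

export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'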
After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.
Analyze text content
Open ContentSafetyQuickstart.java in your preferred editor or IDE and paste in the following code. Replace `<your text sample>` with the text content you'd like to use.
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;
import com.azure.core.util.Configuration;
public class ContentSafetyQuickstart {
public static void main(String[] args) {
// get endpoint and key from environment variables
String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");
String key = System.getenv("CONTENT_SAFETY_KEY");
ContentSafetyClient contentSafetyClient = new ContentSafetyClientBuilder()
.credential(new KeyCredential(key))
.endpoint(endpoint).buildClient();
AnalyzeTextResult response = contentSafetyClient.analyzeText(new AnalyzeTextOptions("<your text sample>"));
for (TextCategoriesAnalysis result : response.getCategoriesAnalysis()) {
System.out.println(result.getCategory() + " severity: " + result.getSeverity());
}
}
}
Navigate back to the project root folder, and build the app with:
gradle build
Then, run it with the `gradle run` command:
gradle run
Output
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0
Reference documentation | Library source code | Package (npm) | Samples
Prerequisites
- An Azure subscription - Create one for free
- The current version of Node.js
- Once you have your Azure subscription, create a Content Safety resource in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (see Region availability), and supported pricing tier. Then select Create.
- The resource takes a few minutes to deploy. After it finishes, select Go to resource. In the left pane, under Resource Management, select Subscription Key and Endpoint. The endpoint and either of the keys are used to call APIs.
Set up application
Create a new Node.js application. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the `npm init` command to create a node application with a `package.json` file.
npm init
Install the client SDK
Install the `@azure-rest/ai-content-safety` npm package:
npm install @azure-rest/ai-content-safety
Also install the `dotenv` module to use environment variables:
npm install dotenv
Your app's `package.json` file will be updated with the dependencies.
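The sample code in the next section calls require("dotenv").config(), so as an alternative to system environment variables you can keep your credentials in a local .env file in the project root. A minimal sketch with placeholder values:

CONTENT_SAFETY_KEY=YOUR_CONTENT_SAFETY_KEY
CONTENT_SAFETY_ENDPOINT=YOUR_CONTENT_SAFETY_ENDPOINT

Don't commit this file to source control.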
Create environment variables
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
- To set the `CONTENT_SAFETY_KEY` environment variable, replace `YOUR_CONTENT_SAFETY_KEY` with one of the keys for your resource.
- To set the `CONTENT_SAFETY_ENDPOINT` environment variable, replace `YOUR_CONTENT_SAFETY_ENDPOINT` with the endpoint for your resource.
Important
If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.
For more information about AI services security, see Authenticate requests to Azure AI services.
setx CONTENT_SAFETY_KEY 'YOUR_CONTENT_SAFETY_KEY'
setx CONTENT_SAFETY_ENDPOINT 'YOUR_CONTENT_SAFETY_ENDPOINT'
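The setx commands above apply to Windows. On Linux or macOS, a rough equivalent is to add export lines to your shell profile; this is a Bash sketch, so adjust it for your shell:

export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'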
After you add the environment variables, you might need to restart any running programs that will read the environment variables, including the console window.
Analyze text content
Create a new file in your directory, index.js. Open it in your preferred editor or IDE and paste in the following code. Replace `<your sample text>` with the text content you'd like to use.
const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
{ isUnexpected } = require("@azure-rest/ai-content-safety");
const { AzureKeyCredential } = require("@azure/core-auth");
// Load the .env file if it exists
require("dotenv").config();
async function main() {
// get endpoint and key from environment variables
const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"];
const key = process.env["CONTENT_SAFETY_KEY"];
const credential = new AzureKeyCredential(key);
const client = ContentSafetyClient(endpoint, credential);
// replace with your own sample text string
const text = "<your sample text>";
const analyzeTextOption = { text: text };
const analyzeTextParameters = { body: analyzeTextOption };
const result = await client.path("/text:analyze").post(analyzeTextParameters);
if (isUnexpected(result)) {
throw result;
}
for (let i = 0; i < result.body.categoriesAnalysis.length; i++) {
const textCategoriesAnalysisOutput = result.body.categoriesAnalysis[i];
console.log(
textCategoriesAnalysisOutput.category,
" severity: ",
textCategoriesAnalysisOutput.severity
);
}
}
main().catch((err) => {
console.error("The sample encountered an error:", err);
});
Run the application with the `node` command on your quickstart file.
node index.js
Output
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0
Clean up resources
If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Related content
- Harm categories
- Configure filters for each category and test on datasets using Content Safety Studio, then export the code and deploy it.