Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out.
For more information on text moderation, see the Harm categories concept page. For API input limits, see the Input requirements section of the Overview.
Caution
The sample data and code may contain offensive content. User discretion is advised.
The following section walks through a sample request with cURL. Paste the command below into a text editor, and make the following changes:
Replace <endpoint> with the endpoint URL associated with your resource.
Replace <your_subscription_key> with one of the keys that come with your resource.
Optionally, replace the "text" field in the body with your own text you'd like to analyze.
curl --location --request POST '<endpoint>/contentsafety/text:analyze?api-version=2024-09-01' \
--header 'Ocp-Apim-Subscription-Key: <your_subscription_key>' \
--header 'Content-Type: application/json' \
--data-raw '{
"text": "I hate you",
"categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
"blocklistNames": ["string"],
"haltOnBlocklistHit": true,
"outputType": "FourSeverityLevels"
}'
The following fields must be included in the URL:
Name | Required | Description | Type |
---|---|---|---|
API Version | Required | This is the API version to be checked. The current version is: api-version=2024-09-01. Example: <endpoint>/contentsafety/text:analyze?api-version=2024-09-01 | String |
The parameters in the request body are defined in this table:
Name | Required | Description | Type |
---|---|---|---|
text | Required | This is the raw text to be checked. Other non-ASCII characters can be included. | String |
categories | Optional | This is the array of category names. See the Harm categories guide for a list of available category names. If no categories are specified, all four categories are used. We use multiple categories to get scores in a single request. | String |
blocklistNames | Optional | Text blocklist name. Only the following characters are supported: 0-9, A-Z, a-z, -, ., _, ~. You can attach multiple list names here. | Array |
haltOnBlocklistHit | Optional | When set to true, further analyses of harmful content won't be performed in cases where blocklists are hit. When set to false, all analyses of harmful content will be performed, whether or not blocklists are hit. | Boolean |
outputType | Optional | "FourSeverityLevels" or "EightSeverityLevels". Output severities in four or eight levels; the value can be 0,2,4,6 or 0,1,2,3,4,5,6,7. | String |
See the following sample request body:
{
"text": "I hate you",
"categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
"blocklistNames": ["array"],
"haltOnBlocklistHit": false,
"outputType": "FourSeverityLevels"
}
Open a command prompt window, paste in the edited cURL command, and run it.
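If you prefer to make the same REST call from a script instead of cURL, the following is a minimal Python sketch (not part of the official quickstart). It assumes the requests package is installed and that the CONTENT_SAFETY_ENDPOINT and CONTENT_SAFETY_KEY environment variables described later in this quickstart are set; the optional blocklist fields are omitted.
import os
import requests

# Endpoint and key are read from the environment variables set up later in this quickstart.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]

url = f"{endpoint}/contentsafety/text:analyze?api-version=2024-09-01"
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/json",
}
body = {
    "text": "I hate you",
    "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
    "outputType": "FourSeverityLevels",
}

# Send the same request body shown above and print the raw JSON result.
response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())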
You should see the text moderation results displayed as JSON data in the console output. For example:
{
"blocklistsMatch": [
{
"blocklistName": "string",
"blocklistItemId": "string",
"blocklistItemText": "string"
}
],
"categoriesAnalysis": [
{
"category": "Hate",
"severity": 2
},
{
"category": "SelfHarm",
"severity": 0
},
{
"category": "Sexual",
"severity": 0
},
{
"category": "Violence",
"severity": 0
}
]
}
The JSON fields in the output are defined here:
Name | Description | Type |
---|---|---|
categoriesAnalysis | Each output class that the API predicts. Classification can be multi-labeled. For example, when a text sample is run through the text moderation model, it could be classified as both sexual content and violence. For the available classes, see Harm categories. | String |
severity | The higher the severity of input content, the larger this value is. | Integer |
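As an illustration of how you might consume these fields, the following Python sketch (not part of the official quickstart) applies a severity threshold to the sample response shown above. The threshold value of 2 is an arbitrary example, not a service default; choose a value that fits your own moderation policy.
import json

# Sample response copied from the output above.
sample_response = """
{
  "blocklistsMatch": [],
  "categoriesAnalysis": [
    {"category": "Hate", "severity": 2},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 0}
  ]
}
"""

THRESHOLD = 2  # example policy value; tune for your own use case
result = json.loads(sample_response)

# Collect every category whose severity meets or exceeds the threshold.
flagged = [
    item["category"]
    for item in result["categoriesAnalysis"]
    if item["severity"] >= THRESHOLD
]
print("Flagged categories:", flagged)  # prints: Flagged categories: ['Hate']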
Reference documentation | Library source code | Package (NuGet) | Samples
Create a new C# application.
Open Visual Studio, and under Get started select Create a new project. Set the template filters to C#/All Platforms/Console. Select Console App (command-line application that can run on .NET on Windows, Linux and macOS) and choose Next. Update the project name to ContentSafetyQuickstart and choose Next. Select .NET 6.0 or above, and choose Create to create the project.
Once you've created a new project, install the client SDK by right-clicking on the project solution in the Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse and search for Azure.AI.ContentSafety. Select Install.
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
To set the CONTENT_SAFETY_KEY environment variable, replace YOUR_CONTENT_SAFETY_KEY with one of the keys for your resource.
To set the CONTENT_SAFETY_ENDPOINT environment variable, replace YOUR_CONTENT_SAFETY_ENDPOINT with the endpoint for your resource.
Important
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use an API key, store it securely in Azure Key Vault. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'
After you add the environment variables, run source ~/.bashrc
from your console window to make the changes effective.
From the project directory, open the Program.cs file that was created previously. Paste in the following code:
using System;
using System.Linq;
using Azure;
using Azure.AI.ContentSafety;
namespace Azure.AI.ContentSafety.Dotnet.Sample
{
class ContentSafetySampleAnalyzeText
{
public static void AnalyzeText()
{
// retrieve the endpoint and key from the environment variables created earlier
string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT");
string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
string text = "Your input text";
var request = new AnalyzeTextOptions(text);
Response<AnalyzeTextResult> response;
try
{
response = client.AnalyzeText(request);
}
catch (RequestFailedException ex)
{
Console.WriteLine("Analyze text failed.\nStatus code: {0}, Error code: {1}, Error message: {2}", ex.Status, ex.ErrorCode, ex.Message);
throw;
}
Console.WriteLine("\nAnalyze text succeeded:");
Console.WriteLine("Hate severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Hate)?.Severity ?? 0);
Console.WriteLine("SelfHarm severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.SelfHarm)?.Severity ?? 0);
Console.WriteLine("Sexual severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Sexual)?.Severity ?? 0);
Console.WriteLine("Violence severity: {0}", response.Value.CategoriesAnalysis.FirstOrDefault(a => a.Category == TextCategory.Violence)?.Severity ?? 0);
}
static void Main()
{
AnalyzeText();
}
}
}
Replace "Your input text"
with the text content you'd like to use.
Build and run the application by selecting Start Debugging from the Debug menu at the top of the IDE window (or press F5).
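If the call succeeds, you should see console output similar to the following; the actual severity values depend on the text you provided:
Analyze text succeeded:
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0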
Reference documentation | Library source code | Package (PyPI) | Samples
You can check whether pip is installed by running pip --version on the command line. Get pip by installing the latest version of Python.
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
To set the CONTENT_SAFETY_KEY environment variable, replace YOUR_CONTENT_SAFETY_KEY with one of the keys for your resource.
To set the CONTENT_SAFETY_ENDPOINT environment variable, replace YOUR_CONTENT_SAFETY_ENDPOINT with the endpoint for your resource.
Important
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use an API key, store it securely in Azure Key Vault. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'
After you add the environment variables, run source ~/.bashrc
from your console window to make the changes effective.
The following section walks through a sample request with the Python SDK.
Open a command prompt, navigate to your project folder, and create a new file named quickstart.py.
Run this command to install the Azure AI Content Safety library:
pip install azure-ai-contentsafety
Copy the following code into quickstart.py:
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
def analyze_text():
# analyze text
key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
# Create an Azure AI Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
# Construct request
request = AnalyzeTextOptions(text="Your input text")
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
hate_result = next(item for item in response.categories_analysis if item.category == TextCategory.HATE)
self_harm_result = next(item for item in response.categories_analysis if item.category == TextCategory.SELF_HARM)
sexual_result = next(item for item in response.categories_analysis if item.category == TextCategory.SEXUAL)
violence_result = next(item for item in response.categories_analysis if item.category == TextCategory.VIOLENCE)
if hate_result:
print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
print(f"Violence severity: {violence_result.severity}")
if __name__ == "__main__":
analyze_text()
Replace "Your input text"
with the text content you'd like to use.
Then run the application with the python
command on your quickstart file.
python quickstart.py
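You should see output similar to the following; the severity values depend on the text you provided:
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0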
Reference documentation | Library source code | Artifact (Maven) | Samples
Create a new Gradle project.
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the gradle init
command from your working directory. This command will create essential build files for Gradle, including build.gradle.kts, which is used at runtime to create and configure your application.
gradle init --type basic
When prompted to choose a DSL, select Kotlin.
From your working directory, run the following command to create a project source folder:
mkdir -p src/main/java
Navigate to the new folder and create a file called ContentSafetyQuickstart.java.
This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the Maven Central Repository.
Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in the following build configuration. This configuration defines the project as a Java application whose entry point is the class ContentSafetyQuickstart. It imports the Azure AI Content Safety library.
plugins {
java
application
}
application {
mainClass.set("ContentSafetyQuickstart")
}
repositories {
mavenCentral()
}
dependencies {
implementation(group = "com.azure", name = "azure-ai-contentsafety", version = "1.0.0")
}
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
To set the CONTENT_SAFETY_KEY environment variable, replace YOUR_CONTENT_SAFETY_KEY with one of the keys for your resource.
To set the CONTENT_SAFETY_ENDPOINT environment variable, replace YOUR_CONTENT_SAFETY_ENDPOINT with the endpoint for your resource.
Important
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use an API key, store it securely in Azure Key Vault. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'
After you add the environment variables, run source ~/.bashrc
from your console window to make the changes effective.
Open ContentSafetyQuickstart.java in your preferred editor or IDE and paste in the following code. Replace <your text sample>
with the text content you'd like to use.
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.models.AnalyzeTextOptions;
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;
import com.azure.core.credential.KeyCredential;
import com.azure.core.util.Configuration;
public class ContentSafetyQuickstart {
public static void main(String[] args) {
// get endpoint and key from environment variables
String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");
String key = System.getenv("CONTENT_SAFETY_KEY");
ContentSafetyClient contentSafetyClient = new ContentSafetyClientBuilder()
.credential(new KeyCredential(key))
.endpoint(endpoint).buildClient();
AnalyzeTextResult response = contentSafetyClient.analyzeText(new AnalyzeTextOptions("<your text sample>"));
for (TextCategoriesAnalysis result : response.getCategoriesAnalysis()) {
System.out.println(result.getCategory() + " severity: " + result.getSeverity());
}
}
}
Navigate back to the project root folder, and build the app with:
gradle build
Then, run it with the gradle run
command:
gradle run
The console output should look similar to the following:
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0
Reference documentation | Library source code | Package (npm) | Samples
Create a new Node.js application. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
mkdir myapp && cd myapp
Run the npm init
command to create a node application with a package.json
file.
npm init
Install the @azure-rest/ai-content-safety
npm package:
npm install @azure-rest/ai-content-safety
Also install the dotenv
module to use environment variables:
npm install dotenv
Your app's package.json
file will be updated with the dependencies.
In this example, you'll write your credentials to environment variables on the local machine running the application.
To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.
To set the CONTENT_SAFETY_KEY environment variable, replace YOUR_CONTENT_SAFETY_KEY with one of the keys for your resource.
To set the CONTENT_SAFETY_ENDPOINT environment variable, replace YOUR_CONTENT_SAFETY_ENDPOINT with the endpoint for your resource.
Important
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use an API key, store it securely in Azure Key Vault. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
export CONTENT_SAFETY_KEY='YOUR_CONTENT_SAFETY_KEY'
export CONTENT_SAFETY_ENDPOINT='YOUR_CONTENT_SAFETY_ENDPOINT'
After you add the environment variables, run source ~/.bashrc
from your console window to make the changes effective.
Create a new file in your directory, index.js. Open it in your preferred editor or IDE and paste in the following code. Replace <your sample text> with the text content you'd like to use.
const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
{ isUnexpected } = require("@azure-rest/ai-content-safety");
const { AzureKeyCredential } = require("@azure/core-auth");
// Load the .env file if it exists
require("dotenv").config();
async function main() {
// get endpoint and key from environment variables
const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"];
const key = process.env["CONTENT_SAFETY_KEY"];
const credential = new AzureKeyCredential(key);
const client = ContentSafetyClient(endpoint, credential);
// replace with your own sample text string
const text = "<your sample text>";
const analyzeTextOption = { text: text };
const analyzeTextParameters = { body: analyzeTextOption };
const result = await client.path("/text:analyze").post(analyzeTextParameters);
if (isUnexpected(result)) {
throw result;
}
for (let i = 0; i < result.body.categoriesAnalysis.length; i++) {
const textCategoriesAnalysisOutput = result.body.categoriesAnalysis[i];
console.log(
textCategoriesAnalysisOutput.category,
" severity: ",
textCategoriesAnalysisOutput.severity
);
}
}
main().catch((err) => {
console.error("The sample encountered an error:", err);
});
Run the application with the node
command on your quickstart file.
node index.js
The console output should look similar to the following:
Hate severity: 0
SelfHarm severity: 0
Sexual severity: 0
Violence severity: 0
If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
Training
Module
Use AI responsibly with Azure AI Content Safety - Training
As the amount of user-generated online content increases, so does the need to ensure harmful material is moderated effectively. The Azure AI Content Safety resource includes features to help organizations moderate and manage both user-generated and AI-generated content.
Documentation
Harm categories in Azure AI Content Safety - Azure AI services
Learn about the different content moderation flags and severity levels that the Azure AI Content Safety service returns.
Quickstart: Analyze image content - Azure AI services
Get started using Azure AI Content Safety to analyze image content for objectionable material.
Quickstart: Custom categories (preview) - Azure AI services
Use the custom categories API to create your own content categories and train the Content Safety model for your use case.