اقرأ باللغة الإنجليزية تحرير

مشاركة عبر


What's new in Azure AI Content Safety

Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.

October 2024

Upcoming deprecations

To align with Content Safety versioning and lifecycle management policies, the following versions are scheduled for deprecation:

  • Effective January 28, 2024: All versions except 2024-09-01, 2024-09-15-preview, and 2024-09-30-preview will be deprecated and no longer supported. We encourage users to transition to the latest available versions to continue receiving full support and updates. If you have any questions about this process or need assistance with the transition, please reach out to our support team.

September 2024

Multimodal analysis (preview)

The Multimodal API analyzes materials containing both image content and text content to help make applications and services safer from harmful user-generated or AI-generated content. Analyzing an image and its associated text content together can preserve context and provide a more comprehensive understanding of the content. Follow the quickstart to get started.

Protected material detection for code (preview)

The Protected material code API flags protected code content (from known GitHub repositories, including software libraries, source code, algorithms, and other proprietary programming content) that might be output by large language models. Follow the quickstart to get started.

تنبيه

The content safety service's code scanner/indexer is only current through November 6, 2021. Code that was added to GitHub after this date will not be detected. Use your own discretion when using Protected Material for Code to detect recent bodies of code.

Groundedness correction (preview)

The groundedness detection API includes a correction feature that automatically corrects any detected ungroundedness in the text based on the provided grounding sources. When the correction feature is enabled, the response includes a corrected Text field that presents the corrected text aligned with the grounding sources. Follow the quickstart to get started.

August 2024

New features are GA

The Prompt Shields API and Protected Material for text API are now generally available (GA). Follow a quickstart to try them out.

July 2024

Custom categories (standard) API public preview

The custom categories (standard) API lets you create and train your own custom content categories and scan text for matches. See Custom categories to learn more.

May 2024

Custom categories (rapid) API public preview

The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See Custom categories to learn more.

March 2024

Prompt Shields public preview

Previously known as Jailbreak risk detection, this updated feature detects prompt attacks, in which users deliberately exploit system vulnerabilities to elicit unauthorized behavior from large language model. Prompt Shields analyzes both direct user prompt attacks and indirect attacks which are embedded in input documents or images. See Prompt Shields to learn more.

Groundedness detection public preview

The Groundedness detection API detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness describes instances where the LLMs produce information that is non-factual or inaccurate according to what was present in the source materials. See Groundedness detection to learn more.

January 2024

Content Safety SDK GA

The Azure AI Content Safety service is now generally available through the following client library SDKs:

هام

The public preview versions of the Azure AI Content Safety SDKs will be deprecated by March 31, 2024. Please update your applications to use the GA versions.

November 2023

Jailbreak risk and protected material detection (preview)

The new Jailbreak risk detection and protected material detection APIs let you mitigate some of the risks when using generative AI.

  • Jailbreak risk detection scans text for the risk of a jailbreak attack on a Large Language Model. Quickstart
  • Protected material text detection scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). Quickstart

Jailbreak risk and protected material detection are only available in select regions. See Region availability.

October 2023

Azure AI Content Safety is generally available (GA)

The Azure AI Content Safety service is now generally available as a cloud service.

  • The service is available in many more Azure regions. See the Overview for a list.
  • The return formats of the Analyze APIs have changed. See the Quickstarts for the latest examples.
  • The names and return formats of several other APIs have changed. See the Migration guide for a full list of breaking changes. Other guides and quickstarts now reflect the GA version.

Azure AI Content Safety Java and JavaScript SDKs

The Azure AI Content Safety service is now available through Java and JavaScript SDKs. The SDKs are available on Maven and npm respectively. Follow a quickstart to get started.

July 2023

Azure AI Content Safety C# SDK

The Azure AI Content Safety service is now available through a C# SDK. The SDK is available on NuGet. Follow a quickstart to get started.

May 2023

Azure AI Content Safety public preview

Azure AI Content Safety detects material that is potentially offensive, risky, or otherwise undesirable. This service offers state-of-the-art text and image models that detect problematic content. Azure AI Content Safety helps make applications and services safer from harmful user-generated and AI-generated content. Follow a quickstart to get started.

Azure AI services updates

Azure update announcements for Azure AI services