Hello @Premnath S,
Azure AI Content Safety filtering is enabled by default for Large Language Models (LLMs) and AI services deployed through the Azure platform, particularly when using serverless APIs. This default configuration covers four main harm categories: Hate and Fairness, Self-Harm, Sexual Content, and Violence. These categories are enforced automatically to provide a baseline level of safety and content moderation for end users.
However, other advanced categories such as Prompt Shields, Protected Material, and Groundedness are not included in the default configuration.
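As a rough illustration of how these defaults behave, the sketch below lists the four default harm categories and a simple blocking check over per-category severity scores. The 0–7 text severity scale is part of Azure AI Content Safety, but the threshold value and the helper function are illustrative assumptions, not an Azure API:

```python
# Illustrative sketch only -- not an official Azure SDK call.
# The four harm categories filtered by default on serverless deployments:
DEFAULT_CATEGORIES = ["Hate", "SelfHarm", "Sexual", "Violence"]

def is_blocked(severities: dict, threshold: int = 4) -> bool:
    """Return True if any default category's severity meets or exceeds
    the threshold. Azure AI Content Safety scores text severity on a
    0-7 scale; the threshold of 4 ("medium") here is an assumption
    chosen for illustration."""
    return any(severities.get(c, 0) >= threshold for c in DEFAULT_CATEGORIES)

# Example: a response with high Violence severity would be filtered,
# while low severities across all categories would pass through.
print(is_blocked({"Violence": 6, "Hate": 0}))   # blocked
print(is_blocked({"Sexual": 2, "SelfHarm": 0})) # allowed
```

Note that optional categories such as Prompt Shields, Protected Material, and Groundedness would not appear in a check like this unless you enable them explicitly.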
To manage these settings more granularly, Azure offers configuration controls at the resource level, for example through Azure AI Studio or Azure AI Foundry. This allows teams to customize which content filtering categories are active, depending on the sensitivity of the application or its target audience.
For the most current information and capabilities, it's recommended to regularly check the official Azure documentation and "What’s New" updates in the Azure AI portal, as the platform continues to evolve and expand its safety feature set.
Please refer to the official documentation: Default content safety policies for Azure AI Model Inference.
I hope this helps. Do let me know if you have any further queries.
If this answers your query, please click Accept Answer and Yes for "Was this answer helpful".
Thank you!