Hi Jason Isaac,
Welcome to the Microsoft Q&A forum, and thank you for posting your query here!
Azure AI Content Safety: This is a standalone service for detecting and managing harmful user-generated or AI-generated content in applications and services. It provides APIs for analyzing text, images, and multimodal content for safety, along with additional capabilities such as groundedness detection, protected material detection, and prompt shields as extra safety layers.
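For reference, here is a minimal sketch of calling the text analysis API with the azure-ai-contentsafety Python SDK. The endpoint, key, and sample text are placeholders, and exact response field names may vary slightly by SDK version:

```python
# Minimal sketch: analyze a piece of text with Azure AI Content Safety.
# Endpoint and key below are placeholders for your own resource details.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder
key = "<your-content-safety-key>"  # placeholder

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Screen user-generated text against the built-in harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Sample user input to screen."))

# Each entry reports a harm category (Hate, SelfHarm, Sexual, Violence) and a severity score.
for item in response.categories_analysis:
    print(f"{item.category}: severity {item.severity}")
```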
Azure OpenAI Model Content Safety: This refers to the content filtering built into Azure OpenAI, which automatically screens prompts and model outputs against predefined harm categories to help ensure they meet safety standards. It may not offer as many configuration options or customization settings as the dedicated Azure AI Content Safety service.
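As an illustration, here is a minimal sketch of inspecting the built-in filter results returned by Azure OpenAI, assuming the openai Python package. The endpoint, key, deployment name, and API version are placeholders, and the content_filter_results field is an Azure-specific extra in the response that may differ by API version:

```python
# Minimal sketch: Azure OpenAI applies its built-in content filter automatically;
# blocked prompts are rejected, and accepted responses carry per-category filter results.
from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",  # placeholder
    api_key="<your-azure-openai-key>",                                # placeholder
    api_version="2024-02-01",                                         # placeholder API version
)

try:
    response = client.chat.completions.create(
        model="<your-deployment-name>",  # placeholder deployment name
        messages=[{"role": "user", "content": "Hello"}],
    )
    choice = response.choices[0]
    # Azure adds per-category filter results to each choice as extra fields.
    print((choice.model_extra or {}).get("content_filter_results"))
except BadRequestError as e:
    # Prompts blocked by the built-in filter are rejected with a content_filter error.
    print("Request was filtered:", e)
```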
Kindly refer to the following documentation: What is Azure AI Content Safety
Hope this helps. Do let us know if you have any further queries.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful".
Thank You.