False Positives in Azure Content Safety Image Moderation (Paid Plan)

Naresh Khuriwal 0 Reputation points
2024-09-24T16:59:22.3866667+00:00

We are using the paid tier of Azure AI Content Safety to moderate images, but we're encountering multiple false positives: legitimate images are being flagged as inappropriate when they shouldn't be.

Is there a way to fine-tune or adjust the model to reduce these false positives? Are there specific settings or best practices for improving moderation accuracy on the paid plan?

We'd like to reduce these false positives without losing accuracy on genuinely harmful content. Has anyone else experienced this, and how did you resolve it? Any recommendations or steps to tune the service would be greatly appreciated.

Thank you!

Azure AI services

1 answer

  1. romungi-MSFT 46,476 Reputation points Microsoft Employee
    2024-09-25T06:56:47.0166667+00:00

    @Naresh Khuriwal I understand that the service is flagging some of your images incorrectly with the current model. For such scenarios, the recommendation is to customize your severity thresholds. See this page for details on configuring them with the standalone Content Safety API.

    If you still see issues after adjusting the severity settings, you can raise a support case and share your data (i.e., the misclassified images) with the team so they can check how the service can be improved for such false positives.
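
    Since the standalone API returns raw severity scores rather than a pass/fail verdict, the threshold tuning happens in your own code. Below is a minimal sketch of that approach; the threshold values and the `violations` helper are illustrative, and the response shape assumed here follows the `azure-ai-contentsafety` SDK (each image result carries a `category` name and an integer `severity` of 0, 2, 4, or 6).

    ```python
    # Per-category severity thresholds (assumed values -- tune these against
    # your own data; raising a threshold reduces false positives for that
    # category at the cost of letting milder content through).
    THRESHOLDS = {"Hate": 4, "SelfHarm": 4, "Sexual": 2, "Violence": 4}

    def violations(categories_analysis, thresholds=THRESHOLDS):
        """Return the categories whose severity meets or exceeds its threshold."""
        return [
            r["category"]
            for r in categories_analysis
            if r["severity"] >= thresholds.get(r["category"], 2)
        ]

    # In production, categories_analysis would come from the SDK, e.g.:
    #   client.analyze_image(AnalyzeImageOptions(image=ImageData(content=img_bytes)))
    # Here a mocked response demonstrates the filtering.
    sample = [
        {"category": "Hate", "severity": 0},
        {"category": "Sexual", "severity": 2},
        {"category": "Violence", "severity": 2},  # below the Violence threshold of 4
    ]
    print(violations(sample))  # only "Sexual" meets its threshold
    ```

    The key point is that an image flagged at low severity (e.g. Violence = 2) need not be treated as a violation at all; many false positives disappear simply by ignoring severities below your chosen threshold.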

    If this answers your query, please click Accept Answer and Yes for "Was this answer helpful". If you have any further questions, do let us know.

