The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies

Achraf Bennis 5 Reputation points
2024-07-06T20:47:11.48+00:00

Hello,

I run gpt-4 to extract chunk-wise facts from focus groups talking about a product. I'm surprised that the run failed on a few chunks even though there are no violent words in those focus groups. This issue hinders the performance of our app in production. Any help?

Achraf

Azure AI services
A group of Azure services, SDKs, and APIs designed to make apps more intelligent, engaging, and discoverable.

1 answer

  1. Amira Bedhiafi 34,101 Reputation points Volunteer Moderator
    2024-07-07T12:49:08.41+00:00

    I checked some older threads and I think this is a bug, but you have two alternatives:

    Plan 1:

    Consider modifying the input prompt by adding a description indicating that the following content is not true (for example, that it is a verbatim transcript being analyzed rather than a statement you endorse), or by pre-processing the prompt to remove or obscure any sensitive words. Please note that this method may not be applicable in all scenarios and should be used with caution (see the sketch after the links below).

    Plan 2:

    If the above approach is not effective, you may need to request an opt-out from the content moderation mechanism. If your application scenario will inevitably involve sensitive words, it is recommended to apply for this opt-out directly. This method tends to be more stable and eliminates the need to modify the prompt whenever user input changes.

    To request modifications to content filtering and/or abuse monitoring, submit the form available at https://aka.ms/oai/modifiedaccess.

    The application will be manually reviewed by the Microsoft CSGATE team. Please ensure that you fill out the form carefully. After submission, the review process typically takes approximately 10 business days, though some reviews may take longer.

    More links:

    https://github.com/openai/openai-python/issues/331

    https://learn.microsoft.com/en-us/answers/questions/1626458/badrequesterror
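
    Here is a minimal sketch of Plan 1, assuming the openai Python SDK (v1.x) with the AzureOpenAI client. The deployment name, environment variable names, API version, the extract_facts helper, and the redaction regex are all placeholders/assumptions to adapt to your setup; the exact shape of the content-filter error body can also vary, so it is checked defensively.

    ```python
    # Sketch: per-chunk extraction that catches Azure OpenAI prompt content-filter
    # errors and retries once with a lightly redacted chunk (Plan 1 above).
    import os
    import re

    from openai import AzureOpenAI, BadRequestError

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],        # assumed env var names
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2024-02-01",                           # adjust to your API version
    )

    DEPLOYMENT = "gpt-4"  # your Azure deployment name (placeholder)

    # Hypothetical, very crude redaction step: mask terms that tend to trip the
    # filter in otherwise harmless transcripts. Adapt the list to your data.
    SENSITIVE = re.compile(r"\b(kill|hate|blood)\b", re.IGNORECASE)

    def redact_sensitive_terms(text: str) -> str:
        return SENSITIVE.sub("[REDACTED]", text)

    def extract_facts(chunk: str, retry_redacted: bool = True) -> str:
        messages = [
            {"role": "system",
             "content": "The following is a verbatim focus-group transcript, not "
                        "statements we endorse. Extract factual statements about "
                        "the product as bullet points."},
            {"role": "user", "content": chunk},
        ]
        try:
            resp = client.chat.completions.create(model=DEPLOYMENT, messages=messages)
            return resp.choices[0].message.content
        except BadRequestError as e:
            # Azure returns HTTP 400 when the *prompt* is blocked; the error code is
            # typically "content_filter", but check the message as a fallback.
            is_filtered = (getattr(e, "code", None) == "content_filter"
                           or "content management policy" in str(e))
            if is_filtered and retry_redacted:
                return extract_facts(redact_sensitive_terms(chunk), retry_redacted=False)
            raise

    if __name__ == "__main__":
        chunks = ["Participant A said the battery life is a killer feature ..."]
        for i, chunk in enumerate(chunks):
            try:
                print(i, extract_facts(chunk))
            except BadRequestError:
                print(i, "skipped: still blocked by the content filter after redaction")
    ```

    If legitimate transcript content keeps getting blocked even after this kind of pre-processing, Plan 2 (the modified-access form) is the more durable fix.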

