Content filter and policy violation issues when using the OpenAI API

syyu 5 Reputation points
2024-09-08T08:37:52.7866667+00:00

Content policy violations occur too frequently when using the OpenAI service.

Even unremarkable prompts, such as "Make it like the picture" or "Make it more exciting," receive no answer because they are flagged as rule violations.

Do you know a solution to this?

The following errors occur:

openai.BadRequestError: Error code: 400 - {'error': {'code': 'content_policy_violation', 'inner_error': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_results': {'jailbreak': {'detected': False, 'filtered': False}}}, 'message': 'Your request was rejected as a result of our safety system. Your prompt may contain text that is not allowed by our safety system.', 'type': 'invalid_request_error'}}

openai.BadRequestError: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': True, 'severity': 'medium'}, 'violence': {'filtered': False, 'severity': 'safe'}}}}}
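For context, a minimal sketch of how these errors can be caught and inspected, assuming the `openai` Python SDK v1.x against an Azure OpenAI resource. The deployment name, environment variable names, and prompt are placeholders, and the exact shape of the parsed error body differs between the two messages above, so both key spellings are checked:

```python
import os

from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],         # assumed env var names
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-06-01",
)

try:
    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # hypothetical deployment name
        messages=[{"role": "user", "content": "Make it more exciting"}],
    )
    print(response.choices[0].message.content)
except BadRequestError as e:
    # e.body holds the parsed 'error' object; the two messages above
    # spell the code and inner-error keys differently, so check both.
    body = e.body if isinstance(e.body, dict) else {}
    if body.get("code") in ("content_filter", "content_policy_violation"):
        inner = body.get("innererror") or body.get("inner_error") or {}
        print("Content filter details:",
              inner.get("content_filter_result") or inner.get("content_filter_results"))
    else:
        raise
```

Logging the `content_filter_result` this way shows which category (hate, self_harm, sexual, violence, or jailbreak) actually tripped the filter, which is the first step in diagnosing a false positive.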

Azure OpenAI Service

1 answer

  1. Axel B Andersen 146 Reputation points
    2024-09-20T10:07:41.5233333+00:00

    This issue has been resolved. It turns out that the 2024-05-13 model version sometimes omits the content parameter of the message; updating to 2024-08-06 removed that issue. Additionally, the filter still evaluated the initial prompt as an attempt to jailbreak the model. Changing the prompt removed the jailbreak issue.

    The prompt that generated a jailbreak warning on one deployment did not generate a jailbreak warning on another deployment. It seems important to test all prompts on the specific deployments they will actually run on; a sketch of such a check follows.
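    A minimal sketch of a per-deployment check, again assuming the `openai` Python SDK v1.x and Azure OpenAI; the deployment names, environment variables, and prompt are placeholders:

    ```python
    import os

    from openai import AzureOpenAI, BadRequestError

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_version="2024-06-01",
    )

    deployments = ["gpt-4o-2024-05-13", "gpt-4o-2024-08-06"]  # hypothetical names
    prompt = "Make it like the picture"

    # Send the same prompt to each deployment and report which ones the
    # content filter rejects, since the same prompt can pass on one
    # deployment and be filtered on another.
    for deployment in deployments:
        try:
            client.chat.completions.create(
                model=deployment,
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"{deployment}: accepted")
        except BadRequestError as e:
            print(f"{deployment}: rejected by the filter: {e}")
    ```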

