I checked some older threads and I believe this is a bug, but you have two alternatives:
Plan 1:
Consider modifying the input prompt, either by adding a description indicating that the following content is fictional, or by pre-processing the prompt to remove or obscure any sensitive words. Note that this method may not work in every scenario and should be used with caution.
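As a rough illustration of the pre-processing idea, here is a minimal sketch that masks a (hypothetical, domain-specific) list of sensitive words before the prompt is sent to the API. The word list and the masking strategy are assumptions you would need to tune for your own application:

```python
import re

# Hypothetical word list -- replace with terms that trip the filter in your domain.
SENSITIVE_WORDS = ["attack", "exploit"]

def preprocess_prompt(prompt: str) -> str:
    """Mask sensitive words with asterisks before sending the prompt to the API."""
    for word in SENSITIVE_WORDS:
        # Case-insensitive replacement; keep the original length so context is preserved.
        prompt = re.sub(re.escape(word), "*" * len(word), prompt, flags=re.IGNORECASE)
    return prompt

print(preprocess_prompt("How does this Attack work?"))
```

You could equally prepend a short disclaimer string (e.g. stating the content is fictional) instead of masking, depending on which the filter responds to better in your tests.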
Plan 2:
If the above approach is not effective, you may need to request an exemption from the content moderation mechanism. If your application scenario will inevitably involve sensitive words, it is recommended to apply for this exemption directly. This method is more stable and eliminates the need to adjust the prompt every time user input changes.
To request modifications to content filtering and/or abuse monitoring, submit the form available at https://aka.ms/oai/modifiedaccess.
Applications are manually reviewed by the Microsoft CSGATE team, so please fill out the form carefully. After submission, the review typically takes about 10 business days, though some reviews may take longer.
More links:
https://github.com/openai/openai-python/issues/331
https://learn.microsoft.com/en-us/answers/questions/1626458/badrequesterror