Handling Content Filters for generating SOAP Notes for a Medical Use Case in Agents Service

Omar Elhanafy 20 Reputation points
2025-12-11T15:28:58.36+00:00

Hey Everyone,

I am using the Azure OpenAI API with the gpt-5-mini model for a medical use case: transcribing a doctor-patient appointment and generating SOAP notes from it. In our case, however, the patient might mention violence or self-harm to the doctor, which triggers the Content Safety filter.

I had a similar use case as seen here, but since this one is not chat-based, I am not sure whether including a Safety Preprocessor would work.

Approaches Used:

  • Configuring the self-harm category to the lowest blocking threshold and trying to enforce the desired behavior through prompting. The behavior is still not stable: the agent run can enter an incomplete status whenever the patient triggers a high severity level in one of the content filtering categories.

What are the options for handling the content filters so that the agent can detect self-harm and act according to the required logic?

Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.

1 answer

  1. Q&A Assist
    2025-12-11T15:30:12.3466667+00:00
    AI-generated content. This question contains an answer created with AI from Q&A Assist.

    To handle content filters effectively in your medical use case for generating SOAP notes, consider the following approaches:

    1. Adjust Content Filter Settings: Since you are already configuring the self-harm category to the lowest blocking level, ensure that you also review the settings for other relevant categories such as violence. Azure OpenAI content filter configurability lets you set severity thresholds for prompts and completions separately, so you can tailor the filtering to your specific needs. (The second sketch after this list shows how a blocked prompt or completion surfaces to your code.)
    2. Implement a Safety Preprocessor: Although you mentioned uncertainty about using a Safety Preprocessor in a non-chat context, it can still be beneficial. A preprocessor can analyze the input for sensitive topics before passing it to the model. If a patient mentions self-harm or violence, the preprocessor can flag this and allow for appropriate handling, such as alerting a healthcare professional or providing resources. (See the first sketch after this list.)
    3. Use Annotate-Only Mode: If your use case is approved for modified content filters, consider using the "Annotate only" mode. It runs the content filter without blocking the content and returns annotations for any detected issues, so you can log instances of self-harm or violence while still generating the SOAP note. (The second sketch after this list shows how to read these annotations from the response.)
    4. Testing and Iteration: Since you are facing stability issues, it may be helpful to conduct iterative testing with different configurations. Monitor how the model responds to various inputs and adjust the severity settings accordingly to find a balance that minimizes incomplete statuses while still ensuring safety.
    5. Consult with Azure Support: If you continue to encounter challenges, reaching out to Azure support for guidance on best practices specific to your use case may provide additional insights and solutions.
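
    For point 2, below is a minimal sketch of such a preprocessor built on the azure-ai-contentsafety Python SDK. The endpoint, key, severity threshold, and routing logic are assumptions you would replace with your own resource details and clinical-escalation rules:

    ```python
    # pip install azure-ai-contentsafety
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
    from azure.core.credentials import AzureKeyCredential

    # Placeholder endpoint and key for your Content Safety resource.
    client = ContentSafetyClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    def screen_transcript_chunk(text: str, threshold: int = 4) -> dict:
        """Return the categories whose severity meets the threshold.

        With the default output type, severity is one of 0, 2, 4, or 6.
        """
        result = client.analyze_text(
            AnalyzeTextOptions(
                text=text,
                categories=[TextCategory.SELF_HARM, TextCategory.VIOLENCE],
            )
        )
        return {
            item.category: item.severity
            for item in result.categories_analysis
            if item.severity is not None and item.severity >= threshold
        }

    chunk = "Patient reports thoughts of self-harm during the past week."
    flags = screen_transcript_chunk(chunk)
    if flags:
        # Route the chunk through your required logic (e.g. tag the note
        # section and alert a clinician) instead of sending it to the
        # model as-is.
        print(f"Sensitive content detected: {flags}")
    ```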
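
    For points 1 and 3, it also helps to handle filter events explicitly in code: a filtered prompt surfaces as an HTTP 400 error with code "content_filter", while a filtered completion surfaces as finish_reason == "content_filter" on the choice, and in both cases the service attaches per-category annotations. Below is a hedged sketch using the openai Python SDK; the endpoint, deployment name, and api-version are placeholders, and content_filter_results is an Azure-specific extra field on the response:

    ```python
    # pip install openai
    from openai import AzureOpenAI, BadRequestError

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-key>",
        api_version="2024-06-01",  # placeholder; use your API version
    )

    def generate_soap_note(transcript: str) -> str | None:
        try:
            response = client.chat.completions.create(
                model="<your-deployment-name>",  # placeholder deployment
                messages=[
                    {"role": "system",
                     "content": "Generate a SOAP note from this transcript."},
                    {"role": "user", "content": transcript},
                ],
            )
        except BadRequestError as e:
            # A filtered *prompt* is returned as HTTP 400 with error code
            # "content_filter"; the body carries per-category details.
            if e.code == "content_filter":
                body = e.body if isinstance(e.body, dict) else {}
                inner = body.get("innererror", {})
                print("Prompt blocked:", inner.get("content_filter_result"))
                return None
            raise

        choice = response.choices[0]
        if choice.finish_reason == "content_filter":
            # A filtered *completion*; annotations ride along as an
            # Azure-specific extra field on the choice object.
            print("Completion filtered:",
                  getattr(choice, "content_filter_results", None))
            return None
        return choice.message.content
    ```

    With annotate-only mode enabled on an approved filter policy, the same annotations are returned without the request being blocked, so the run can complete while you log and act on the flagged categories.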

    By implementing these strategies, you can enhance the reliability of your agent in handling sensitive content while generating SOAP notes effectively.


