Regarding Azure OpenAI service opt-out form

Eight 20 Reputation points
2025-06-11T09:39:39.85+00:00

Hi, I have a question regarding the Azure OpenAI service form "Azure OpenAI Limited Access Review: Modified Abuse Monitoring".

I've seen that opting out is an option, and since I want to avoid having my company's input data stored for up to 30 days, I'm thinking of applying for an opt-out.

However, I'm unable to complete the application because I don't understand the following points.

If you have any knowledge, please let me know.

Question 1: When I fill out the form below, in item 16 it says, "My organization will implement systems and measures to ensure that the use of Azure OpenAI complies with the Microsoft Generative AI Services Code of Conduct." But I don't know what kind of systems and measures I should specifically implement.

Question 2: Also under item 16, there is a checkbox that says, "I acknowledge that I understand the risks of turning off abuse monitoring." But I don't know what risks Microsoft has in mind, so I don't know whether I should check it.

Thank you in advance.

■Azure OpenAI Limited Access Review: Modified Abuse Monitoring

https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUOE9MUTFMUlpBNk5IQlZWWkcyUEpWWEhGOCQlQCN0PWcu


Accepted answer
  Ravada Shivaprasad 535 Reputation points, Microsoft External Staff, Moderator
    2025-06-11T21:59:28.3433333+00:00

    Hi Eight

    When responding to item 16 of the Azure OpenAI Limited Access Review form, your organization is being asked to confirm that it has implemented appropriate systems and measures to ensure responsible use of Azure OpenAI services in accordance with Microsoft's Generative AI Services Code of Conduct. In practice, this means having an internal framework in place to monitor, manage, and mitigate the risks of generative AI usage. That framework typically includes:

    - Content moderation mechanisms, such as filters or classifiers, to detect and block harmful, biased, or inappropriate outputs.
    - Detailed logging and auditing of prompt and completion activity to identify potential misuse.
    - Restricted access to the service through role-based access controls and secure authentication.
    - User education on responsible AI practices and the Code of Conduct, with clear escalation paths or review boards for handling edge cases or violations.
    - A documented incident response plan so you can quickly address misuse or unintended behavior, including the ability to disable access or roll back deployments if necessary.
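    For illustration only, here is a minimal sketch of the kind of prompt-logging and screening gate such a framework might include. The function name, blocklist, and logging setup are all hypothetical; a production system would use a managed moderation service (for example, Azure AI Content Safety) rather than a static keyword list.

```python
import logging
from datetime import datetime, timezone

# Hypothetical blocklist for the sketch -- in production you would call a
# managed moderation service instead of matching static keywords.
BLOCKED_TERMS = {"credit card dump", "make a weapon"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("aoai.audit")

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Record the prompt in an audit log and decide whether it may proceed.

    Returns True if the prompt passes the (illustrative) screening check.
    """
    timestamp = datetime.now(timezone.utc).isoformat()
    lowered = prompt.lower()
    blocked = any(term in lowered for term in BLOCKED_TERMS)
    # Every prompt is logged, blocked or not, so misuse can be audited later.
    audit_log.info("%s user=%s blocked=%s prompt=%r",
                   timestamp, user_id, blocked, prompt)
    return not blocked
```

    A gate like this would sit in front of every call your application makes to the Azure OpenAI endpoint, giving you the audit trail and abuse detection you are attesting to on the form.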

    As for the checkbox acknowledging the risks of turning off abuse monitoring, Microsoft is asking you to accept full responsibility for detecting and managing misuse internally. By opting out, you are disabling Microsoft’s default abuse monitoring system, which includes both automated tools and human reviewers that help identify harmful or policy-violating content. Without this safeguard, your organization assumes the risk of undetected misuse, which could lead to the generation of offensive, unsafe, or non-compliant content. This could expose your company to reputational damage, legal liability, or regulatory scrutiny—especially if your use case involves sensitive data or operates in a regulated industry. Therefore, checking this box signifies that you understand and accept these risks, and that your internal controls are robust enough to manage them independently.

    Reference: Azure Blog – Tools for Safer Generative AI; Azure Documentation – Content Filter Severity Levels

    Hope it helps!

    Thanks

    1 person found this answer helpful.
