Code of conduct for Azure OpenAI Service
The following Code of Conduct defines the requirements that all Azure OpenAI Service implementations must adhere to in good faith. This code of conduct is in addition to the Acceptable Use Policy in the Microsoft Online Services Terms.
Azure OpenAI Service is a Limited Access service that requires registration and is only available to approved enterprise customers and partners. Customers who wish to use this service are required to register through this form. To learn more, see Limited Access to Azure OpenAI Service.
Responsible AI mitigation requirements
Integrations with Azure OpenAI Service must, as appropriate for the application and circumstances:
- Implement meaningful human oversight
- Implement technical and operational measures to detect fraudulent user behavior in account creation and during use
- Implement strong technical limits on inputs and outputs to reduce the likelihood of misuse beyond the application's intended purpose
- Test applications thoroughly to find and mitigate undesirable behaviors
- Establish feedback channels
- Implement additional scenario-specific mitigations
To learn more, see the Azure OpenAI transparency note.
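As one illustration of the "strong technical limits on inputs and outputs" described above, the sketch below shows an application-level gate around a text-generation call. All names, limits, and the blocklist are hypothetical; a real deployment would combine checks like these with the service's built-in content filtering and abuse monitoring rather than rely on them alone.

```python
# Illustrative sketch only: application-level input/output limits placed
# around calls to a text-generation endpoint. All constants and names are
# hypothetical assumptions, not part of any official API.

MAX_INPUT_CHARS = 2000                   # assumed hard cap on prompt size
BLOCKED_TERMS = {"example-banned-term"}  # placeholder application blocklist


def screen_input(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); reject oversized or blocklisted prompts."""
    if len(prompt) > MAX_INPUT_CHARS:
        return False, "input exceeds length limit"
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False, "input matches blocklist"
    return True, "ok"


def screen_output(text: str, max_chars: int = 4000) -> str:
    """Truncate overly long completions before returning them to users."""
    return text[:max_chars]
```

Checks of this kind narrow an application to its intended purpose; rejected inputs can also be routed to the feedback and human-oversight channels the requirements call for.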
Integrations with Azure OpenAI Service must not:
- be used in any way that violates Microsoft’s Acceptable Use Policy, including but not limited to any use prohibited by law, regulation, government order, or decree, or any use that violates the rights of others;
- be used in any way that is inconsistent with this code of conduct, including the Limited Access requirements, the Responsible AI mitigation requirements, and the Content requirements;
- exceed the use case(s) you identified to Microsoft in connection with your request to use the service;
- interact with individuals under the age of consent in any way that could result in exploitation or manipulation or is otherwise prohibited by law or regulation;
- generate or interact with content prohibited in this Code of Conduct;
- be presented alongside or monetize content prohibited in this Code of Conduct;
- make decisions without appropriate human oversight if your application may have a consequential impact on any individual’s legal position, financial position, life opportunities, employment opportunities, human rights, or result in physical or psychological injury to an individual;
- infer protected characteristics about people or personally identifiable information without their explicit consent, unless used in a lawful manner by a law enforcement entity, court, or government official subject to judicial oversight in a jurisdiction that maintains a fair and independent judiciary;
- be used for unlawful tracking, stalking, or harassment of a person;
- be used to identify or verify individual identities based on media containing people’s faces or other physical, biological, or behavioral characteristics, or as otherwise prohibited in this Code of Conduct;
- be used for chatbots that (i) are erotic, romantic, or used for companionship purposes, or which are otherwise prohibited by this Code of Conduct; (ii) are personas of specific people without their explicit consent; (iii) claim to have special wisdom/insight/knowledge, unless very clearly labeled as being for entertainment purposes only; or (iv) enable end users to create their own chatbots without oversight;
- be used to infer gender or age from images of people;
- attempt to infer people’s emotional states from their facial expressions or facial movements; or
- without the individual’s valid consent, be used for ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual.
Content requirements
We prohibit the use of our service for processing or generating content that can inflict harm on individuals or society. Our content policies are intended to improve the safety of our platform.
These content requirements apply to all models developed by OpenAI and hosted in Azure OpenAI, such as GPT-3, GPT-4, GPT-4 Turbo with Vision, Codex models, DALL·E 2, DALL·E 3, and Whisper, and cover both content provided as input to the service and content generated as output from the service.
Exploitation and Abuse
Child sexual exploitation and abuse
Azure OpenAI Service prohibits content that describes, features, or promotes child sexual exploitation or abuse, whether or not prohibited by law. This includes sexual content involving a child or that sexualizes a child.
Azure OpenAI Service prohibits content that describes or is used for purposes of grooming of children. Grooming is the act of an adult building a relationship with a child for the purposes of exploitation, especially sexual exploitation. This includes communicating with a child for the purpose of sexual exploitation, trafficking, or other forms of exploitation.
Non-consensual intimate content
Azure OpenAI Service prohibits content that describes, features, or promotes non-consensual intimate activity.
Azure OpenAI Service prohibits content that describes, features, promotes, or is used for purposes of solicitation of commercial sexual activity and sexual services. This includes encouragement and coordination of real sexual activity.
Azure OpenAI Service prohibits content describing or used for purposes of human trafficking. This includes the recruitment of individuals, facilitation of transport, and payment for, and the promotion of, exploitation of people such as forced labor, domestic servitude, sexual slavery, forced marriages, and forced medical procedures.
Suicide and self-injury
Azure OpenAI Service prohibits content that describes, praises, supports, promotes, glorifies, encourages, or instructs individuals to engage in self-injury or to take their own life.
Identification and inference
Azure OpenAI Service prohibits identification or verification of individual identities using media containing people’s faces by any user, including by or for state or local police in the United States.
Azure OpenAI Service prohibits inferring a person’s emotional state based on facial expressions. This includes inferring internal emotions such as anger, disgust, happiness, sadness, surprise, or fear, or other terms commonly used to describe the emotional state of a person. Azure OpenAI Service also prohibits the inference of gender, age, or facial expressions, or the inference of the presence of facial hair, hair, or makeup.
Violent Content and Conduct
Graphic violence and gore
Azure OpenAI Service prohibits content that describes, features, or promotes graphic violence or gore.
Terrorism and violent extremism
Azure OpenAI Service prohibits content that depicts an act of terrorism; praises or supports a terrorist organization, terrorist actor, or violent terrorist ideology; encourages terrorist activities; offers aid to terrorist organizations or terrorist causes; or aids in recruitment to a terrorist organization.
Violent threats, incitement, and glorification of violence
Azure OpenAI Service prohibits content advocating or promoting violence toward others through violent threats or incitement.
Hate speech and discrimination
Azure OpenAI Service prohibits content that attacks, denigrates, intimidates, degrades, targets, or excludes individuals or groups on the basis of traits such as actual or perceived race, ethnicity, national origin, gender, gender identity, sexual orientation, religious affiliation, age, disability status, caste, or any other characteristic that is associated with systemic prejudice or marginalization.
Bullying and harassment
Azure OpenAI Service prohibits content that targets individual(s) or group(s) with threats, intimidation, insults, degrading or demeaning language or images, promotion of physical harm, or other abusive behavior such as stalking.
Deception, disinformation, and inauthentic activity
Azure OpenAI Service prohibits content that is intentionally deceptive and likely to adversely affect the public interest, including deceptive or untrue content relating to health, safety, election integrity, or civic participation. Azure OpenAI Service also prohibits inauthentic interactions, such as fake accounts, automated inauthentic activity, impersonation to gain unauthorized information or privileges, and claims to be from any person, company, government body, or entity without explicit permission to make that representation.
Active malware or exploits
Azure OpenAI Service prohibits content that supports unlawful active attacks or malware campaigns that cause technical harms, such as delivering malicious executables, organizing denial of service attacks, or managing command and control servers.
Additional content policies
We prohibit the use of our Azure OpenAI Service for scenarios in which the system is likely to generate undesired content due to limitations in the models or scenarios in which the system cannot be applied in a way that properly manages potential negative consequences to people and society. Without limiting the foregoing restriction, Microsoft reserves the right to revise and expand the above Content requirements to address specific harms to people and society.
This includes prohibiting content that is sexually graphic, including consensual pornographic content and intimate descriptions of sexual acts.
We may at times limit our service's ability to respond to particular topics, such as probing for personal information or seeking opinions on sensitive topics or current events.
We prohibit the use of Azure OpenAI Service for activities that significantly harm other individuals, organizations, or society, including but not limited to use of the service for purposes in conflict with the applicable Azure Legal Terms and the Microsoft Product Terms.
If you suspect that Azure OpenAI Service is being used in a manner that is abusive or illegal, infringes on your rights or the rights of other people, or violates these policies, you can report it at the Report Abuse Portal.