Code of conduct for Azure OpenAI Service

The following Code of Conduct defines the requirements that all Azure OpenAI Service implementations must adhere to in good faith. This code of conduct is in addition to the Acceptable Use Policy in the Microsoft Online Services Terms.

Access requirements

Azure OpenAI Service is a Limited Access service that requires registration and is only available to approved enterprise customers and partners. Customers who wish to use this service are required to register through this form. To learn more, see Limited Access to Azure OpenAI Service.

Responsible AI mitigation requirements

Integrations with Azure OpenAI Service must, as appropriate for the application and circumstances:

  • Implement meaningful human oversight
  • Implement technical and operational measures to detect fraudulent user behavior in account creation and during use
  • Implement strong technical limits on inputs and outputs to reduce the likelihood of misuse beyond the application's intended purpose
  • Test applications thoroughly to find and mitigate undesirable behaviors
  • Establish feedback channels
  • Implement additional scenario-specific mitigations

To learn more, see the Azure OpenAI transparency note.

Integrations with Azure OpenAI Service must not:

  • Be used in any way that violates Microsoft’s Acceptable Use Policy, including but not limited to any use prohibited by law, regulation, government order, or decree, or any use that violates the rights of others;
  • Be used in any way that is inconsistent with this code of conduct, including the Limited Access requirements, the Responsible AI mitigation requirements, and the Content requirements;
  • Exceed the use case(s) you identified to Microsoft in connection with your request to use the service;
  • Interact with individuals under the age of consent in any way that could result in exploitation or manipulation or is otherwise prohibited by law or regulation;
  • Generate or interact with content prohibited in this Code of Conduct;
  • Be presented alongside or monetize content prohibited in this Code of Conduct;
  • Make decisions without appropriate human oversight if your application may have a consequential impact on any individual’s legal position, financial position, life opportunities, employment opportunities, human rights, or result in physical or psychological injury to an individual;
  • Use subliminal techniques (for example, visual or auditory signals beyond a normal person's range of perception) with the intent to deceive or cause harm;
  • Use purposefully manipulative or deceptive techniques with the objective or effect of distorting the behavior of a person by impairing their ability to make an informed decision;
  • Exploit any of the vulnerabilities of a person (e.g., age, disability, or socio-economic situation);
  • Be used for social scoring or predictive profiling that would lead to discriminatory, unfair, biased, detrimental, unfavorable, or harmful treatment of certain persons or groups of persons;
  • Categorize people based on their biometric data to infer characteristics or affiliations about them such as race, political opinions, trade union membership, religious or philosophical beliefs, or sex life or sexual orientation;
  • Be used to infer people’s sensitive attributes such as gender, race, nationality, religion, or specific age (not including age range, mouth state, and hair color);
  • Attempt to infer people’s emotional states from their physical, physiological, or behavioral characteristics (e.g., facial expressions, facial movements, or speech patterns);
  • Be used for chatbots that (i) are erotic, romantic, or used for companionship purposes, or which are otherwise prohibited by this Code of Conduct; (ii) are personas of specific people without their explicit consent; (iii) claim to have special wisdom/insight/knowledge, unless very clearly labeled as being for entertainment purposes only; or (iv) enable end users to create their own chatbots without oversight;
  • Except for the use cases permitted in the Limited Access form, be used to identify or verify individual identities based on media containing people’s faces or other physical, physiological, or behavioral characteristics, or as otherwise prohibited in this Code of Conduct;
  • Without the individual’s valid consent, be used for ongoing surveillance or real-time or near real-time identification or persistent tracking of the individual using any of their personal information, including biometric data;
  • Be used for facial recognition purposes by or for a state or local police department in the United States;
  • Be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, “in the wild” environments, which includes (without limitation) police officers on patrol using body-worn or dash-mounted cameras using facial recognition technology to attempt to identify individuals present in a database of suspects or prior inmates;
  • Be used to generate content with the purpose of removing or altering content credentials or other provenance methods, marks, or signals (“AI Content Credentials”) that indicate that the content was generated by a Microsoft Generative AI Service;
  • Be used to generate content with the purpose of misleading others about whether the content was generated by a Microsoft Generative AI Service; or
  • Be used to detect AI Content Credentials with the purpose of removing or altering them.

Content requirements

We prohibit the use of our services for processing, generating, classifying, or filtering content in ways that can inflict harm on individuals or society. Our content policies are intended to improve the safety of our platform.

These content requirements apply to (and references to Azure OpenAI Service below encompass) the use of all Microsoft Generative AI Services and Azure AI Content Safety. This includes, but is not limited to, use of features of Azure OpenAI Service and all content provided as input to or generated as output from all models available in Azure OpenAI Service, such as GPT-3, GPT-4, GPT-4 Turbo with Vision, Codex models, DALL·E 2, DALL·E 3, and Whisper. These requirements apply to the use of Azure AI Content Safety, including features such as Custom Categories, and to all content provided as input to the service and content generated as output from the service regardless of content filter settings.

Exploitation and Abuse

Child sexual exploitation and abuse

Azure OpenAI Service prohibits content that describes, features, or promotes child sexual exploitation or abuse, whether or not prohibited by law. This includes sexual content involving a child or that sexualizes a child.

Grooming

Azure OpenAI Service prohibits content that describes, or is used for the purpose of, grooming children. Grooming is the act of an adult building a relationship with a child for the purposes of exploitation, especially sexual exploitation. This includes communicating with a child for the purpose of sexual exploitation, trafficking, or other forms of exploitation.

Non-consensual intimate content

Azure OpenAI Service prohibits content that describes, features, or promotes non-consensual intimate activity.

Sexual solicitation

Azure OpenAI Service prohibits content that describes, features, promotes, or is used for purposes of solicitation of commercial sexual activity and sexual services. This includes encouragement and coordination of real sexual activity.

Trafficking

Azure OpenAI Service prohibits content describing or used for purposes of human trafficking. This includes the recruitment of individuals, facilitation of transport, and payment for, and the promotion of, exploitation of people such as forced labor, domestic servitude, sexual slavery, forced marriages, and forced medical procedures.

Suicide and Self-Injury

Azure OpenAI Service prohibits content that describes, praises, supports, promotes, glorifies, or encourages self-injury or suicide, or that instructs individuals on self-injury or taking their own lives.

Facial recognition by U.S. police

Azure OpenAI Service prohibits identification or verification of individual identities using media containing people’s faces, including by or for state or local police departments in the United States.

Emotional State and Sensitive Characteristics Analysis

Azure OpenAI Service prohibits the inference of a person’s emotional state from their physical, physiological, or behavioral characteristics. This includes inferring internal emotions such as anger, disgust, happiness, sadness, surprise, fear, or other terms commonly used to describe the emotional state of a person. Azure OpenAI Service also prohibits the inference of a person’s sensitive attributes such as gender, race, nationality, religion, or specific age, not including their age range.

Violent Content and Conduct

Graphic violence and gore

Azure OpenAI Service prohibits content that describes, features, or promotes graphic violence or gore.

Terrorism and Violent Extremism

Azure OpenAI Service prohibits content that depicts an act of terrorism; praises, or supports a terrorist organization, terrorist actor, or violent terrorist ideology; encourages terrorist activities; offers aid to terrorist organizations or terrorist causes; or aids in recruitment to a terrorist organization.

Violent Threats, Incitement, and Glorification of Violence

Azure OpenAI Service prohibits content advocating or promoting violence toward others through violent threats or incitement.

Harmful Content

Hate speech and discrimination

Azure OpenAI Service prohibits content that attacks, denigrates, intimidates, degrades, targets, or excludes individuals or groups on the basis of traits such as actual or perceived race, ethnicity, national origin, gender, gender identity, sexual orientation, religious affiliation, age, disability status, caste, or any other characteristic that is associated with systemic prejudice or marginalization.

Bullying and harassment

Azure OpenAI Service prohibits content that targets individual(s) or group(s) with threats, intimidation, insults, degrading or demeaning language or images, promotion of physical harm, or other abusive behavior such as stalking.

Deception, disinformation, and inauthentic activity

Azure OpenAI Service prohibits content that is intentionally deceptive and likely to adversely affect the public interest, including deceptive or untrue content relating to health, safety, election integrity, or civic participation. Azure OpenAI Service also prohibits inauthentic interactions, such as fake accounts, automated inauthentic activity, impersonation to gain unauthorized information or privileges, and claims to be from any person, company, government body, or entity without explicit permission to make that representation.

Active malware or exploits

Azure OpenAI Service prohibits content that supports unlawful active attacks or malware campaigns that cause technical harms, such as delivering malicious executables, organizing denial-of-service attacks, or managing command-and-control servers.

Additional content policies

We prohibit the use of our Azure OpenAI Service for scenarios in which the system is likely to generate undesired content due to limitations in the models or scenarios in which the system cannot be applied in a way that properly manages potential negative consequences to people and society. Without limiting the foregoing restriction, Microsoft reserves the right to revise and expand the above Content requirements to address specific harms to people and society.

This includes prohibiting content that is sexually graphic, including consensual pornographic content and intimate descriptions of sexual acts.

We may at times limit our service's ability to respond to particular topics, such as probing for personal information or seeking opinions on sensitive topics or current events, even if not prohibited by this Code of Conduct.

We prohibit the use of Azure OpenAI Service for activities that significantly harm other individuals, organizations, or society, including but not limited to use of the service for purposes in conflict with the applicable Azure Legal Terms and the Microsoft Product Terms.

Azure AI Content Safety must not be used to collect harmful content based on the above categories, or to classify, collect, or filter content in a way that would violate the other sections of this Code of Conduct, except as provided in the Limited Exception below.

Limited exception

Customers are permitted to provide, generate, classify, collect, and filter content in ways that would otherwise violate this Code of Conduct solely (1) to evaluate, train, and improve safety systems and applications for Customer’s use to the extent permitted by the Microsoft Product Terms and (2) to evaluate and test Microsoft Generative AI Services to the extent permitted by the Penetration Testing Rules of Engagement. Customers may use any resulting harmful content solely for evaluation and reporting and not for any other purpose. Customers remain responsible for all legal compliance relating to such content, including without limitation, retention, destruction, and reporting as necessary.

Report abuse

If you suspect that Azure OpenAI Service is being used in a manner that is abusive or illegal, infringes on your rights or the rights of other people, or violates these policies, you can report it at the Report Abuse Portal.
