Discuss practices for responsible AI at Microsoft


It can be challenging to design and implement an effective AI governance system. In this unit, we take Microsoft as an example and explain how the company ensures that responsible AI practices are followed across the organization. As you read through this use case, consider how you could apply these ideas in your own organization.

In the following video, Natasha Crampton, Vice President and Chief Responsible AI Officer at Microsoft, provides an overview of how Microsoft enforces responsible AI practices.

Our governance structure uses a hub-and-spoke model to provide the accountability and authority to drive initiatives while also enabling responsible AI policies to be implemented at scale. That is, it combines the centralized and decentralized approaches discussed in the last unit.

Centralized governance

There are three bodies at Microsoft to provide centralized governance: the Senior Leadership Team, the Office of Responsible AI, and the Aether Committee. An important hallmark of our approach to responsible AI is having this ecosystem to operationalize responsible AI across the company, rather than a single organization or individual leading this work.

Senior Leadership Team

The Senior Leadership Team is ultimately accountable for the company’s direction on responsible AI. This group is the final decision-maker on the most sensitive, novel, and significant AI development and deployment matters. It sets the company’s AI principles, values, and human rights commitments.

Office of Responsible AI

The Office of Responsible AI implements and maintains our commitment to responsible AI governance by working with stakeholders across the company to:

  • Develop and maintain our governance framework.
  • Define roles and responsibilities for governing bodies.
  • Implement a company-wide reporting and decision-making process.
  • Orchestrate responsible AI training for all employees.

The Office of Responsible AI has four key functions:

  • Internal policy: Setting the company-wide rules for enacting responsible AI, and defining roles and responsibilities for teams involved in this effort.
  • Enablement: Building readiness to adopt responsible AI practices, both within the company and among our customers and partners.
  • Case management: Review of sensitive use cases to help ensure that our development and deployment work upholds our AI principles.
  • Public policy: Helping to shape new laws, norms, and standards, with the goal of ensuring that the promise of AI technology is realized for the benefit of society at large.

Aether Committee

The Aether Committee (AI, Ethics, and Effects in Engineering and Research) serves an advisory role to the senior leadership, the Office of Responsible AI, and other teams across the company. It provides guidance on questions, challenges, and opportunities with the development and fielding of AI technologies.

The Aether Committee has six working groups that focus on specific subjects, grounded in our AI principles. The working groups develop tools, best practices, and tailored implementation guidance related to their respective areas of expertise. Learnings from the working groups and main committee are key in developing new policies, and declining or placing limits on sensitive use cases.

Decentralized governance

Enacting responsible AI at scale across an organization relies on a strong network across the company to help implement organization-wide rules, drive awareness, and request support on issues that raise questions about the application of our AI principles.

Responsible AI Champs

Our network includes Responsible AI Champs, employees nominated by their leadership teams from within key engineering and field teams. They serve as responsible AI advisors (in addition to their full-time roles), focusing on informing decision-makers rather than policing them.

The Responsible AI Champs have five key functions:

  • Raising awareness of responsible AI principles and practices within teams and workgroups.
  • Helping teams and workgroups implement prescribed practices throughout the AI feature, product, or service lifecycle.
  • Advising leaders on the benefits of responsible AI development and the potential effects of unintended harms.
  • Identifying and escalating questions and sensitive uses of AI through available channels.
  • Fostering a culture of customer-centricity and global perspective, by growing a community of Responsible AI evangelists in their organizations and beyond.

To develop and deploy AI with minimal friction to engineering practices and customers, we're investing in patterns, practices, and tools. Some engineering groups have assembled teams to help them follow the company-wide rules and accelerate the development of implementation patterns, practices, and tools.

Every employee

The final and most important part of our approach to responsible AI is the role that every employee plays, with support from their managers and business leaders. Responsible AI is a key part of mandatory employee training, and we have released additional educational assets that enable employees to delve deeper into specific areas of responsible AI. We also offer numerous responsible AI development tools that help employees build AI responsibly. These resources empower all our employees to advance the company’s important work with AI. At the same time, every employee is responsible for upholding our responsible AI principles and following the company-wide practices we have adopted in pursuit of that end.

We expect every Microsoft employee to:

  • Develop a general understanding of our AI principles.
  • Report and escalate sensitive uses.
  • Contact their Responsible AI Champ when they need guidance on responsible AI.

Next, let's see this governance model in action as it flags and addresses sensitive use cases of AI.