Prepare for the implications of responsible AI

AI is the defining technology of our time. It's already enabling faster and more profound progress in nearly every field of human endeavor and helping to address some of society's most daunting challenges. For example, AI can help people with visual disabilities understand images by generating descriptive text for them. In another example, AI can help farmers produce enough food for the growing global population.

At Microsoft, we believe that the computational intelligence of AI should be used to amplify the innate creativity and ingenuity of humans. Our vision for AI is to empower every developer to innovate, empower organizations to transform industries, and empower people to transform society.

Societal implications of AI

As with all great technological innovations of the past, the use of AI technology has broad impacts on society, raising complex and challenging questions about the future we want to see. AI has implications for decision-making across industries, for data security and privacy, and for the skills people need to succeed in the workplace. As we look to this future, we must ask ourselves:

  • How do we design, build, and use AI systems that create a positive impact on individuals and society?
  • How can we best prepare workers for the effects of AI?
  • How can we attain the benefits of AI while respecting privacy?

The importance of a responsible approach to AI

It's important to recognize that as new intelligent technology emerges and proliferates throughout society, its benefits are accompanied by unintended and unforeseen consequences. Some of these consequences have significant ethical ramifications and the potential to cause serious harm. While organizations can't predict the future, it's our responsibility to make a concerted effort to anticipate and mitigate the unintended consequences of the technology we release into the world through deliberate planning and continual oversight.

Threats

Each breakthrough in AI technologies brings a new reminder of our shared responsibility. For example, in 2016, Microsoft released a chatbot on Twitter (now X) called Tay, which could learn from interactions with users. The goal was to enable the chatbot to better replicate human communication and personality traits. However, within 24 hours, users realized that the chatbot would learn from bigoted rhetoric, and they turned it into a vehicle for hate speech. This experience is one example of why we must consider human threats when designing AI systems.

Novel threats require a constant evolution in our approach to responsible AI. For example, because generative AI enables people to create or edit videos, images, and audio files so convincingly that they can pass for authentic recordings, media authenticity is becoming harder to verify. In response, Microsoft is teaming up with other technology and news stakeholders to develop technical standards to address deepfake-related manipulation.
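To make the content-provenance idea behind such standards concrete, here's a toy sketch in Python. It is not the actual mechanism of standards such as C2PA, which Microsoft co-founded: those embed rich, signed manifests and use public-key certificates, whereas this example uses a simple shared-key HMAC, and the signing key and media bytes are hypothetical.

```python
# Toy illustration of content provenance: attach a cryptographic tag to
# media at publication time, then verify it later. Real standards use
# public-key signatures and signed manifests; this only shows the core idea.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the publisher

def sign_media(media_bytes: bytes) -> str:
    """Produce a tamper-evident tag for the media at publication time."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media hasn't been altered since it was signed."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"...video bytes..."
tag = sign_media(original)
print(verify_media(original, tag))               # True: unmodified
print(verify_media(b"...edited video...", tag))  # False: manipulated
```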

Note

To prepare for new types of attacks that could influence learning datasets, Microsoft developed technology such as advanced content filters and introduced supervisors for AI systems with automatic learning capabilities. Current generative AI models, such as those provided in Azure AI Services or Bing Chat, are built upon these insights.
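As a concrete illustration of filtering inputs before a system learns from them, here is a minimal sketch. It is not Microsoft's implementation: the blocklist, function names, and placeholder terms are hypothetical, and production filters rely on trained classifiers rather than word lists.

```python
# Illustrative sketch only: a naive moderation gate in front of a chatbot
# that learns from user input. Real content filters, such as those in
# Azure AI Services, use trained classifiers rather than keyword lists.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real filter is far richer

def is_safe(message: str) -> bool:
    """Return True if the message passes the (simplified) content filter."""
    words = message.lower().split()
    return not any(word in BLOCKLIST for word in words)

def ingest_for_learning(message: str, training_buffer: list[str]) -> None:
    """Add a user message to the learning dataset only if it passes the filter."""
    if is_safe(message):
        training_buffer.append(message)
    # Unsafe messages are dropped (and could be logged for human review).

buffer: list[str] = []
ingest_for_learning("hello there", buffer)  # accepted
ingest_for_learning("slur1 too", buffer)    # rejected by the filter
print(buffer)                               # ['hello there']
```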

Biased outcomes

Another unintended consequence that organizations should keep in mind is that, without deliberate planning and design, AI may reinforce societal or other biases. It's important for developers to understand how bias can be introduced into either training data or machine learning models. This problem can be pervasive in prebuilt models, because the user may not be handling the training data themselves.

For example, consider a large financial lending institution that wants to develop a risk-scoring system for loan approvals. When engineers test the system before deployment, they realize that it only approves loans for male borrowers. Because the system was trained on past customers' data, it reproduced the historical gender bias of the loan officers who made those decisions. Validating the system before deployment allowed the engineers to identify and address the issue before the system went live.
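Here's what such a pre-deployment check might look like in code. This sketch uses Fairlearn, an open-source fairness toolkit that originated at Microsoft; the test data and the mapping onto the loan scenario are hypothetical stand-ins.

```python
# A sketch of a pre-deployment fairness audit with Fairlearn. The data
# below is a hypothetical stand-in for a real held-out test set.
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate

# True repayment outcomes, the model's approval decisions, and the
# sensitive attribute to audit against (all hypothetical).
y_true = pd.Series([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = pd.Series([1, 0, 1, 0, 0, 0, 0, 0])  # 1 = loan approved
sex = pd.Series(["M", "M", "M", "F", "F", "F", "F", "M"])

# selection_rate is the fraction of applicants approved within each group.
audit = MetricFrame(
    metrics=selection_rate,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(audit.by_group)      # e.g. F: 0.0, M: 0.5 -- approvals skew male
print(audit.difference())  # 0.5 -- the gap between the groups
```

A large gap in approval rates between groups doesn't prove discrimination on its own, but it flags the system for closer human review before it goes live.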

Note

At Microsoft, our researchers are exploring tools and techniques for detecting and reducing bias within AI systems. Prebuilt models are validated thoroughly, but they should nonetheless be used wisely, and their results should always be audited before taking action.

Sensitive use cases

Another illustration of our responsibility to mitigate unintended consequences involves sensitive technologies like facial recognition. Recently, there has been a growing demand for facial recognition technology, especially from law enforcement organizations that see the potential of the technology for use cases like finding missing children. However, we recognize that these technologies could put fundamental freedoms at risk. For example, they could enable continuous surveillance of specific individuals. We believe society has a responsibility to set appropriate boundaries for the use of these technologies, which includes ensuring governmental use of facial recognition technology remains subject to the rule of law.

While new laws and regulations must be written, they aren't a substitute for the responsibility that we all have when engaging with AI. By working together, businesses, governments, NGOs, and academic researchers can address sensitive use cases.

Note

At Microsoft, we assess and develop principles to govern our work with facial recognition technologies. We anticipate these principles will evolve over time as we continue to learn and partner with customers, other tech companies, academics, civil society, and others on this issue. Microsoft uses responsible AI practices to detect, prevent, and mitigate these problems, but any AI-related project should account for them as well.

Next, let's see how Microsoft’s six guiding principles for responsible AI can be applied within other organizations.