Artificial intelligence (AI) is the capability of a computer system to mimic human-like cognitive functions such as learning and problem-solving. An artificially intelligent computer system makes predictions or takes actions based on patterns in existing data and can then learn from its errors to increase its accuracy. A mature AI system processes new information quickly and accurately, which makes it useful for complex scenarios such as self-driving cars, image recognition programs, and virtual assistants.
Businesses around the world already use AI in a wide variety of applications, and intelligent technology is a growing field. As AI becomes more ubiquitous, use of AI in and for your product development must be a key component of your globalization strategy.
Two examples of AI in global product development are:
- using AI for translation
- ensuring that AI-based features work correctly for users in all target markets
What is responsible AI?
As artificial intelligence (AI) plays a larger role in our daily lives, it's more important than ever that AI systems are built to provide a helpful, safe, and trustworthy experience for everyone around the world. Microsoft defines six principles as the foundation for responsible AI practices, which are intended to keep people and their goals at the center of the design process and to consider the benefits and potential harms that AI systems can have on society. These principles are:
- Fairness – AI systems should treat all people fairly.
- Reliability and safety – AI systems should perform reliably and safely.
- Privacy and security – AI systems should be secure and respect privacy.
- Inclusiveness – AI systems should empower everyone and engage people.
- Transparency – AI systems should be understandable.
- Accountability – People should be accountable for AI systems.
For more information about Microsoft’s approach to responsible AI, see https://www.microsoft.com/ai/responsible-ai.
Avoiding potential AI bias with training data
Machine learning (ML) is the process of using mathematical models of data to help a computer learn without direct instruction. ML is considered a subset of artificial intelligence (AI). Machine learning uses algorithms to identify patterns within data, and those patterns are then used to create a data model that can make predictions. The adaptability of machine learning makes it a great choice in scenarios where the data is always changing, the nature of the task is always shifting, or coding a solution would be effectively impossible.
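As a minimal sketch of how a model's predictions are derived entirely from patterns in its training data, consider a simple nearest-neighbor classifier. The data, function names, and labels here are hypothetical, chosen only to illustrate the idea:

```python
# Minimal sketch: a 1-nearest-neighbor "model" whose predictions come
# entirely from the patterns in its training data (hypothetical example).

def train(examples):
    # For nearest-neighbor, the "model" is just the stored training data.
    return list(examples)

def predict(model, point):
    # Predict the label of the closest training example.
    closest = min(model, key=lambda example: abs(example[0] - point))
    return closest[1]

# Hypothetical training data: (feature, label) pairs.
training_data = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
model = train(training_data)

print(predict(model, 1.5))  # "small" -- closest to the "small" examples
print(predict(model, 8.5))  # "large" -- closest to the "large" examples
```

Because the predictions are determined by whatever examples were stored, the model can only be as representative as the data it was trained on, which is exactly why the choice of training data matters.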
The model that ML generates is defined by the data on which it was trained. The choice of training data can affect how the AI system based on the model performs. If the training data contains historical prejudices and stereotypes, the AI output might reflect the same prejudices and stereotypes. This isn't a desirable outcome for responsible AI.
For example, facial recognition systems trained predominantly on a sample of faces from one region might perform poorly on individuals from other regions. An AI-generated response could use a word or phrase that's acceptable in one culture but offensive in another. Or an AI-generated image could unintentionally alienate an entire segment of an audience, even if the image itself isn't offensive. For example, displaying a snowscape to represent a time of year to users in the northern hemisphere wouldn't be appropriate for users in the southern hemisphere, where it's currently summer.
Inclusive design aims to address AI bias by using diverse and representative datasets, and by involving stakeholders from different backgrounds in the design and evaluation process. Additionally, including a diverse range of languages in training data helps AI systems perform well across different linguistic groups.
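One simple, illustrative check along these lines is to measure how each group is represented in a training set before using it. This sketch uses pure Python and hypothetical records tagged with the language of their source text; the record layout and field names are assumptions for the example:

```python
from collections import Counter

# Hypothetical training records, each tagged with its source-text language.
training_records = [
    {"text": "...", "language": "en"},
    {"text": "...", "language": "en"},
    {"text": "...", "language": "en"},
    {"text": "...", "language": "ja"},
    {"text": "...", "language": "de"},
]

def representation_report(records, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(record[key] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

report = representation_report(training_records, "language")
for language, share in sorted(report.items()):
    print(f"{language}: {share:.0%}")
```

A report like this would flag that one language dominates the dataset, prompting the team to gather more data for the underrepresented linguistic groups before training.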