Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence systems in a way that is ethical, transparent, and aligned with human values. It emphasizes fairness, accountability, privacy, and safety to ensure that AI technologies benefit individuals and society as a whole. As AI becomes increasingly integrated into applications and decision-making processes, prioritizing responsible AI is essential.
Microsoft has identified six principles for responsible AI:
- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
If you're building an AI app with .NET, the 📦 Microsoft.Extensions.AI.Evaluation.Safety package provides evaluators to help ensure that the text and image responses your app generates meet the standards for responsible AI. The evaluators can also detect problematic content in user input. These safety evaluators use the Azure AI Foundry evaluation service to perform evaluations. They include metrics for hate and unfairness, groundedness, ungrounded inference of human attributes, and the presence of:
- Protected material
- Self-harm content
- Sexual content
- Violent content
- Vulnerable code (text-based only)
- Indirect attacks (text-based only)
For more information about the safety evaluators, see Safety evaluators. To get started with the Microsoft.Extensions.AI.Evaluation.Safety evaluators, see Tutorial: Evaluate response safety with caching and reporting.
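To give a sense of how these evaluators fit together, the following is a minimal sketch of running a single safety evaluator against a model response. It assumes you have an Azure AI Foundry project to back the evaluation service; the subscription, resource group, and project name placeholders are hypothetical values you'd replace with your own, and exact type or parameter names may differ slightly across package versions, so consult the linked tutorial for authoritative usage.

```csharp
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Configure the connection to the Azure AI Foundry evaluation service.
// These identifiers are placeholders — substitute your own project's values.
var serviceConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<your-subscription-id>",
    resourceGroupName: "<your-resource-group>",
    projectName: "<your-ai-foundry-project>");

// Pick a safety evaluator — for example, the one that checks for
// violent content. Other evaluators in the package follow the same shape.
IEvaluator violenceEvaluator = new ViolenceEvaluator();

// The conversation to evaluate: the user's message plus the model's response.
var messages = new[] { new ChatMessage(ChatRole.User, "Tell me a story.") };
var modelResponse = new ChatResponse(
    new ChatMessage(ChatRole.Assistant, "Once upon a time..."));

// Run the evaluation. The safety evaluators route the content through the
// Azure AI Foundry evaluation service configured above.
EvaluationResult result = await violenceEvaluator.EvaluateAsync(
    messages,
    modelResponse,
    chatConfiguration: serviceConfiguration.ToChatConfiguration());
```

The result contains named metrics (with severity scores and interpretations) that you can inspect programmatically or surface through the evaluation reporting tooling described in the tutorial.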