Introduction
The generative AI development lifecycle is an iterative process used by Microsoft’s engineering teams to develop generative AI products and features.
Diagram: the Generative AI Development Lifecycle. The process begins with 'Define your use case' and then flows through the stages map, measure, mitigate, and operate, with govern spanning all of them. A looping arrow returns from the end of the process to the beginning, indicating a continuous, iterative lifecycle.
While each step in the process is essential to building trustworthy generative AI solutions, the measurement phase is critical to iterative development and getting apps into production. Azure AI provides a robust toolkit for evaluating generative AI in a repeatable, transparent way. This module introduces the key concepts of measuring the frequency and severity of various risks in AI systems.
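To make "frequency and severity" concrete before the module covers the built-in metrics, the following minimal Python sketch shows one way per-response risk scores could be aggregated into a defect rate (frequency) and an average severity. The record layout, the 0–7 severity scale, and the threshold are illustrative assumptions for this sketch, not the Azure AI evaluation toolkit's API.

```python
from statistics import mean

# Hypothetical evaluation results: one record per AI response, with a
# severity score (0 = safe, 7 = most severe) assigned by a content-risk
# evaluator. Field names and the 0-7 scale are illustrative assumptions.
results = [
    {"response_id": 1, "risk": "violence", "severity": 0},
    {"response_id": 2, "risk": "violence", "severity": 5},
    {"response_id": 3, "risk": "violence", "severity": 0},
    {"response_id": 4, "risk": "violence", "severity": 2},
]

# A response counts as a defect when its severity crosses a chosen threshold.
SEVERITY_THRESHOLD = 3

defects = [r for r in results if r["severity"] >= SEVERITY_THRESHOLD]

# Frequency: how often the risk appears across the evaluation dataset.
defect_rate = len(defects) / len(results)

# Severity: how harmful the flagged responses are, on average.
avg_severity = mean(r["severity"] for r in defects) if defects else 0.0

print(f"Defect rate: {defect_rate:.0%}")                  # e.g. "Defect rate: 25%"
print(f"Average severity of defects: {avg_severity:.1f}")  # e.g. "5.0"
```

Tracking both numbers matters: a risk that appears rarely but at high severity calls for different mitigations than one that appears often at low severity.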
Learning objectives
By the end of this module, you'll be able to:
- Apply best practices for choosing evaluation data
- Understand the purpose and types of synthetic data for evaluation
- Comprehend the scope of the built-in metrics
- Choose the appropriate metrics based on your AI system use case
- Understand how to interpret evaluation results
Prerequisites
- An Azure subscription – Create one for free
- Familiarity with Azure and the Azure portal