Operationalize generative AI applications (GenAIOps)

Learn how to operationalize generative AI applications using the complete GenAIOps lifecycle. This learning path covers planning and preparing GenAIOps solutions, managing prompts for agents with version control, evaluating and optimizing agents through structured experiments, automating evaluations with Microsoft Foundry and GitHub Actions, monitoring application performance and costs, and implementing distributed tracing to debug complex AI workflows.

Prerequisites

Before starting this learning path, you should be familiar with fundamental generative AI concepts and services in Azure. Consider completing the Microsoft Azure AI Fundamentals: Generative AI learning path first.

Modules in this learning path

Learn how to develop chat applications with language models using a code-first development approach. By developing generative AI apps code-first, you can create robust, reproducible flows that are integral to GenAIOps.
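As a rough illustration of what a code-first chat flow looks like, here is a minimal sketch. The `model_fn` parameter and the `echo_model` stub are assumptions for local testing; a real application would call a hosted model through its provider SDK instead.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """Keeps the running message history so each turn has full context."""
    system_prompt: str
    messages: list = field(default_factory=list)

    def ask(self, user_input: str, model_fn) -> str:
        # Build a provider-style message list: system prompt first,
        # then the accumulated conversation turns, then the new input.
        payload = [{"role": "system", "content": self.system_prompt}]
        payload += self.messages + [{"role": "user", "content": user_input}]
        reply = model_fn(payload)  # stand-in for a real model SDK call
        self.messages += [
            {"role": "user", "content": user_input},
            {"role": "assistant", "content": reply},
        ]
        return reply

# Stub model for local testing; echoes the last user message.
def echo_model(messages):
    return f"echo: {messages[-1]['content']}"

session = ChatSession(system_prompt="You are a helpful assistant.")
print(session.ask("Hello", echo_model))  # echo: Hello
```

Because the flow is plain code, it can be committed, reviewed, and re-run identically, which is what makes it reproducible.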

Learn how to manage AI prompts as versioned assets using GitHub. Apply software engineering best practices to create, test, and promote prompt versions used in Microsoft Foundry as part of a GenAIOps workflow.
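One common way to treat prompts as versioned assets is to store each version as a file in a Git-tracked folder and resolve the latest version at load time. The file-naming scheme below (`name.vMAJOR.MINOR.PATCH.txt`) is an assumption for illustration, not a Foundry convention.

```python
import re
import tempfile
from pathlib import Path

def latest_prompt(prompt_dir: Path) -> Path:
    """Pick the highest semantic version among files like summarize.v1.2.0.txt."""
    pattern = re.compile(r"\.v(\d+)\.(\d+)\.(\d+)\.txt$")
    versioned = []
    for path in prompt_dir.glob("*.txt"):
        m = pattern.search(path.name)
        if m:
            # Compare versions numerically, not lexically (v0.10.0 > v0.9.0).
            versioned.append((tuple(int(g) for g in m.groups()), path))
    if not versioned:
        raise FileNotFoundError("no versioned prompts found")
    return max(versioned)[1]

# Demo with a throwaway directory standing in for a Git-tracked prompts/ folder.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "summarize.v1.0.0.txt").write_text("Summarize: {input}")
    (root / "summarize.v1.2.0.txt").write_text("Summarize briefly: {input}")
    print(latest_prompt(root).name)  # summarize.v1.2.0.txt
```

With prompts in files, promoting a new version is an ordinary pull request: reviewed, diffed, and revertible like any other code change.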

Learn how to optimize AI agents through structured evaluation that transforms guesswork into evidence-based engineering decisions. You'll explore how to design evaluation experiments with clear metrics for quality, cost, and performance; organize experiments using Git-based workflows; create evaluation rubrics for consistent scoring; and compare results to make informed optimization decisions.
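The comparison step can be sketched as follows: summarize each experiment run into quality, cost, and latency metrics, then apply an explicit decision rule. The run names, rubric scale, and quality floor here are illustrative assumptions.

```python
from statistics import mean

def summarize_run(name, records):
    """Collapse per-sample results into the metrics an experiment tracks."""
    latencies = sorted(r["latency_s"] for r in records)
    return {
        "name": name,
        "quality": mean(r["rubric_score"] for r in records),  # 0-5 rubric scale
        "cost_usd": sum(r["cost_usd"] for r in records),
        "p95_latency_s": latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))],
    }

def pick_winner(runs, quality_floor=3.5):
    """Among runs meeting the quality floor, prefer the cheapest."""
    viable = [r for r in runs if r["quality"] >= quality_floor]
    return min(viable, key=lambda r: r["cost_usd"]) if viable else None

baseline = summarize_run("agent-v1", [
    {"rubric_score": 4.5, "cost_usd": 0.020, "latency_s": 2.1},
    {"rubric_score": 4.0, "cost_usd": 0.022, "latency_s": 2.4},
])
candidate = summarize_run("agent-v2", [
    {"rubric_score": 3.8, "cost_usd": 0.004, "latency_s": 0.9},
    {"rubric_score": 3.6, "cost_usd": 0.005, "latency_s": 1.1},
])
print(pick_winner([baseline, candidate])["name"])  # agent-v2
```

Making the decision rule explicit code is the point: the trade-off between quality and cost becomes a reviewable artifact instead of a judgment call.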

Learn how to implement automated evaluations for AI agent responses using Microsoft Foundry evaluators, create evaluation datasets from production data and synthetic generation, run batch evaluations with Python scripts, and integrate evaluation workflows into GitHub Actions for continuous quality assurance.
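A batch evaluation script follows a simple shape: run the agent over a dataset, score each response, and report an aggregate pass rate. The keyword-match evaluator below is a toy stand-in for a hosted Foundry evaluator, and the dataset and canned responses are invented for the demo.

```python
def keyword_evaluator(response: str, expected_keywords) -> float:
    """Toy evaluator: fraction of expected keywords present in the response."""
    hits = sum(1 for k in expected_keywords if k.lower() in response.lower())
    return hits / len(expected_keywords)

def run_batch(dataset, agent_fn, evaluator, threshold=0.8):
    """Score every row, then report how many scores clear the threshold."""
    scores = [
        evaluator(agent_fn(row["query"]), row["expected_keywords"])
        for row in dataset
    ]
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    return {"scores": scores, "pass_rate": pass_rate}

dataset = [
    {"query": "reset password", "expected_keywords": ["reset", "link"]},
    {"query": "billing cycle", "expected_keywords": ["monthly"]},
]
# Canned responses stand in for a live agent during this demo.
canned = {
    "reset password": "Use the reset link we email you.",
    "billing cycle": "Billing runs monthly.",
}
result = run_batch(dataset, canned.get, keyword_evaluator)
print(result["pass_rate"])  # 1.0
```

In a GitHub Actions job, a script like this could exit nonzero when `pass_rate` falls below a target, so a quality regression blocks the merge.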

Learn how to monitor the performance of your generative AI application using Microsoft Foundry. This module teaches you to track key metrics like latency and token usage to make informed, cost-effective deployment decisions.
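The cost and latency side of monitoring reduces to simple arithmetic over per-request telemetry. The per-1K-token prices below are illustrative assumptions; real rates depend on the model and deployment you choose.

```python
from statistics import mean

# Illustrative per-1K-token prices; actual rates vary by model and deployment.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def usage_report(requests):
    """Aggregate token spend and latency from per-request telemetry records."""
    cost = sum(
        r["input_tokens"] / 1000 * PRICE_PER_1K["input"]
        + r["output_tokens"] / 1000 * PRICE_PER_1K["output"]
        for r in requests
    )
    latencies = sorted(r["latency_s"] for r in requests)
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    return {
        "total_cost_usd": round(cost, 6),
        "avg_latency_s": mean(latencies),
        "p95_latency_s": p95,
    }

requests = [
    {"input_tokens": 1200, "output_tokens": 300, "latency_s": 0.8},
    {"input_tokens": 900, "output_tokens": 450, "latency_s": 1.4},
]
print(usage_report(requests))
```

Tracking these numbers per deployment makes trade-offs concrete: a cheaper model that doubles p95 latency may still be the wrong choice for an interactive app.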

Learn how to implement tracing in your generative AI applications using Microsoft Foundry and OpenTelemetry. This module teaches you to capture detailed execution flows, debug complex workflows, and understand application behavior for better reliability and optimization.
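To show the idea behind distributed tracing without any SDK dependency, here is a deliberately simplified stand-in for an OpenTelemetry tracer: nested, timed spans linked by parent IDs. Real applications would use the OpenTelemetry SDK and export spans to Microsoft Foundry rather than hand-rolling this.

```python
import time
import uuid
from contextlib import contextmanager

class Tracer:
    """Minimal stand-in for an OpenTelemetry tracer: records nested, timed spans."""
    def __init__(self):
        self.spans = []   # finished spans, innermost first
        self._stack = []  # currently open spans

    @contextmanager
    def span(self, name):
        record = {
            "name": name,
            "span_id": uuid.uuid4().hex[:16],
            # Link to the enclosing span so the trace forms a tree.
            "parent_id": self._stack[-1]["span_id"] if self._stack else None,
            "start": time.perf_counter(),
        }
        self._stack.append(record)
        try:
            yield record
        finally:
            record["duration_s"] = time.perf_counter() - record["start"]
            self._stack.pop()
            self.spans.append(record)

tracer = Tracer()
with tracer.span("handle_request"):
    with tracer.span("retrieve_context"):
        time.sleep(0.01)  # simulated retrieval step
    with tracer.span("call_model"):
        time.sleep(0.01)  # simulated model call

for s in tracer.spans:
    print(s["name"], "root" if s["parent_id"] is None else "child")
```

The parent links are what let a trace viewer reconstruct the full execution tree, so you can see exactly which step of a multi-stage workflow consumed the time.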