The Microsoft.Extensions.AI.Evaluation libraries (currently in preview) simplify the process of evaluating the quality and accuracy of responses generated by AI models in .NET intelligent apps. Various metrics measure aspects like relevance, truthfulness, coherence, and completeness of the responses. Evaluations are crucial in testing, because they help ensure that the AI model performs as expected and provides reliable and accurate results.
The evaluation libraries, which are built on top of the Microsoft.Extensions.AI abstractions, are composed of the following NuGet packages:

- Microsoft.Extensions.AI.Evaluation – the core abstractions and types for defining evaluators and metrics.
- Microsoft.Extensions.AI.Evaluation.Quality – the built-in quality evaluators, such as RelevanceTruthAndCompletenessEvaluator and CoherenceEvaluator.
- Microsoft.Extensions.AI.Evaluation.Reporting – support for response caching, storing evaluation results, and generating reports.
- Microsoft.Extensions.AI.Evaluation.Console – a command-line tool for generating reports and managing evaluation data.
The libraries are designed to integrate smoothly with existing .NET apps, allowing you to leverage existing testing infrastructures and familiar syntax to evaluate intelligent apps. You can use any test framework (for example, MSTest, xUnit, or NUnit) and testing workflow (for example, Test Explorer, dotnet test, or a CI/CD pipeline). The library also provides easy ways to do online evaluations of your application by publishing evaluation scores to telemetry and monitoring dashboards.
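For example, a quality evaluation can run inside an ordinary unit test. The following minimal sketch uses xUnit and the CoherenceEvaluator from the Quality package; the GetChatClient helper is a stand-in for however your app obtains its IChatClient, and member names such as GetResponseAsync, the ChatConfiguration constructor, and the EvaluateAsync overload reflect the preview API at the time of writing, so they might differ in later previews.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Xunit;

public class ResponseQualityTests
{
    // Hypothetical helper: returns the IChatClient your app already configures
    // (for example, an Azure OpenAI chat client exposed through Microsoft.Extensions.AI).
    private static readonly IChatClient s_chatClient = TestSetup.GetChatClient();

    [Fact]
    public async Task Response_is_coherent()
    {
        // The evaluator needs a ChatConfiguration because it uses an LLM to score the response.
        var chatConfiguration = new ChatConfiguration(s_chatClient);

        var messages = new List<ChatMessage>
        {
            new(ChatRole.User, "Explain dependency injection in one short paragraph.")
        };
        ChatResponse response = await s_chatClient.GetResponseAsync(messages);

        IEvaluator evaluator = new CoherenceEvaluator();
        EvaluationResult result = await evaluator.EvaluateAsync(messages, response, chatConfiguration);

        // CoherenceEvaluator produces a numeric score (1-5); fail the test if it's too low.
        NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
        Assert.True(coherence.Value >= 4);
    }
}
```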
The evaluation libraries were built in collaboration with data science researchers from Microsoft and GitHub, and were tested on popular Microsoft Copilot experiences. The following table shows the built-in evaluators.
| Metric | Description | Evaluator type |
|--------|-------------|----------------|
| Relevance, truth, and completeness | How effectively a response addresses a query | RelevanceTruthAndCompletenessEvaluator |
| Fluency | Grammatical accuracy, vocabulary range, sentence complexity, and overall readability | FluencyEvaluator |
| Coherence | The logical and orderly presentation of ideas | CoherenceEvaluator |
| Equivalence | The similarity between the generated text and its ground truth with respect to a query | EquivalenceEvaluator |
| Groundedness | How well a generated response aligns with the given context | GroundednessEvaluator |
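The built-in evaluators can also be combined so that a single call produces all of the metrics you care about. The following sketch assumes the CompositeEvaluator type and the static metric-name constants (for example, FluencyEvaluator.FluencyMetricName) exposed by the preview Quality package; the inputs are the same kinds of values used in the earlier test example.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

static class QualityEvaluation
{
    // Runs several built-in evaluators over one response and returns the combined result.
    public static async Task<EvaluationResult> EvaluateAsync(
        IList<ChatMessage> messages,         // the conversation that was sent to the model
        ChatResponse response,               // the model's answer to score
        ChatConfiguration chatConfiguration) // wraps the IChatClient that the evaluators use
    {
        IEvaluator evaluator = new CompositeEvaluator(
            new RelevanceTruthAndCompletenessEvaluator(),
            new CoherenceEvaluator(),
            new FluencyEvaluator());

        EvaluationResult result = await evaluator.EvaluateAsync(messages, response, chatConfiguration);

        // Each evaluator contributes one or more named metrics (typically 1-5 numeric scores).
        NumericMetric fluency = result.Get<NumericMetric>(FluencyEvaluator.FluencyMetricName);
        Console.WriteLine($"Fluency: {fluency.Value}");

        return result;
    }
}
```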
You can also add your own custom evaluations by implementing the IEvaluator interface or by extending base classes such as ChatConversationEvaluator and SingleNumericMetricEvaluator.
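As a rough illustration, a custom evaluator doesn't have to call an LLM at all. The following sketch implements IEvaluator to report a simple word-count metric; WordCountEvaluator is purely illustrative, and the interface member signatures shown here follow the current preview, so they might change in later versions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;

// Example-only evaluator: reports how many words the response contains, with no LLM call.
public class WordCountEvaluator : IEvaluator
{
    public const string WordCountMetricName = "Word Count";

    public IReadOnlyCollection<string> EvaluationMetricNames => new[] { WordCountMetricName };

    public ValueTask<EvaluationResult> EvaluateAsync(
        IEnumerable<ChatMessage> messages,
        ChatResponse modelResponse,
        ChatConfiguration? chatConfiguration = null,
        IEnumerable<EvaluationContext>? additionalContext = null,
        CancellationToken cancellationToken = default)
    {
        string text = modelResponse.Text ?? string.Empty;
        int wordCount = text.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;

        // Wrap the count in a NumericMetric so it appears alongside the built-in metrics.
        var metric = new NumericMetric(WordCountMetricName, wordCount);
        return new ValueTask<EvaluationResult>(new EvaluationResult(metric));
    }
}
```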
The library provides response caching functionality, which means responses from the AI model are persisted in a cache. In subsequent runs, if the request parameters (prompt and model) are unchanged, responses are served from the cache, enabling faster execution and lower cost.
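Caching is typically enabled through a reporting configuration, which also determines where evaluation results are stored. The following sketch assumes the DiskBasedReportingConfiguration factory and ScenarioRun type from the Reporting package; the storage path and scenario name are placeholders, and the parameter order and names reflect the preview at the time of writing.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

static class ScenarioRunner
{
    public static async Task RunAsync(ChatConfiguration chatConfiguration)
    {
        // Store results and cached LLM responses on disk so the reporting tooling can read them.
        ReportingConfiguration reportingConfiguration = DiskBasedReportingConfiguration.Create(
            "./eval-results",                                                    // placeholder storage path
            new IEvaluator[] { new CoherenceEvaluator(), new FluencyEvaluator() },
            chatConfiguration,
            enableResponseCaching: true);                                        // serve unchanged requests from cache

        await using ScenarioRun scenarioRun =
            await reportingConfiguration.CreateScenarioRunAsync("Explain dependency injection");

        var messages = new List<ChatMessage>
        {
            new(ChatRole.User, "Explain dependency injection in one short paragraph.")
        };

        // Use the chat client exposed by the scenario run so responses flow through the cache.
        IChatClient chatClient = scenarioRun.ChatConfiguration!.ChatClient;
        ChatResponse response = await chatClient.GetResponseAsync(messages);

        // Scores are written to the result store and appear in the generated report.
        await scenarioRun.EvaluateAsync(messages, response);
    }
}
```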
The library contains support for storing evaluation results and generating reports. The following image shows an example report in an Azure DevOps pipeline:
The dotnet aieval tool, which ships as part of the Microsoft.Extensions.AI.Evaluation.Console package, also includes functionality for generating reports and managing the stored evaluation data and cached responses.
The libraries are designed to be flexible, so you can pick just the components you need. For example, you can disable response caching or tailor reporting to fit your environment, and you can customize and configure your evaluations, such as by adding custom metrics and reporting options.
For a more comprehensive tour of the functionality and APIs available in the Microsoft.Extensions.AI.Evaluation libraries, see the API usage examples (dotnet/ai-samples repo). These examples are structured as a collection of unit tests. Each unit test showcases a specific concept or API and builds on the concepts and APIs showcased in previous unit tests.