Important
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In this article, you learn how to trace your application with the Azure AI Foundry SDK using your choice of Python, JavaScript, or C#. The SDK provides tracing support based on OpenTelemetry.
The best way to get started with the Azure AI Foundry SDK is by using a project. AI projects connect the data, assets, and services you need to build AI applications, and the AI project client lets you access these components from your code through a single connection string. If you don't have a project yet, first follow the steps to create one.

To enable tracing, first ensure your project has an attached Application Insights resource. Go to the Tracing page of your project in the Azure AI Foundry portal and follow the instructions to create or attach Application Insights. Once attached, you can get the Application Insights connection string and observe the full execution path through Azure Monitor.
Make sure to install the following packages via pip:
pip install opentelemetry-sdk
pip install azure-core-tracing-opentelemetry
pip install azure-monitor-opentelemetry
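Once Application Insights is attached, a minimal sketch for routing traces to it might look like the following. This assumes the connection string is available in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable; you can also retrieve it programmatically from your project.

```python
# A minimal sketch: export OpenTelemetry traces to the Application Insights
# resource attached to your project. Assumes the connection string is
# exposed via the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
import os

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)
```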
Refer to the following samples to get started with tracing using the Azure AI Projects SDK:
Install the azure-ai-inference package with the OpenTelemetry extra using your package manager, like pip:
pip install azure-ai-inference[opentelemetry]
Install the Azure Core OpenTelemetry Tracing plugin, OpenTelemetry, and the OTLP exporter for sending telemetry to your observability backend. To install the necessary packages for Python, use the following pip commands:
pip install opentelemetry-sdk
pip install opentelemetry-exporter-otlp
To learn more about Azure AI Inference SDK for Python and observability, see Tracing via Inference SDK for Python.
To learn more, see the Inference SDK reference.
Add the following configuration settings as needed for your use case: to capture prompts, completions, function names, parameters, and outputs in traces, set the AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED environment variable to true (case insensitive). By default, this content isn't recorded.
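For example, you can set this variable from Python before enabling instrumentation; setting it in your shell or deployment environment works equally well:

```python
import os

# Opt in to recording prompt and completion content in traces.
os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"
```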
To learn more, see Azure Core Tracing OpenTelemetry client library for Python.

The final step is to enable Azure AI Inference instrumentation with the following code snippet:
from azure.ai.inference.tracing import AIInferenceInstrumentor
# Instrument AI Inference API
AIInferenceInstrumentor().instrument()
It's also possible to uninstrument the Azure AI Inference API by using the uninstrument call. After this call, traces are no longer emitted by the Azure AI Inference API until instrument is called again:
AIInferenceInstrumentor().uninstrument()
To trace your own custom functions, you can use OpenTelemetry directly: instrument your code with the OpenTelemetry SDK by setting up a tracer provider and creating spans around the code you want to trace. Each span represents a unit of work, and spans can be nested to form a trace tree. You can add attributes to spans to enrich the trace data with additional context. Once instrumented, configure an exporter to send the trace data to a backend for analysis and visualization. For detailed instructions and advanced usage, refer to the OpenTelemetry documentation. This helps you monitor the performance of your custom functions and gain insights into their execution.
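As a minimal sketch, the following shows one way to wrap a custom function in a span. The function, span name, and attribute names are illustrative, and the console exporter stands in for whatever backend you configure:

```python
# A minimal sketch of tracing a custom function with the OpenTelemetry SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Set up a tracer provider with a console exporter; swap in an OTLP or
# Azure Monitor exporter to send spans to your observability backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def summarize(document: str) -> str:
    # Each span represents one unit of work; attributes add context.
    with tracer.start_as_current_span("summarize") as span:
        span.set_attribute("document.length", len(document))
        # ... your application logic here ...
        return document[:100]
```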
To identify your service via a unique ID in Application Insights, you can use the service name OpenTelemetry property in your trace data. This is particularly useful if you're logging data from multiple applications to the same Application Insights resource and want to differentiate between them. For example, let's say you have two applications, App-1 and App-2, with tracing configured to log data to the same Application Insights resource. Perhaps you'd like App-1 to be evaluated continuously by Relevance and App-2 to be evaluated continuously by Groundedness. You can use the service name to differentiate between the applications in your Online Evaluation configurations.
To set the service name property, you can do so directly in your application code by following the steps in Using multiple tracer providers with different Resource. Alternatively, you can set the OTEL_SERVICE_NAME environment variable prior to deploying your app. To learn more about working with the service name, see OTEL Environment Variables and Service Resource Semantic Conventions.
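As a minimal sketch, setting the service name in code with an OpenTelemetry Resource might look like the following (the name my-app is illustrative):

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider

# Attach a service name so traces from this app are distinguishable in
# Application Insights (surfaced as the cloud_RoleName property).
resource = Resource.create({SERVICE_NAME: "my-app"})
trace.set_tracer_provider(TracerProvider(resource=resource))
```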
To query trace data for a given service name, query for the cloud_RoleName property. If you're using Online Evaluation, add the following line to the KQL query you use within your Online Evaluation setup:
| where cloud_RoleName == "service_name"
You can enable tracing for LangChain that follows OpenTelemetry standards per opentelemetry-instrumentation-langchain. To enable tracing for LangChain, follow these steps:
Install the opentelemetry-instrumentation-langchain package using your package manager, like pip:
pip install opentelemetry-instrumentation-langchain
Once the necessary packages are installed, you can enable tracing as described in Tracing using the Azure AI Foundry project library.
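As a sketch, enabling the instrumentor typically looks like the following once a tracer provider is configured; the class name is per opentelemetry-instrumentation-langchain, so verify it against the version you install:

```python
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

# Emit OpenTelemetry spans for LangChain chains, LLM calls, and tools.
LangchainInstrumentor().instrument()
```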
To attach user feedback to traces and visualize it in the Azure AI Foundry portal using OpenTelemetry's semantic conventions, you can instrument your application to enable tracing and log user feedback. By correlating feedback traces with their respective chat request traces using the response ID, you can view and manage these traces in the Azure AI Foundry portal. OpenTelemetry's specification allows for standardized and enriched trace data, which can be analyzed in the Azure AI Foundry portal for performance optimization and user experience insights. This approach helps you use the full power of OpenTelemetry for enhanced observability in your applications.
To log user feedback, follow this format: the user feedback evaluation event can be captured if and only if the user provided a reaction to the GenAI model response. It SHOULD, when possible, be parented to the GenAI span describing that response.
The event name MUST be gen_ai.evaluation.user_feedback.
| Attribute | Type | Description | Examples | Requirement Level | Stability |
|---|---|---|---|---|---|
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | Required | |
| `gen_ai.evaluation.score` | double | Quantified score calculated based on the user reaction in the [-1.0, 1.0] range, with 0 representing a neutral reaction. | `0.42` | Recommended | |
The user feedback event body has the following structure:
| Body Field | Type | Description | Examples | Requirement Level |
|---|---|---|---|---|
| `comment` | string | Additional details about the user feedback | `"I did not like it"` | Opt-in |
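As a minimal sketch, one way to emit such an event with the OpenTelemetry Python API is shown below. The helper name record_user_feedback and the gen_ai.event.content attribute used to carry the body are assumptions, not part of the convention above:

```python
import json

from opentelemetry import trace

# Hypothetical helper: call while the GenAI response span is current so the
# feedback event is parented to the span describing that response.
def record_user_feedback(response_id: str, score: float, comment: str) -> None:
    span = trace.get_current_span()
    span.add_event(
        "gen_ai.evaluation.user_feedback",
        attributes={
            "gen_ai.response.id": response_id,  # for example, chatcmpl-123
            "gen_ai.evaluation.score": score,   # in the [-1.0, 1.0] range
            # Carrying the opt-in body as a JSON-encoded attribute is an
            # assumption; adapt to how your exporter expects event bodies.
            "gen_ai.event.content": json.dumps({"comment": comment}),
        },
    )
```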