AI-Foundry Tracing Issue

Daun (Daniel) Lee 190 Reputation points
2025-11-25T01:09:35.8366667+00:00

Hi, I am testing AI Foundry Tracing (similar to LangSmith). Since I am using LangChain, I am trying to integrate tracing as described here:

https://ai.azure.com/nextgen/r/s8RkQsLiTkCIVp2k-URInA,00_AI_RG,,KT-OpenAI,KT-OpenAI-project/docs/observability/how-to/trace-agent-framework

Question 1: This is my result. Why is every LLM call shown as a separate trace instead of being grouped?

Question 2: How can I change the displayed name "chat gpt-4.1-mini-2025-04-14"?

This is what I did.

  1. Install the packages

pip install langchain-azure-ai azure-monitor-opentelemetry-exporter rich azure-monitor-opentelemetry --no-deps

  2. Declare the tracer

import os

from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

# Connection string copied from the AI Foundry portal
conn = os.environ.get("APPLICATION_INSIGHTS_CONNECTION_STRING")
print(conn)

azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=conn,
    enable_content_recording=True,
    name="Weather information agent",
)

tracers = [azure_tracer]

  3. Add the tracer to the model (I tried the 1.0-version method in the docs but it didn't work, so I fell back to the 0.3-version approach; at least the logs now appear in the AI Foundry portal.)

import os

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    model=os.environ["MODEL_NAME"],
    azure_deployment=os.environ["MODEL_NAME"],
    azure_endpoint=os.environ["END_POINT"],
    openai_api_version="2025-03-01-preview",
    openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
    callbacks=[azure_tracer],
)

llm.invoke("Hello")

My results: [screenshot: conversation_my2], [screenshot: conversation_my]

Documentation screenshot: [screenshot: conversation]

Foundry Tools

Formerly known as Azure AI Services or Azure Cognitive Services, this is a unified collection of prebuilt AI capabilities within the Microsoft Foundry platform.


Answer accepted by question author

  1. Anshika Varshney 9,905 Reputation points Microsoft External Staff Moderator
    2025-11-25T10:34:54.4733333+00:00

    Hi Daun (Daniel) Lee,

    Thanks for sharing the details!

    Separate traces per LLM call – This happens because llm.invoke() is treated as an individual run in LangChain, so AI Foundry logs each call separately. To group traces, wrap your logic in a parent span or use a full chain/agent instead of calling the raw model directly.

    Changing the name – The name shown (e.g., chat gpt-4.1-mini-2025-04-14) comes from your Azure OpenAI deployment name. To update it, rename or recreate the deployment with the desired name in Azure.

    For more guidance, check the official docs: Trace AI Agents in Azure AI Foundry

    Please let me know if you have any remaining questions or need additional details; I'll be glad to provide further clarification or guidance.
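
    The "wrap your logic in a parent span" suggestion can be sketched as follows. This is a minimal sketch, not the official Foundry pattern: it assumes the OpenTelemetry SDK has already been configured (for example by the `AzureAIOpenTelemetryTracer` setup from the question), `llm` is the `AzureChatOpenAI` instance defined above, and `run_conversation` is a hypothetical helper name.

    ```python
    from opentelemetry import trace

    # A named tracer for our application code (the name is arbitrary).
    tracer = trace.get_tracer("weather-agent")

    def run_conversation(llm):
        # Every LLM call made inside this span shares the same trace id,
        # so AI Foundry groups them into one conversation instead of
        # logging each llm.invoke() as a separate top-level trace.
        with tracer.start_as_current_span("Weather information agent"):
            llm.invoke("Hello")
            llm.invoke("What's the weather in Seoul?")
    ```

    The same effect falls out naturally if you invoke a chain or agent rather than the raw model, because the chain run itself becomes the parent span.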


Answer accepted by question author

  1. Adam Zachary 2,265 Reputation points
    2025-11-25T01:34:05.3666667+00:00

    I’ve run into this before, and both issues come from how you’re calling the LLM.

    • You see separate traces because you’re calling llm.invoke() directly. LangChain treats each call as its own run, so Foundry logs each LLM call as a separate trace. If you want a single grouped trace, you need to wrap your logic in a parent span or call an actual chain/agent instead of the raw model.
    • The name “chat gpt-4.1-mini-2025-04-14” comes from the Azure OpenAI deployment name. Foundry is just showing whatever deployment name your code is using. To change it, rename or recreate the deployment with a different name.
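
    Recreating the deployment under a friendlier name can be sketched with the Azure CLI. This is an illustrative sketch only: the resource group and account names below (`00_AI_RG`, `KT-OpenAI`) are taken from the portal URL in the question and may not match your setup, and `weather-agent-llm` is a placeholder deployment name.

    ```shell
    # Create a new deployment of the same model under a custom name;
    # Foundry traces will then show "weather-agent-llm" instead of the
    # auto-generated model-based name.
    az cognitiveservices account deployment create \
      --resource-group 00_AI_RG \
      --name KT-OpenAI \
      --deployment-name weather-agent-llm \
      --model-name gpt-4.1-mini \
      --model-version "2025-04-14" \
      --model-format OpenAI \
      --sku-name Standard \
      --sku-capacity 1
    ```

    After creating it, point `MODEL_NAME` (used for `azure_deployment` in the question's code) at the new deployment name.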

2 additional answers

  1. Deleted

    This answer has been deleted due to a violation of our Code of Conduct. The answer was manually reported or identified through automated detection before action was taken. Please refer to our Code of Conduct for more information.


    Comments have been turned off.

  2. Deleted

    This answer has been deleted due to a violation of our Code of Conduct. The answer was manually reported or identified through automated detection before action was taken. Please refer to our Code of Conduct for more information.


    Comments have been turned off.
