Microsoft AI Foundry tracing - prompts (input & output)

Patricia Torok (SK) 0 Reputation points
2025-12-02T14:46:11.54+00:00

Hi, I'm testing Microsoft Foundry. I want to use the tracing feature and observe my application, which utilizes LangGraph v1.

I tested the tracing functionality using the following documentation: https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/develop/trace-agents-sdk?view=foundry-classic#integrations . However, the prompt data seems incomplete: some prompt fields in the traces are blank.

Question 1: Is this normal behavior?

Question 2: What are those Http calls?

I'm using Python 3.12 with the following libraries:

  • langchain-azure-ai==1.0.3
  • langchain-core==1.1.0
  • langchain-openai==1.1.0
  • langgraph==1.0.4
  • azure-identity==1.25.1

1 answer

  1. SRILAKSHMI C 10,805 Reputation points Microsoft External Staff Moderator
    2025-12-03T07:15:49.3466667+00:00

    Hello Patricia Torok (SK),

    Welcome to Microsoft Q&A, and thank you for reaching out.

    I understand that you're encountering some issues with the tracing feature in Microsoft Foundry while using LangGraph v1.

    Let’s break down your questions:

    Question 1: Is this normal behavior?

    Blank prompt fields can have a few causes. When tracing is set up correctly, you should typically see prompt and completion content populated on the spans. A few things to check:

    • Ensure that you've linked your Application Insights correctly in the project’s Monitor settings.
    • Verify that the credentials and environment variables (like APPLICATIONINSIGHTS_CONNECTION_STRING and AZURE_OPENAI_ENDPOINT) are set up properly.
    • Check if you are using both server-side and client-side tracing. If you're only using server-side tracing, it may not capture all the necessary details, leading to incomplete spans.
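A common cause of blank prompt fields is that message-content recording is disabled by default for privacy reasons, so spans are emitted without the prompt text. A minimal sketch of enabling it before the tracer is initialized (the exact environment-variable names depend on your instrumentation version, so treat them as assumptions to verify against the tracing docs):

```python
import os

# Assumption: content recording is off by default; these flags must be set
# BEFORE any tracer/instrumentation is initialized, or spans will be
# emitted without prompt/completion text.
os.environ["AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED"] = "true"
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

# Then wire traces to Application Insights. Requires the
# azure-monitor-opentelemetry package; skipped gracefully if missing.
try:
    from azure.monitor.opentelemetry import configure_azure_monitor

    configure_azure_monitor(
        connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
    )
except (ImportError, KeyError):
    pass  # package or connection string not available in this environment
```

If the content-recording flags are set after instrumentation starts, they have no effect, which matches the symptom of otherwise healthy traces with empty prompt fields.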

    Question 2: What are those HTTP calls?

    The HTTP calls in the tracing context usually refer to the interactions between your application, the Azure AI services, and the tracing infrastructure (like Application Insights). For details on those calls, you can:

    • Enable diagnostic logging on your Azure OpenAI resource (or on API Management, if you route calls through it) to view the requests being sent and received.
    • Use the tracing capabilities to monitor the spans and identify the specific HTTP calls that are being made during your application’s processing.
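As a quick local way to see which HTTP calls are being made, you can raise the log level on the HTTP layers the SDKs use; a sketch assuming the Azure SDK (azure.core) and the httpx-based OpenAI client:

```python
import logging

# DEBUG logging on these loggers prints each outgoing request (method,
# URL, sanitized headers), which makes the otherwise opaque HTTP spans
# in the trace easy to correlate with actual calls.
logging.basicConfig(level=logging.INFO)
for name in ("azure.core.pipeline.policies.http_logging_policy", "httpx"):
    logging.getLogger(name).setLevel(logging.DEBUG)
```

Typically you will see calls to your Azure OpenAI endpoint, to the identity endpoints used by azure-identity for token acquisition, and to the Application Insights ingestion endpoint exporting the spans themselves.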

    Suggested Steps to Troubleshoot:

    1. Review your Tracing Setup: Go through the setup steps in the relevant documentation and make sure everything is configured correctly.
    2. Check Logs: Look at the logs in Application Insights to see if there are any errors or missed data that might explain the blank prompts.
    3. Testing Configuration: Run some test queries to see if the tracing behavior changes and provides the expected prompts.
    4. Documentation Reference: Use the tracing integration guide to make sure you have implemented everything as expected.
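For step 3, a minimal local smoke test (assuming opentelemetry-sdk is installed alongside langchain-azure-ai) can confirm that spans carry prompt content before anything reaches Application Insights; the span name, model, and attributes below are placeholders loosely following the OpenTelemetry GenAI semantic conventions:

```python
# Emit one span with a GenAI-style attribute and print it locally with a
# ConsoleSpanExporter, to verify content capture independently of the
# Application Insights pipeline.
try:
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        ConsoleSpanExporter,
        SimpleSpanProcessor,
    )

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("tracing-smoke-test")
    with tracer.start_as_current_span("chat gpt-4o") as span:  # placeholder name
        span.set_attribute("gen_ai.system", "az.ai.openai")
        span.set_attribute("gen_ai.prompt", "Hello, world")  # should NOT be blank
    smoke_test_ran = True
except ImportError:
    smoke_test_ran = False  # opentelemetry-sdk not installed
```

If this local span shows the prompt but the Foundry portal does not, the gap is in the exporter/content-recording configuration rather than in your LangGraph application.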

    Please refer to the tracing documentation linked in your question for the full setup steps.

    I hope this helps. Do let me know if you have any further queries.


    If this answers your query, please click "Accept Answer" and select "Yes" for "Was this answer helpful".

    Thank you!

