How do I see the logs of my model calls on Azure OpenAI?

Kab 0 Reputation points
2025-07-21T15:32:00.4533333+00:00

Hello!

I've set up Log Analytics in Azure OpenAI and activated Request + Response logging, and can see model calls being made. However, how do I see the logs of my actual model calls, with the details and content of the model output? I'm looking for something as similar to OpenAI's native logs (https://platform.openai.com/logs) as possible.

My current KQL query:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where OperationName == "Completions_Create"
| order by TimeGenerated desc

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.

2 answers

Sort by: Most helpful
  1. Chakaravarthi Rangarajan Bhargavi 1,205 Reputation points MVP
    2025-07-22T16:49:29.2333333+00:00

    Hi Kab,

    Welcome to Microsoft Q&A!

    It's great to hear that you've already enabled Log Analytics and configured Request + Response logging for your Azure OpenAI resource. You're on the right track!

    To view detailed logs of your model calls (inputs and outputs) in a way similar to OpenAI platform logs, here's how you can go deeper:

    Step-by-step: View Model Call Logs with Request & Response Details

    1. Confirm Diagnostic Settings Ensure you have enabled logging to a Log Analytics workspace and selected:
      • AllLogs
      • AuditLogs
      • RequestResponseLogs (this is critical for payload visibility)
      Refer: Enable diagnostic logging
    2. Use KQL to Query Logs You can use the following refined KQL query to retrieve request and response content:

           AzureDiagnostics
           | where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
           | where OperationName == "Completions_Create" or OperationName == "ChatCompletions_Create"
           | project TimeGenerated, Resource, OperationName, ResultType, requestPayload_s, responsePayload_s
           | order by TimeGenerated desc

       > `requestPayload_s` contains the **prompt or chat messages** 
       > `responsePayload_s` shows the **model's generated response**
       
       **Filter or Expand** You can also add filters on `ResultType` or `Resource` to narrow results to successes/failures or to a specific resource.
       
       **View in Azure Monitor or Workbook** For ongoing monitoring, you may pin this query to a workbook or dashboard for better visibility.
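
       If you need the message text itself rather than the raw JSON strings, the payload columns can be unpacked with `parse_json`. A sketch (the exact payload shape depends on your API version, so verify the field paths against your own logs):

       ```kusto
       AzureDiagnostics
       | where OperationName in ("Completions_Create", "ChatCompletions_Create")
       | extend request = parse_json(requestPayload_s), response = parse_json(responsePayload_s)
       | project TimeGenerated,
                 prompt = tostring(request.messages),
                 answer = tostring(response.choices[0].message.content)
       | order by TimeGenerated desc
       ```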
       
    Please note the below items:
    
    Logs may take a few minutes to appear after execution.
    
    If you're not seeing `requestPayload_s` or `responsePayload_s`, double-check that **"RequestResponseLogs"** is enabled in your diagnostics settings.
    
    Some sensitive data may be redacted for compliance.
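
    Once exported, the payload columns are plain JSON strings, so you can post-process them with any JSON tooling. A minimal local sketch (the sample payloads below are illustrative, not real log output — real logs may include additional fields):

```python
import json

# Illustrative examples of what the requestPayload_s / responsePayload_s
# columns might contain (shapes follow the Chat Completions API).
request_payload_s = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
})
response_payload_s = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "Hi there!"}}],
})

request = json.loads(request_payload_s)
response = json.loads(response_payload_s)

# Pull out the last user message and the model's reply
prompt = request["messages"][-1]["content"]
answer = response["choices"][0]["message"]["content"]
print(prompt)   # Hello!
print(answer)   # Hi there!
```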
    
    Please refer to the articles below if they're useful too:
    
    [Supported Logs for Cognitive Services](https://learn.microsoft.com/en-us/azure/azure-monitor/reference/supported-logs/microsoft-cognitiveservices-accounts-logs)
    
    [Log Analytics KQL Reference](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview)
    
    [Azure AI Engineer Certification Guide](https://learn.microsoft.com/en-us/credentials/certifications/azure-ai-engineer)
    
    Feel free to follow up if you'd like help customizing your query further or exporting results! Happy building with Azure OpenAI. 
    
    Regards, 
    
     **Bhargavi Chakaravarthi Rangarajan** 
    
    - If this answer helped you, please consider clicking **"Accept Answer"** and **upvoting** it so others in the community can benefit too.
    
    

  2. Manas Mohanty 13,935 Reputation points Microsoft External Staff Moderator
    2025-08-08T05:57:04.3+00:00

    Hi Kab

    Good day.

    Azure OpenAI also has a "Stored completions" option that records the inputs and outputs of your model calls, which you can use for evaluation or fine-tuning purposes.


    To enable stored completions for your Azure OpenAI deployment, set the `store` parameter to `True`. Use the `metadata` parameter to enrich your stored completion dataset with additional information.

    # Minimal client setup sketch - substitute your own endpoint, key, and an
    # API version that supports stored completions.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
        api_key="YOUR-API-KEY",
        api_version="2025-02-01-preview",
    )

    completion = client.chat.completions.create(
        model="gpt-4o",  # replace with your model deployment name
        store=True,      # persist this call as a stored completion
        metadata={
            "user": "admin",
            "category": "docs-test",
        },
        messages=[
            {"role": "system", "content": "Provide a clear and concise summary of the technical content, highlighting key concepts and their relationships. Focus on the main ideas and practical implications."},
            {"role": "user", "content": "Ensemble methods combine multiple machine learning models to create a more robust and accurate predictor. Common techniques include bagging (training models on random subsets of data), boosting (sequentially training models to correct previous errors), and stacking (using a meta-model to combine base model predictions). Random Forests, a popular bagging method, create multiple decision trees using random feature subsets. Gradient Boosting builds trees sequentially, with each tree focusing on correcting the errors of previous trees. These methods often achieve better performance than single models by reducing overfitting and variance while capturing different aspects of the data."}
        ]
    )

    Reference used - https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/stored-completions?tabs=python-secure

    Hope this eases the work of searching chat completion logs.

    Thank you.

