How can we access detailed agent/system logs for troubleshooting issues inside Managed DevOps Pools?

Praveen kumar Singh 0 Reputation points
2026-04-08T14:50:30.2733333+00:00

We are using Managed DevOps Pools (MDP) in Azure Pipelines and have reviewed the documentation, including:

  • Troubleshoot Managed DevOps Pools issues
  • MDP FAQ
  • Script for validating MDP pool connectivity

These resources are helpful, but they do not explain how to access detailed agent logs when debugging issues inside an MDP-run pipeline.

Is there an official or supported method to view, collect, or export detailed agent/system logs from Managed DevOps Pools for troubleshooting?

Specifically, we want to know:

  1. Are there hidden diagnostic logs that can be downloaded from Azure DevOps?
  2. Does MDP store internal logs that customers can request from Microsoft Support?
  3. Is there an equivalent to the _diag folder used in self-hosted agents?
  4. Are there upcoming features that will expose deeper logging for MDP?

The lack of accessible agent logs makes it difficult to troubleshoot:

  • Authentication issues
  • Task/tool failures
  • Network/endpoint connectivity
  • Environment configuration issues
  • Internal agent exceptions

We are not looking for logs from custom tools or internal scripts — only the agent/environment logs that help understand failures occurring inside the managed pool.

Since MDP is now GA and recommended over VMSS agents, we want to understand what diagnostic capabilities are available.

Azure DevOps

2 answers

  1. Siddhesh Desai 6,310 Reputation points Microsoft External Staff Moderator
    2026-04-08T16:25:57.04+00:00

    Hi @Praveen kumar Singh

    Thank you for reaching out to Microsoft Q&A.

    Managed DevOps Pools (MDP) are designed as a fully managed service where the build agents run in Microsoft‑owned infrastructure rather than in the customer's Azure subscription. Because of this design, customers do not have OS‑level or agent‑service access to the underlying machines. As a result, traditional self‑hosted agent diagnostics such as the _diag folder or raw agent/system logs are not exposed. There are no hidden or downloadable agent logs in Azure DevOps for MDP, and there is no supported mechanism to export internal agent or OS logs. This limitation is intentional and aligns with the managed nature of the service: only supported, surfaced diagnostics are available to customers.

    The following points describe the supported diagnostics and the available workarounds:

    Enable verbose pipeline and agent diagnostics

    You can enable the deepest supported logging by turning on system diagnostics for the pipeline run.

    • From the pipeline UI: Run pipeline > Enable system diagnostics > Run
    • Or in YAML, add:
      variables:
        system.debug: true

    This enables verbose task and agent logging, which is the maximum level of agent visibility supported in Managed DevOps Pools.
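
    For context, here is a minimal sketch of a complete pipeline with this variable set; the pool name 'MyManagedPool' is a hypothetical placeholder for your own Managed DevOps Pool:

      trigger: none

      # 'MyManagedPool' is a hypothetical pool name; substitute your own MDP pool.
      pool: MyManagedPool

      variables:
        system.debug: true   # verbose task logs; also sets Agent.Diagnostic=true

      steps:
      - script: echo "This run emits verbose diagnostics."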

    Use Azure Monitor diagnostic settings for the Managed DevOps Pool resource

    Managed DevOps Pools support Azure resource diagnostic logs, but only for pool‑level and provisioning information.

    • Configure this in Azure Portal > Managed DevOps Pool > Monitoring > Diagnostic settings
    • Available logs include Provisioning Logs and Metrics. These logs help with agent provisioning, scaling, and pool lifecycle troubleshooting, but they do not include job‑execution or agent OS logs.

    Understand the limitation compared to self‑hosted agents

    There is no equivalent of the _diag folder in Managed DevOps Pools. Agent OS logs, network stack logs, and internal agent service logs are not customer‑accessible in MDP. This behavior is by design due to the agents running in Microsoft‑managed infrastructure.

    Engage Microsoft Support for suspected platform issues

    If you suspect a service‑side issue, you can open a Microsoft Support ticket and share pipeline run IDs and timestamps. Microsoft Support can review internal telemetry, but internal logs are not exported or shared with customers.

    Consider self‑hosted or VMSS agents if deep diagnostics are required

    If your scenarios require full OS‑level access, deep network troubleshooting, or agent service logs, the recommended workaround is to use self‑hosted agents or VM Scale Set agents instead of Managed DevOps Pools.


  2. Q&A Assist
    2026-04-08T14:51:21.0233333+00:00
    AI generated content. This answer was created with AI from Q&A Assist.

    Managed DevOps Pools use the standard Azure Pipelines agent and pipeline logging model. There is no separate, hidden, or special customer-accessible agent log store for Managed DevOps Pools beyond what Azure Pipelines already exposes through pipeline logs, diagnostic logs, and Azure resource diagnostics.

    Within what is documented, the supported options for deeper diagnostics are:

    1. Use verbose pipeline logs and agent diagnostics
    • Enable verbose logs per run by choosing Run pipeline > Enable system diagnostics > Run.
    • Or set the variable system.debug to true in the pipeline to make all runs verbose.
    • When system.debug is true, the Agent.Diagnostic variable is automatically set to true, which enables additional agent-side diagnostic logging for supported agent versions.
    • With Agent.Diagnostic = true, extra diagnostic files are produced (for self-hosted agents) such as:
      • Agent.* (environment variables)
      • environment.txt / agent.* (account group membership)
      • cloudinit.*, BrokenPackages.*, waagentConf.txt on Azure VMs

    These diagnostics are surfaced through the pipeline log download mechanism, not as a separate _diag folder for Managed DevOps Pools.
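
    Because those files are produced by the agent itself, a pragmatic stand-in on Managed DevOps Pools is to dump comparable information from inside the job. A minimal sketch, assuming a Linux image:

        steps:
        - script: |
            printenv | sort   # environment variables visible to the job
            id                # user and group membership of the job account
            df -h             # disk layout and free space on the agent
          displayName: 'Dump agent environment (Linux image assumed)'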

    2. View and download all pipeline/agent logs from Azure DevOps
    • Go to the pipeline run summary, select the job and step to inspect individual logs.
    • To download everything, select Download logs on the run.
    • The downloaded archive includes:
      • Worker diagnostic logs (worker_*.log) – what the worker process did on the agent.
      • Agent diagnostic logs (agent_*.log) – how the agent was configured and what happened when it ran (jobs, connections, completion status).
      • Other diagnostic files such as environment.txt and capabilities.txt that describe the environment and agent capabilities.

    These are the supported “agent/environment logs” for troubleshooting task failures, environment configuration, and internal agent behavior.
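
    If you need that same archive programmatically (for example, from a follow-up diagnostics pipeline), the Build REST API exposes a run's logs. A hedged sketch; the $format=zip query string, the buildIdToFetch variable, and the azdoPat secret are assumptions to verify against the REST API reference:

        steps:
        - script: |
            # $(System.CollectionUri) and $(System.TeamProject) are predefined variables.
            # buildIdToFetch is a hypothetical variable holding the run to inspect.
            curl -sSL -u ":$AZDO_PAT" -o logs.zip \
              "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds/$(buildIdToFetch)/logs?\$format=zip&api-version=7.1"
          env:
            AZDO_PAT: $(azdoPat)   # PAT with Build (read) scope, stored as a secret
          displayName: 'Fetch the log archive for another run'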

    3. Capture custom or low-level diagnostics from inside the job

    For deeper investigation of network and environment issues, the supported pattern is to capture additional diagnostics from within the job and publish them as artifacts:

    • Use PublishPipelineArtifact@1 to upload custom traces (for example, packet captures, ETW traces, perf logs) from the agent filesystem:

        # Publishes the contents of the trace folder as a pipeline artifact.
        - task: PublishPipelineArtifact@1
          inputs:
            targetPath: '$(Pipeline.Workspace)/s/trace'   # folder on the agent to upload
            artifact: 'file_result.pcap'                  # artifact name shown on the run
            publishLocation: 'pipeline'
    • For Microsoft-hosted agents, there are documented patterns to capture ETW traces, ProcDump memory dumps, PerfView traces, and resource utilization; the same patterns can be applied to Managed DevOps Pools agents by running the tools in the job and publishing the outputs as artifacts (see the sketch after this list).
    • Verbose logs also inject resource utilization lines (Agent environment resources) into the logs for each step, showing disk, memory, and CPU usage.
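
    As a concrete illustration of that pattern, here is a minimal sketch that captures a short packet trace during the job and publishes it; the Linux image and the presence of tcpdump on it are assumptions:

        steps:
        - script: |
            mkdir -p "$(Pipeline.Workspace)/trace"
            # Start a background capture (tcpdump assumed to be on the image).
            sudo tcpdump -i any -w "$(Pipeline.Workspace)/trace/job.pcap" &
            sleep 2
            # ... run the commands whose network behavior you want to observe ...
            curl -sS https://dev.azure.com/ > /dev/null
            sleep 2
            sudo pkill tcpdump || true
          displayName: 'Capture a short packet trace'
        - task: PublishPipelineArtifact@1
          inputs:
            targetPath: '$(Pipeline.Workspace)/trace'
            artifact: 'network-trace'
            publishLocation: 'pipeline'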
    4. Managed DevOps Pools resource-level diagnostics in Azure

    Managed DevOps Pools themselves are Azure resources and emit Azure Monitor diagnostic logs:

    • In the Managed DevOps Pool resource, go to Monitoring > Diagnostic settings > + Add diagnostic setting.
    • Two categories are currently available:
      1. Managed DevOps Pools Metrics (AllMetrics) – metrics exposed via Azure Monitor.
      2. Resource Provisioning Logs – provisioning events for agents in the pool (same events visible in the Agents view).
    • Supported destinations today are:
      • Archive to a storage account
      • Stream to an event hub

    These logs help with pool/agent provisioning and scaling issues, but they do not replace per-job agent process logs.
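
    The same setting can also be scripted instead of configured in the portal. A hedged sketch using an AzureCLI@2 step; the service connection name, the resource IDs, and the 'ProvisioningLogs' category name are assumptions (list the real category names with az monitor diagnostic-settings categories list first):

        steps:
        - task: AzureCLI@2
          inputs:
            azureSubscription: 'my-arm-connection'   # hypothetical service connection
            scriptType: bash
            scriptLocation: inlineScript
            inlineScript: |
              POOL_ID="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.DevOpsInfrastructure/pools/<pool>"
              # Category names below are assumptions; verify them with:
              #   az monitor diagnostic-settings categories list --resource "$POOL_ID"
              az monitor diagnostic-settings create \
                --name mdp-diagnostics \
                --resource "$POOL_ID" \
                --storage-account "<storage-account-resource-id>" \
                --logs '[{"category":"ProvisioningLogs","enabled":true}]' \
                --metrics '[{"category":"AllMetrics","enabled":true}]'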

    5. Answers to the specific questions
    1. Hidden diagnostic logs downloadable from Azure DevOps?
      • The supported mechanism is the standard Download logs for a pipeline run, combined with enabling system.debug/Agent.Diagnostic. There is no separate hidden log channel for Managed DevOps Pools documented beyond this.
    2. Internal MDP logs that customers can request from Microsoft Support?
      • Documentation only describes customer-accessible diagnostics: pipeline logs (including worker/agent diagnostics) and Azure Monitor diagnostic settings for the Managed DevOps Pool resource. Any additional internal service logs are not documented as customer-accessible.
    3. Equivalent to the _diag folder for self-hosted agents?
      • For self-hosted agents, _diag is where local agent logs live. In the managed scenario, the supported equivalent from a customer perspective is to use:
        • Pipeline log download (worker/agent logs, environment/capabilities files).
        • Agent.Diagnostic/system.debug for richer agent diagnostics.
        • Custom artifacts for any extra traces captured during the job.
      • A direct _diag folder on the underlying VM is not exposed in Managed DevOps Pools.
    4. Upcoming features to expose deeper logging for MDP?
      • The GA “What’s new” notes list new capabilities (more regions, Key Vault integration, proxy support, view agent IP address, move pools, Ubuntu 24.04 support), but do not announce additional agent log exposure features.

    Within the current documentation, the supported and recommended approach for troubleshooting Managed DevOps Pools is therefore:

    • Turn on verbose logging (system.debug = true) so Agent.Diagnostic is enabled.
    • Download the full pipeline logs archive to inspect worker and agent diagnostic logs, environment.txt, and capabilities.txt.
    • Use Azure Monitor diagnostic settings on the Managed DevOps Pool resource for provisioning/metrics data.
    • When deeper OS/network/process-level data is needed, collect it inside the job (for example, packet captures, ETW, ProcDump) and publish via PublishPipelineArtifact.
