Clarification on windowsexporter behavior in Azure Managed Prometheus (AKS)

Aniket Karkale 20 Reputation points
2026-03-25T13:52:31.72+00:00

Description

We are using Azure Managed Prometheus with an AKS cluster that includes Windows node pools.

We configured Istio metrics successfully using annotation-based scraping. However, the behavior of Windows node metrics collection when setting windowsexporter = true in the ama-metrics-settings-configmap is unclear to us.


Current Configuration

In ama-metrics-settings-configmap:

windowsexporter = true
pod-annotation-based-scraping:
  podannotationnamespaceregex = "istio-system|istio-ingress|teams-recording"
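For context, these keys sit under the ConfigMap's data section. A minimal sketch of how this fragment typically appears in the full ama-metrics-settings-configmap (the surrounding field layout and the windowskubeproxy key are assumptions based on the standard add-on ConfigMap, not our exact file):

```yaml
# Sketch of the relevant portion of ama-metrics-settings-configmap
# (kube-system namespace assumed; other settings keys omitted).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ama-metrics-settings-configmap
  namespace: kube-system
data:
  default-scrape-settings-enabled: |-
    windowsexporter = true
    windowskubeproxy = false
  pod-annotation-based-scraping: |-
    podannotationnamespaceregex = "istio-system|istio-ingress|teams-recording"
```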

Observed Behavior

Azure Monitor Agent (ama-metrics-win-node) attempts to scrape:

http://NODE_IP:9182/metrics

No Windows metrics (windows_cpu_*, windows_memory_*) are collected unless a custom windows-exporter DaemonSet is deployed manually.

After deploying a DaemonSet with:

  • hostNetwork: true
  • Port `9182`

→ Metrics start appearing successfully.

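For completeness, the shape of the DaemonSet that made metrics appear was roughly as follows (a trimmed, illustrative sketch based on the upstream prometheus-community/windows_exporter manifest; the image reference and namespace are assumptions, not our exact manifest):

```yaml
# Illustrative windows-exporter DaemonSet sketch, trimmed for brevity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: windows-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: windows-exporter
  template:
    metadata:
      labels:
        app: windows-exporter
    spec:
      hostNetwork: true              # exposes metrics on NODE_IP:9182
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: windows-exporter
          image: ghcr.io/prometheus-community/windows-exporter:latest  # assumed image ref
          ports:
            - containerPort: 9182
              hostPort: 9182
```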

Understanding (Please Confirm)

Based on testing, it appears that:

windowsexporter = true only enables scraping, but does NOT deploy or provide a Windows exporter.

Azure Managed Prometheus expects an existing endpoint on NODE_IP:9182.

Without manually deploying windows-exporter, no Windows Prometheus metrics are collected.


Request for Clarification

Could you please confirm:

Is it expected behavior that Azure Managed Prometheus does NOT deploy windows-exporter automatically?

Is manual deployment of a windows-exporter DaemonSet the recommended approach for collecting Windows node Prometheus metrics?

Are there any official Microsoft-supported images or configurations for Windows exporter in AKS?

Why does the configuration flag windowsexporter = true not provision or enable the exporter automatically?

Is there any planned native support for Windows Prometheus metrics similar to Linux node metrics?


Impact

Documentation ambiguity between OSS Prometheus and Azure Managed Prometheus behavior

Additional manual setup required for Windows monitoring

Potential confusion for customers expecting automatic enablement


Expected Outcome

Clear documentation or confirmation of intended behavior

Recommended best practice for Windows metrics collection in Managed Prometheus


Additional Context

  • AKS cluster with both Linux and Windows node pools

  • Istio metrics scraping working correctly via annotation-based configuration

  • Issue isolated specifically to Windows node metrics



Answer accepted by question author
  Q&A Assist
    2026-03-25T13:53:16.4066667+00:00
    AI-generated content: this answer was created with AI from Q&A Assist.

    The behavior being observed matches the documented design for Windows metrics with Azure Monitor managed service for Prometheus on AKS today.

    1. windowsexporter = true controls scraping, not deployment

    For AKS clusters, the Windows metrics section explicitly instructs to manually install windows-exporter:

    "Manually install windows-exporter on AKS nodes to access Windows metrics by deploying the windows-exporter-daemonset YAML file."

    and then:

    "Enable Windows metrics Set the windowsexporter and windowskubeproxy Booleans to true in your metrics settings ConfigMap and apply it to the cluster."

    This shows that:

    • windowsexporter = true enables the scrape configuration in the Azure Monitor Agent (ama-metrics) so that it scrapes the Windows exporter endpoint.
    • It does not deploy or run the Windows exporter itself.
    • The agent expects a Windows exporter endpoint to already exist on the node (as seen with the scrape to NODE_IP:9182/metrics).

    This matches the observation that metrics only appear after deploying a windows-exporter DaemonSet.

    2. Manual windows-exporter DaemonSet is the expected and recommended approach (for AKS)

    The AKS guidance states:

    "Manually install windows-exporter on AKS nodes to access Windows metrics by deploying the windows-exporter-daemonset YAML file."

    and provides the official YAML and required collectors:

    • Use the upstream windows-exporter-daemonset.yaml from prometheus-community/windows_exporter.
    • Enable collectors: [defaults], container, memory, process, cpu_info.
    • Deploy via:
        kubectl apply -f windows-exporter-daemonset.yaml
      
    • Apply tolerations if Windows nodes are tainted.
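If the Windows node pools are tainted, the toleration on the DaemonSet pod spec might look like the following (the taint key, value, and effect shown are examples; match them to the taints actually applied to your node pools):

```yaml
# Example toleration for a tainted Windows node pool; the key/value
# shown are assumptions -- adjust to your node pool's actual taint.
tolerations:
  - key: "os"
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"
```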

    So for Azure Managed Prometheus on AKS:

    • Yes, it is expected that Managed Prometheus does not automatically deploy windows-exporter.
    • Manual deployment of a windows-exporter DaemonSet is the recommended and documented approach to expose Windows node metrics.
    3. Officially supported images/configurations

    The documentation for AKS and Arc-enabled clusters points to the prometheus-community/windows_exporter project and its DaemonSet manifest as the supported configuration:

    • AKS / Arc guidance both reference:
      • windows-exporter-daemonset.yaml from prometheus-community/windows_exporter.
      • Collectors to enable (container, memory, process, cpu_info, and defaults).

    Additionally, AKS Windows best practices state:

    "Windows Exporter is installed on all Windows nodes in certain regions. To view regional rollout, see AKS Github. With Managed Prometheus and Grafana, you can monitor default collectors included in Windows Exporter on AKS."

    and list the default collectors and port:

    "The default collectors included in Windows Exporter on AKS are: cpu, cpu_info, cs, container, logical_disk, memory, net, os, process, service, system, textfile. The metrics are exposed on port 19182 on the windows node."

    This indicates two modes depending on region/rollout:

    • In regions where AKS has built-in Windows Exporter rollout, the exporter is pre-installed on Windows nodes and exposes metrics on port 19182.
    • For Managed Prometheus integration, the documented path is still to deploy the windows-exporter-daemonset and configure scraping via windowsexporter = true.
    4. Why windowsexporter = true does not provision the exporter

    From the AKS and Arc docs, the responsibilities are clearly separated:

    • Exporter lifecycle (deployment, versioning, resource limits) is left to the cluster/operator (via DaemonSet or platform rollout).
    • Scraping and forwarding to Azure Monitor is handled by the Azure Monitor Metrics add-on (Managed Prometheus) and controlled via the metrics settings ConfigMap.

    Thus, windowsexporter = true is designed to:

    • Turn on the scrape job for Windows exporter endpoints.
    • Not manage or provision the exporter itself.
    5. Planned/"native" support vs. Linux

    The current documented state:

    • For Arc-enabled Kubernetes with the Prometheus solution, the Windows exporter is part of the solution stack:

      "As part of the Prometheus solution in AKS enabled by Arc, the following components are deployed and automatically configured: ... Windows exporter. Once you deploy the Prometheus solution, ... the Windows exporter runs on each Windows node."

    • For Managed Prometheus on AKS, Windows metrics are described as Preview, with explicit manual steps:

      "Windows metric collection is enabled for AKS clusters as of version ... Onboarding to the Azure Monitor Metrics add-on enables the Windows DaemonSet pods to start running on your node pools. ... Manually install windows-exporter on AKS nodes ... Set the windowsexporter and windowskubeproxy Booleans to true ... Enable the recording rules ..."

    The documentation does not state that Managed Prometheus will automatically deploy and manage Windows exporter in the same way as Linux node metrics. Instead, it documents manual deployment plus configuration flags and recording rules as the supported pattern.

    6. Recommended best practice for the described scenario

    For an AKS cluster with Linux and Windows node pools using Azure Managed Prometheus:

    1. Deploy Windows exporter:
      • Use the official windows-exporter-daemonset.yaml from prometheus-community/windows_exporter.
      • Ensure hostNetwork: true and the expected port (e.g., 9182 if that is what ama-metrics is scraping in the environment).
      • Add tolerations for Windows node taints if required.
    2. Enable scraping and rules:
      • In ama-metrics-settings-configmap, set:
        • windowsexporter = true
        • windowskubeproxy = true (if kube-proxy metrics are needed).
      • Enable Windows recording rules:
        • If onboarding via CLI, use --enable-windows-recording-rules.
        • If using ARM/Bicep/Policy, set enableWindowsRecordingRules = true or deploy the provided Windows recording rules ARM template.
    3. Use Managed Grafana dashboards:
      • The Windows-specific dashboards listed in AKS best practices (e.g., "Kubernetes / Compute Resources / Node (Windows)") rely on these metrics and recording rules.
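Assuming onboarding is done through the Azure CLI, enabling Windows recording rules alongside the metrics add-on might look like the following (resource names are placeholders; the flag is quoted from the docs above, but verify its availability against the current az aks CLI reference for your CLI version):

```shell
# Sketch: enable Managed Prometheus with Windows recording rules via Azure CLI.
# <cluster> and <rg> are placeholders.
az aks update \
  --name <cluster> \
  --resource-group <rg> \
  --enable-azure-monitor-metrics \
  --enable-windows-recording-rules
```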

    This aligns with the behavior being observed and with the current documentation.

