Azure Managed Prometheus on Private AKS: TokenConfig.json Missing and Connection Refused to 127.0.0.1:55680

Manakkal. Subash 60 Reputation points
2026-02-16T21:17:55.4233333+00:00


Environment

  • Private AKS cluster (1.30.x) with userDefinedRouting outbound type
  • Hub-spoke network with centralized Private DNS zones (managed by security team)
  • Azure Monitor Workspace + AMPLS with private endpoint (prometheusMetrics sub-resource)
  • All traffic routes through Azure Firewall in hub

Problem

The ama-metrics pods are running but metrics are not ingested. Logs from prometheus-collector show:

No configuration present for the AKS resource
TokenConfig.json does not exist
dial tcp 127.0.0.1:55680: connect: connection refused

The Metrics Extension (ME) sidecar is not starting because it has no destination configuration.
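For context, this is roughly how we inspected the addon pods (a sketch; the `rsName=ama-metrics` label and container names are what the addon applied in our cluster and may differ by addon version):

```shell
# List the Managed Prometheus addon pods in kube-system
kubectl get pods -n kube-system -l rsName=ama-metrics

# The token adapter sidecar is what writes TokenConfig.json; check it first
kubectl logs -n kube-system <ama-metrics-pod> -c addon-token-adapter

# The collector container is where the 127.0.0.1:55680 refusals show up
kubectl logs -n kube-system <ama-metrics-pod> -c prometheus-collector
```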

Current State

az aks show --resource-group <rg> --name <aks> --query "azureMonitorProfile" -o json

Returns:

{
  "metrics": { "enabled": true, "kubeStateMetrics": { ... } }
}

No workspace reference exists: the addon is enabled but orphaned.

What We've Tried

  1. ✅ Created AMPLS and linked DCE to it
  2. ✅ Created private endpoint for AMW, DNS resolves to private IP
  3. ✅ Disabled and re-enabled metrics addon with --azure-monitor-workspace-resource-id
  4. ✅ Verified DCR association exists (MSProm-eastus-<aks>)
  5. ✅ Verified managed identity has Monitoring Metrics Publisher role

Result: azureMonitorProfile still doesn't show workspace link after re-enable.
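For reference, the disable/re-enable cycle in step 3 looked roughly like this (placeholder names; `--azure-monitor-workspace-resource-id` is the documented flag on `az aks update`):

```shell
# Disable the metrics addon entirely
az aks update --disable-azure-monitor-metrics \
  --resource-group <rg> --name <aks>

# Re-enable it with an explicit Azure Monitor Workspace binding
az aks update --enable-azure-monitor-metrics \
  --resource-group <rg> --name <aks> \
  --azure-monitor-workspace-resource-id <amw-resource-id>

# Verify whether the workspace reference was actually persisted
az aks show --resource-group <rg> --name <aks> \
  --query "azureMonitorProfile.metrics" -o json
```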

Questions

Why doesn't re-enabling with --azure-monitor-workspace-resource-id populate the workspace reference? Is this a known issue with private clusters or userDefinedRouting?

What is the complete list of DNS records required for AMPLS with Managed Prometheus? We manage DNS centrally and need all FQDNs/zones.

Can we manually configure or repair the DCR/DCE chain if the automatic setup fails silently due to Azure Policy restrictions?

Thank you.
Subash

Azure Monitor

An Azure service that is used to collect, analyze, and act on telemetry data from Azure and on-premises environments.

Answer accepted by question author
  1. Q&A Assist
    2026-02-16T21:18:16.3666667+00:00
    AI-generated content: this answer was created with AI from Q&A Assist.

    To address the issues you're facing with Azure Managed Prometheus on your private AKS cluster, here are some insights based on the context provided:

    1. Azure Monitor Workspace link: An azureMonitorProfile that shows metrics.enabled: true but no workspace reference usually means the addon was enabled without (or before) a valid workspace binding. Make sure the re-enable command passes --azure-monitor-workspace-resource-id with the full ARM resource ID of the Azure Monitor Workspace. If the reference is still missing after re-enabling, check for Azure Policy assignments that could block creation of the supporting resources in the target resource groups.
    2. TokenConfig.json creation: TokenConfig.json is written by the addon-token-adapter container in the ama-metrics pod; the Metrics Extension (ME) needs it to authenticate and connect to Azure Monitor. If the file never appears, ME has no destination configuration and the collector's connections to 127.0.0.1:55680 are refused. Check the addon-token-adapter container's logs for errors explaining why it is not writing the file.
    3. DNS records for a private AKS cluster with AMPLS: You generally need at least the following:
      • privatelink.eastus.prometheus.monitor.azure.com (the regional Prometheus zone for your Azure Monitor Workspace)
      • Other Azure Monitor private DNS zones (such as privatelink.monitor.azure.com) may also be required depending on your AMPLS configuration; consult the Azure Monitor private link documentation or support for the complete list of FQDNs and zones.
    4. Known issues with the metrics addon: Problems are common with userDefinedRouting when firewall rules block the addon's required Azure Monitor endpoints, and when Azure Policy restricts DCR/DCE autocreation. Verify that outbound traffic to the necessary Azure Monitor FQDNs is allowed and that no policy denies the auto-created resources.
    5. Verifying the DCR/DCE configuration: To manually verify or repair the Data Collection Rule (DCR) and Data Collection Endpoint (DCE) chain, check in the Azure portal that the DCE exists, that the DCR references both the DCE and the Azure Monitor Workspace, that the DCR is associated with the AKS cluster, and that the necessary permissions are granted.

    If the automatic setup fails due to enterprise policy restrictions, you may need to manually configure the metrics addon by creating the required resources (DCR, DCE) and linking them to your Azure Monitor Workspace.
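If auto-creation is blocked, the chain can be built by hand. A rough sketch using the `az monitor data-collection` command group (available via the monitor-control-service CLI extension; names and the rule JSON are placeholders you would fill in for your environment):

```shell
# 1. Create a data collection endpoint in the cluster's region,
#    with public access disabled so it goes through the AMPLS
az monitor data-collection endpoint create \
  --resource-group <rg> --name <dce-name> --location eastus \
  --public-network-access Disabled

# 2. Create the DCR from a JSON definition that streams
#    Microsoft-PrometheusMetrics to the Azure Monitor Workspace
az monitor data-collection rule create \
  --resource-group <rg> --name <dcr-name> --location eastus \
  --rule-file prometheus-dcr.json

# 3. Associate the DCR with the AKS cluster (the addon's own
#    associations follow the MSProm-<region>-<cluster> naming)
az monitor data-collection rule association create \
  --name "MSProm-eastus-<aks>" \
  --resource <aks-resource-id> \
  --rule-id <dcr-resource-id>
```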



1 additional answer

  1. Manakkal. Subash 60 Reputation points
    2026-02-16T23:08:59.9866667+00:00

    The internal firewall was blocking the handler.control.monitor.azure.com FQDN. Adding an allow rule fixed the issue.
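For anyone hitting the same thing with Azure Firewall in the hub, the allow rule looked roughly like this (collection and rule names are placeholders; `az network firewall application-rule create` requires the azure-firewall CLI extension, and `--priority`/`--action` only apply when the collection is first created):

```shell
# Allow the AMA control-plane FQDN outbound from the AKS subnet (HTTPS only)
az network firewall application-rule create \
  --resource-group <hub-rg> --firewall-name <firewall-name> \
  --collection-name aks-monitoring --name ama-handler-control \
  --priority 200 --action Allow \
  --protocols Https=443 \
  --source-addresses <aks-subnet-cidr> \
  --target-fqdns "*.handler.control.monitor.azure.com"
```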

