Query Prometheus metrics using the API and PromQL

Azure Monitor managed service for Prometheus collects metrics from Azure Kubernetes clusters and stores them in an Azure Monitor workspace. PromQL (Prometheus query language) is a functional query language that allows you to query and aggregate time series data. Use PromQL to query and aggregate metrics stored in an Azure Monitor workspace.

This article describes how to query an Azure Monitor workspace using PromQL via the REST API. For more information on PromQL, see Querying Prometheus.

Prerequisites

To query an Azure Monitor workspace using PromQL, you need the following prerequisites:

  • An Azure Kubernetes cluster or remote Kubernetes cluster.
  • Azure Monitor managed service for Prometheus scraping metrics from a Kubernetes cluster.
  • An Azure Monitor workspace where Prometheus metrics are being stored.

Authentication

To query your Azure Monitor workspace, authenticate using Microsoft Entra ID. The API supports Microsoft Entra authentication using client credentials. Register a client app with Microsoft Entra ID and request a token.

To set up Microsoft Entra authentication, follow the steps below:

  1. Register an app with Microsoft Entra ID.
  2. Grant access for the app to your Azure Monitor workspace.
  3. Request a token.

Register an app with Microsoft Entra ID

  1. To register an app, follow the steps in Register an App to request authorization tokens and work with APIs.

Allow your app access to your workspace

Assign the Monitoring Data Reader role to your app so it can query data from your Azure Monitor workspace.

  1. Open your Azure Monitor workspace in the Azure portal.

  2. On the Overview page, take note of your Query endpoint for use in your REST request.

  3. Select Access control (IAM).

  4. Select Add, then Add role assignment from the Access Control (IAM) page.

    A screenshot showing the Azure Monitor workspace overview page.

  5. On the Add role assignment page, search for Monitoring.

  6. Select Monitoring Data Reader, then select the Members tab.

    A screenshot showing the Add role assignment page.

  7. Select Select members.

  8. Search for the app that you registered and select it.

  9. Choose Select.

  10. Select Review + assign.

    A screenshot showing the Add role assignment, select members page.

You've created your app registration and have assigned it access to query data from your Azure Monitor workspace. You can now generate a token and use it in a query.

Request a token

Send the following request from the command prompt or by using a client like Insomnia or PowerShell's Invoke-RestMethod:

curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
-H 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=<your apps client ID>' \
--data-urlencode 'client_secret=<your apps client secret>' \
--data-urlencode 'resource=https://prometheus.monitor.azure.com'

Sample response body:

{
    "token_type": "Bearer",
    "expires_in": "86399",
    "ext_expires_in": "86399",
    "expires_on": "1672826207",
    "not_before": "1672739507",
    "resource": "https:/prometheus.monitor.azure.com",
    "access_token": "eyJ0eXAiOiJKV1Qi....gpHWoRzeDdVQd2OE3dNsLIvUIxQ"
}

Save the access token from the response for use in the following HTTP requests.
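
If you prefer a script over curl, the following is a minimal sketch of the same token request in Python using the requests library (an assumption; any HTTP client works). The tenant ID, client ID, and client secret are the placeholder values from your app registration.

import requests

TENANT_ID = "<tenant ID>"
CLIENT_ID = "<your app's client ID>"
CLIENT_SECRET = "<your app's client secret>"

# Request a token using the client credentials grant.
# requests sends the dictionary as application/x-www-form-urlencoded.
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://prometheus.monitor.azure.com",
    },
)
token_response.raise_for_status()

# Save the access token for the query requests that follow.
access_token = token_response.json()["access_token"]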

Query endpoint

Find your Azure Monitor workspace's query endpoint on the Azure Monitor workspace overview page.

A screenshot showing the query endpoint on the Azure Monitor workspace overview page.

Supported APIs

The following queries are supported:

Instant queries

For more information, see Instant queries

Path: /api/v1/query
Examples:

POST https://k8s-02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query  
--header 'Authorization:  Bearer <access token>'
--header 'Content-Type: application/x-www-form-urlencoded' 
--data-urlencode 'query=sum( \
    container_memory_working_set_bytes \
    * on(namespace,pod) \
    group_left(workload, workload_type) \
    namespace_workload_pod:kube_pod_owner:relabel{ workload_type="deployment"}) by (pod)'

GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query?query=container_memory_working_set_bytes' 
--header 'Authorization:  Bearer <access token>'
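
The following is a minimal Python sketch of an instant query, assuming the requests library, a placeholder query endpoint, and the access token obtained in the previous step.

import requests

# Placeholder query endpoint from the Azure Monitor workspace overview page.
QUERY_ENDPOINT = "https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com"
access_token = "<access token>"

# Instant query: evaluate a PromQL expression at a single point in time.
response = requests.get(
    f"{QUERY_ENDPOINT}/api/v1/query",
    params={"query": "container_memory_working_set_bytes"},
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
print(response.json()["data"]["result"])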

Range queries

For more information, see Range queries
Path: /api/v1/query_range
Examples:

GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query_range?query=container_memory_working_set_bytes&start=2023-03-01T00:00:00.000Z&end=2023-03-20T00:00:00.000Z&step=6h'
--header 'Authorization: Bearer <access token>'

POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/query_range'
--header 'Authorization:  Bearer <access token>'
--header 'Content-Type: application/x-www-form-urlencoded' 
--data-urlencode 'query=up' 
--data-urlencode 'start=2023-03-01T20:10:30.781Z' 
--data-urlencode 'end=2023-03-20T20:10:30.781Z' 
--data-urlencode 'step=6h'
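
As a sketch, the same range query can be sent from Python with the requests library, reusing the placeholder query endpoint and access token from the instant query example.

import requests

QUERY_ENDPOINT = "https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com"
access_token = "<access token>"

# Range query: evaluate the expression over a time window at the given step.
response = requests.post(
    f"{QUERY_ENDPOINT}/api/v1/query_range",
    headers={"Authorization": f"Bearer {access_token}"},
    data={
        "query": "up",
        "start": "2023-03-01T20:10:30.781Z",
        "end": "2023-03-20T20:10:30.781Z",
        "step": "6h",
    },
)
response.raise_for_status()
print(response.json()["data"]["result"])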

Series

For more information, see Series

Path: /api/v1/series
Examples:

POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/series' 
--header 'Authorization: Bearer <access token>'
--header 'Content-Type: application/x-www-form-urlencoded' 
--data-urlencode 'match[]=kube_pod_info{pod="bestapp-123abc456d-4nmfm"}'

GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/series?match[]=container_network_receive_bytes_total{namespace="default-1669648428598"}'
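
A minimal Python sketch of the same series request, assuming the requests library and the placeholders used in the previous examples:

import requests

QUERY_ENDPOINT = "https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com"
access_token = "<access token>"

# Series: return the time series that match the given label matchers.
response = requests.post(
    f"{QUERY_ENDPOINT}/api/v1/series",
    headers={"Authorization": f"Bearer {access_token}"},
    data={"match[]": 'kube_pod_info{pod="bestapp-123abc456d-4nmfm"}'},
)
response.raise_for_status()
print(response.json()["data"])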

Labels

For more information, see Labels.
Path: /api/v1/labels
Examples:

GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/labels'

POST 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/labels'

Label values

For more information, see Label values
Path: /api/v1/label/__name__/values.

Note

Querying label values is supported only for the __name__ label, which returns all metric names. No other /api/v1/label/<label_name>/values requests are supported.

Example:

GET 'https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com/api/v1/label/__name__/values'
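
A minimal Python sketch that lists all metric names via the __name__ label values API, with the same assumed placeholders:

import requests

QUERY_ENDPOINT = "https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com"
access_token = "<access token>"

# Label values for __name__: returns every metric name stored in the workspace.
response = requests.get(
    f"{QUERY_ENDPOINT}/api/v1/label/__name__/values",
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()
metric_names = response.json()["data"]
print(metric_names)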

For the full specification of the open-source Prometheus APIs, see Prometheus HTTP API.

API limitations

The following limitations are in addition to those detailed in the Prometheus specification.

  • Query must be scoped to a metric
    Any time series fetch query (/series, /query, or /query_range) must contain a __name__ label matcher. That is, each query must be scoped to a metric. There can only be one __name__ label matcher in a query.
  • Query /series does not support regular expression filter
  • Supported time range
    • The /query_range API supports a time range of 32 days. This is the maximum time range allowed, including any range selectors specified in the query itself. For example, the query rate(http_requests_total[1h]) for the last 24 hours actually queries data for 25 hours: the 24-hour range plus the 1-hour range selector specified in the query itself.
    • The /series API fetches data for a maximum 12-hour time range. If endTime isn't provided, endTime = time.now(). If the time range is greater than 12 hours, startTime is set to endTime – 12h.
  • Ignored time range
    Start time and end time provided with /labels and /label/__name__/values are ignored, and all retained data in the Azure Monitor workspace is queried.
  • Experimental features
    Experimental features such as exemplars aren't supported.

For more information on Prometheus metrics limits, see Prometheus metrics.

Case sensitivity

Azure managed Prometheus is a case-insensitive system: it treats two time series as the same series if their strings, such as metric names, label names, or label values, differ only in case.

Note

This behavior is different from native open-source Prometheus, which is a case-sensitive system.
Self-managed Prometheus instances running in Azure virtual machines, virtual machine scale sets, or Azure Kubernetes Service (AKS) clusters are case-sensitive systems.

In Azure managed Prometheus the following time series are considered the same:

diskSize{cluster="eastus", node="node1", filesystem="usr_mnt"}
diskSize{cluster="eastus", node="node1", filesystem="usr_MNT"}

The above examples are a single time series in a time series database.

  • Any samples ingested against them are stored as if they're scraped/ingested against a single time series.
  • If the preceding examples are ingested with the same timestamp, one of them is randomly dropped.
  • The casing that's stored in the time series database and returned by a query is unpredictable. Different casing may be returned at different times for the same time series.
  • Any metric name or label name/value matcher present in the query is retrieved from the time series database by making a case-insensitive comparison. If there's a case-sensitive matcher in a query, it's automatically treated as a case-insensitive matcher when making string comparisons (see the sketch at the end of this section).

It's best practice to ensure that a time series is produced or scraped using a single consistent case.

In open source Prometheus, the above time series are treated as two different time series. Any samples scraped/ingested against them are stored separately.
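
As an illustration, the following Python sketch queries the hypothetical diskSize series above with an upper-cased label value, assuming the requests library and the placeholder endpoint and token used earlier. In Azure managed Prometheus the matcher is compared case-insensitively, so the stored series is returned regardless of its casing.

import requests

QUERY_ENDPOINT = "https://k8s02-workspace-abcd.eastus.prometheus.monitor.azure.com"
access_token = "<access token>"

# The matcher value "USR_MNT" differs only in case from the stored "usr_mnt",
# so this query still returns the series in Azure managed Prometheus.
response = requests.get(
    f"{QUERY_ENDPOINT}/api/v1/query",
    params={"query": 'diskSize{cluster="eastus", filesystem="USR_MNT"}'},
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.json()["data"]["result"])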

Frequently asked questions

This section provides answers to common questions.

I am missing all or some of my metrics. How can I troubleshoot?

See the troubleshooting guide for ingesting Prometheus metrics from the managed agent.

Why am I missing metrics that have two labels with the same name but different casing?

Azure managed Prometheus is a case insensitive system. It treats strings, such as metric names, label names, or label values, as the same time series if they differ from another time series only by the case of the string. For more information, see Prometheus metrics overview.

I see some gaps in metric data, why is this occurring?

During node updates, you might see a 1-minute to 2-minute gap in metric data for metrics collected from our cluster-level collectors. The gap occurs because the node the collectors run on is being updated as part of the normal update process, whether your cluster is updated manually or via autoupdate. The update affects cluster-wide targets such as kube-state-metrics and any custom application targets that are specified. This behavior is expected and doesn't affect any of our recommended alert rules.

Next steps

Azure Monitor workspace overview
Manage an Azure Monitor workspace
Overview of Azure Monitor Managed Service for Prometheus
Query Prometheus metrics using Azure workbooks