Azure Workbook Tile Returns “Query Returned No Results” Despite Valid Perf and InsightsMetrics Data

Niket Kumar Singh 695 Reputation points
2025-06-16T18:23:16.8166667+00:00

Hello,

I am seeking assistance regarding an issue with our Azure Monitor workbook (Key Metrics) where tiles designed to display top CPU-consuming processes per VM intermittently return “The query returned no results” — despite validated Perf and InsightsMetrics data being present.

Setup:

Workspace: Single Log Analytics workspace

Telemetry source: Azure VMs connected via a Data Collection Rule (DCR)

Connected VMs: AppServer2, AppServer3, WebServer3 (the only VM that consistently returns data), and others

Workbook configuration:

  • Auto-refresh: 5 min
  • Query time window: Last 30 min
  • Query: Join of Perf and InsightsMetrics to compute normalized CPU

Example KQL (used in the workbook tiles):

// Per-process CPU from Perf, excluding the aggregate and idle instances
Perf
| where ObjectName == "Process" and CounterName == "% Processor Time" and InstanceName != "_Total" and InstanceName != "Idle"
// Pull the per-VM core count from InsightsMetrics (vm.azm.ms/totalCpus tag)
| join kind=inner (
    InsightsMetrics
    | where Namespace == "Processor"
    | where Name == "UtilizationPercentage"
    | where Computer contains "AppServer2"
    | extend tagValues = parse_json(Tags)
    | extend totalCpus = toint(tagValues["vm.azm.ms/totalCpus"])
    | project TimeGenerated, Computer, TotalCPUs = totalCpus
) on Computer
// Per-minute per-process maximum, normalized by total core capacity
| summarize MaxCPU = max(CounterValue) by Computer, TotalCPUs, bin(TimeGenerated, 1m), InstanceName
| extend NormalizedCPU = MaxCPU / (TotalCPUs * 100)
| summarize MaxNormalizedCPU = max(NormalizedCPU) by Computer, InstanceName
| order by MaxNormalizedCPU desc
| take 10



Tiles typically filter per VM (e.g., Computer == "AppServer2").
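For reference, each tile essentially runs the query above with an extra per-VM filter; a minimal sketch of just that filter stage (in the workbook the literal would typically come from a parameter, and "AppServer2" here stands in for whichever VM the tile targets):

// Tile-level filter stage (sketch). An exact == comparison only matches if the
// stored Computer value equals this string character for character.
Perf
| where TimeGenerated > ago(30m)
| where ObjectName == "Process" and CounterName == "% Processor Time"
| where Computer == "AppServer2"
| summarize SampleCount = count() by Computer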

What we observed:

Perf, InsightsMetrics, and Heartbeat data are all present

  • We validated using:
Perf
| where TimeGenerated > ago(1h)
| where ObjectName == "Process" and CounterName == "% Processor Time"
| summarize Count = count() by Computer

InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| extend tags = parse_json(Tags)
| extend TotalCPUs = toint(tags["vm.azm.ms/totalCpus"])
| where isnotnull(TotalCPUs)
| summarize by Computer, TotalCPUs

Heartbeat
| where TimeGenerated > ago(1h)
| summarize arg_max(TimeGenerated, ResourceId) by Computer

All VMs report data as expected in all three tables.

Naming discrepancies in the Computer field between tables

We noticed that Computer values differ slightly between Perf, InsightsMetrics, and Heartbeat.

  • Example mappings:
    • DPSERVERl/virtualMachines/DPSERVER-N
    • AppServer/virtualMachines/AppServer2.xxx.xx.
    • WebServer3/virtualMachines/WebServer3-ixxxxx.xxxx
    This mismatch could impact joins when filtering by Computer, especially at the tile level.
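
To make the mismatch visible, one quick check is to compare the distinct Computer values each table actually stores (a minimal sketch; the 1-hour window is arbitrary):

// List every Computer spelling per table to spot FQDN vs short-name differences.
union withsource=SourceTable Perf, InsightsMetrics, Heartbeat
| where TimeGenerated > ago(1h)
| distinct SourceTable, Computer
| order by Computer asc, SourceTable asc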

Join query works in Log Analytics: When run in Log Analytics, the join query returns expected results across all VMs.

Workbook tile-level filtering fails: When the same query is used in a workbook with per-VM tile filters (e.g., Computer == "AppServer2"), most tiles return “The query returned no results” except for WebServer3.

What we tried:

Explicit binning on both tables (bin(TimeGenerated, 1m))

Introducing a VM map layer from Heartbeat:

// Step 1: Map VMs
let VMMap = Heartbeat
| where TimeGenerated > ago(1h)
| summarize arg_max(TimeGenerated, ResourceId) by Computer
| extend VMName = extract(@"virtualMachines/([^/]+)", 1, ResourceId);

// Step 2: Process Perf Data
let PerfData = Perf
| where TimeGenerated > ago(1h)
| where ObjectName == "Process" and CounterName == "% Processor Time"
| where InstanceName !in ("_Total", "Idle")
| summarize MaxCPU = max(CounterValue) by Computer, InstanceName, TimeBin = bin(TimeGenerated, 1m);

// Step 3: CPU core data
let CPUData = InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| extend tags = parse_json(Tags)
| extend TotalCPUs = toint(tags["vm.azm.ms/totalCpus"])
| summarize TotalCPUs = max(TotalCPUs) by Computer, TimeBin = bin(TimeGenerated, 1m);

// Step 4: Join everything
PerfData
| join kind=inner (CPUData) on Computer, TimeBin
| join kind=leftouter (VMMap) on Computer
| extend NormalizedCPU = MaxCPU / (TotalCPUs * 100)
| project TimeBin, VMName, Computer, InstanceName, MaxCPU, TotalCPUs, NormalizedCPU
| order by NormalizedCPU desc
| take 20


This approach also failed in the workbook, returning "Failed to resolve table or column expression."

Questions:

Could join behavior for per-VM filtered workbook tiles differ from direct Log Analytics queries?

Could TimeGenerated alignment (even with binning) still cause joins to fail at the tile level?

Is this a known change or regression in workbook join behavior?

Is the Heartbeat + VMMap join pattern a recommended approach to align Perf + InsightsMetrics for tiles?

Are there recommended practices for designing per-VM tiles (parameterized or static) that join Perf and InsightsMetrics reliably?

Does Microsoft recommend pre-aggregating or using a custom table instead of direct joins in this scenario?

Additional context:

DCR config verified (% Processor Time counter is enabled)

No data ingestion delay; data is visible in both tables for the same window

The issue appears specific to workbook per-VM tiles

Azure Monitor
Accepted answer
  1. Pranay Reddy Madireddy 6,180 Reputation points Microsoft External Staff Moderator
    2025-06-17T00:28:56.9366667+00:00

    Hi Niket Kumar Singh

    The differences in the Computer field values between the Perf, InsightsMetrics, and Heartbeat tables can significantly impact the join. KQL string joins and == filters are exact (and case-sensitive), so if the Computer values do not match exactly, the join returns no rows and the workbook tiles show no results.

    Join behavior in workbook tiles can also appear different from direct Log Analytics queries, because each tile layers its own per-VM filter on top of the query. If the Computer field does not match at that stage due to the naming discrepancies, no data is displayed.

    Even with explicit binning, if the TimeGenerated values do not land in the same bin across the tables, the time-based join keys will not match. Ensure that the time window the workbook passes to the query is consistent for all tables and that both sides of the join are binned to the same interval.

    In particular, check the filter applied at the tile level. If the filter for a specific VM (e.g., Computer == "AppServer2") does not correspond to the actual Computer value in the joined dataset (for example, a fully qualified name such as AppServer2.<domain>), it will return no results.
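
    As a quick illustration (a minimal sketch, not your exact tile query; "AppServer2" stands in for whichever VM the tile targets), a more tolerant filter avoids the exact-match issue:

    // Case-insensitive match, plus FQDN-style names that begin with the host name.
    Perf
    | where TimeGenerated > ago(1h)
    | where Computer =~ "AppServer2" or Computer startswith "AppServer2."
    | distinct Computer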

    To improve reliability in your workbook tiles:

    Consider creating a mapping table (or normalizing the names directly in the query) to standardize the Computer values across the different data sources before performing the join; a sketch follows this list.

    Test the queries in Log Analytics to ensure they return the expected results before implementing them in the workbook.

    If possible, pre-aggregate the data or use a custom table to simplify the join logic.
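
    As one possible shape for that normalization, here is a minimal sketch only (the NormalizedPerf, CPUCores, and VMKey names are illustrative, and it assumes the FQDN-style names can be reduced to their host-name portion; adjust to your actual naming patterns):

    // Normalize Computer to a lowercase short host name on both sides, then join.
    let NormalizedPerf = Perf
    | where TimeGenerated > ago(1h)
    | where ObjectName == "Process" and CounterName == "% Processor Time"
    | where InstanceName !in ("_Total", "Idle")
    | extend VMKey = tolower(tostring(split(Computer, ".")[0]))
    | summarize MaxCPU = max(CounterValue) by VMKey, InstanceName, TimeBin = bin(TimeGenerated, 1m);
    let CPUCores = InsightsMetrics
    | where TimeGenerated > ago(1h)
    | where Namespace == "Processor" and Name == "UtilizationPercentage"
    | extend TotalCPUs = toint(parse_json(Tags)["vm.azm.ms/totalCpus"])
    | extend VMKey = tolower(tostring(split(Computer, ".")[0]))
    | summarize TotalCPUs = max(TotalCPUs) by VMKey, TimeBin = bin(TimeGenerated, 1m);
    NormalizedPerf
    | join kind=inner (CPUCores) on VMKey, TimeBin
    | extend NormalizedCPU = MaxCPU / (TotalCPUs * 100)
    | summarize MaxNormalizedCPU = max(NormalizedCPU) by VMKey, InstanceName
    | order by MaxNormalizedCPU desc
    | take 10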

    https://learn.microsoft.com/en-us/troubleshoot/azure/virtual-desktop/troubleshoot-insights?tabs=monitor#my-data-isn%27t-displaying-properly
    https://learn.microsoft.com/en-us/entra/identity/conditional-access/howto-conditional-access-insights-reporting#configure-a-conditional-access-policy-in-report-only-mode

    https://learn.microsoft.com/en-us/azure/azure-monitor/visualize/troubleshoot-workbooks

    Kindly let us know if the above helps or if you need further assistance with this issue.


    Please consider upvoting wherever the information provided helps you, as this can be beneficial to other community members.

    1 person found this answer helpful.

0 additional answers
