Azure Monitor Frequently Asked Questions

This Microsoft FAQ is a list of commonly asked questions about Azure Monitor. If you have any other questions, go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so that it can be found quickly and easily.

General

What is Azure Monitor?

Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist you in quickly identifying and responding to critical situations that might affect your application.

What's the difference between Azure Monitor, Log Analytics, and Application Insights?

In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log Analytics and Application Insights haven't changed, although some features have been rebranded to Azure Monitor to better reflect their new scope. The log data engine and query language of Log Analytics is now referred to as Azure Monitor Logs. See Azure Monitor terminology updates.

What does Azure Monitor cost?

Features of Azure Monitor that are automatically enabled, such as collection of metrics and activity logs, are provided at no cost. There's a cost associated with other features, such as log queries and alerting. See the Azure Monitor pricing page for detailed pricing information.

How do I enable Azure Monitor?

Azure Monitor is enabled the moment that you create a new Azure subscription, and activity log and platform metrics are automatically collected. Create diagnostic settings to collect more detailed information about the operation of your Azure resources, and add monitoring solutions and insights to provide extra analysis on collected data for particular services.

How do I access Azure Monitor?

Access all Azure Monitor features and data from the Monitor menu in the Azure portal. The Monitoring section of the menu for different Azure services provides access to the same tools with data filtered to a particular resource. Azure Monitor data is also accessible for various scenarios by using the Azure CLI, PowerShell, and a REST API.

Is there an on-premises version of Azure Monitor?

No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure Monitor can monitor resources that are on-premises and in other clouds.

Can Azure Monitor also monitor on-premises resources?

Yes. In addition to collecting monitoring data from Azure resources, Azure Monitor can collect data from virtual machines and applications in other clouds and on-premises. See Sources of monitoring data for Azure Monitor.

Does Azure Monitor integrate with System Center Operations Manager?

You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See Connect Operations Manager to Azure Monitor.

What IP addresses does Azure Monitor use?

See IP addresses used by Application Insights and Log Analytics for a listing of the IP addresses and ports required for agents and other external resources to access Azure Monitor.

Monitoring data

Where does Azure Monitor get its data?

Azure Monitor collects data from various sources. These sources include logs and metrics from the Azure platform and resources, custom applications, and agents running on virtual machines. Other services such as Microsoft Defender for Cloud and Network Watcher collect data into a Log Analytics workspace so that it can be analyzed with Azure Monitor data. You can also send custom data to Azure Monitor by using the REST API for logs or metrics. See Sources of monitoring data for Azure Monitor.

What data does Azure Monitor collect?

Azure Monitor collects data from various sources into logs or metrics. Each type of data has its own relative advantages, and each supports a particular set of features in Azure Monitor. There's a single metrics database for each Azure subscription, and you can create multiple Log Analytics workspaces to collect logs depending on your requirements. See Azure Monitor data platform.

Is there a maximum amount of data that I can collect in Azure Monitor?

There's no limit to the amount of metric data you can collect, but this data is stored for a maximum of 93 days. See Retention of metrics. There's also no limit on the amount of log data you can collect, although the pricing tier you choose for the Log Analytics workspace and any daily cap you configure can limit how much you ingest each day. See Pricing details.
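
For example, to get a rough view of billable log ingestion by table over the last month, you can run a query like the following against your Log Analytics workspace. This is a minimal sketch that assumes the standard Usage table, where Quantity is reported in megabytes:

Usage
| where TimeGenerated > startofday(ago(31d))
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by DataType
| sort by IngestedGB desc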

How do I access data collected by Azure Monitor?

Insights and solutions provide a custom experience for working with data stored in Azure Monitor. You can work directly with log data by using a log query written in Kusto Query Language (KQL). In the Azure portal, you can write and run queries and interactively analyze data by using Log Analytics. Analyze metrics in the Azure portal with the metrics explorer. See Analyze log data in Azure Monitor and Get started with Azure metrics explorer.
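
For example, a minimal log query like the following, run in Log Analytics, lists when each computer last reported a heartbeat over the past day (assuming you collect agent heartbeats into the workspace):

Heartbeat
| where TimeGenerated > ago(1d)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| sort by LastHeartbeat desc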

Why am I seeing duplicate records in Azure Monitor Logs?

Occasionally, you might notice duplicate records in Azure Monitor Logs. This duplication typically comes from one of the following two conditions (a query sketch for spotting machines that run both agents follows the list):

  • Components in the pipeline have retries to ensure reliable delivery at the destination. Occasionally, this capability might result in duplicates for a small percentage of telemetry items.
  • If the duplicate records come from a virtual machine, you might have both the Log Analytics agent and Azure Monitor Agent installed. If you still need the Log Analytics agent installed, configure the Log Analytics workspace to no longer collect data that's also being collected by the data collection rule used by Azure Monitor Agent.
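
As a rough check for the second condition, the following sketch lists machines whose heartbeats arrive from more than one agent type. It assumes both agents report to the same workspace and that the Heartbeat table's Category column distinguishes them:

Heartbeat
| where TimeGenerated > ago(1d)
| summarize AgentTypes = make_set(Category) by Computer
| where array_length(AgentTypes) > 1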

Solutions and insights

What's an insight in Azure Monitor?

Insights provide a customized monitoring experience for particular Azure services. They use the same metrics and logs as other features in Azure Monitor but might collect extra data and provide a unique experience in the Azure portal. See Insights in Azure Monitor.

To view insights in the Azure portal, see the Insights section of the Monitor menu or the Monitoring section of the service's menu.

What's a solution in Azure Monitor?

Monitoring solutions are packaged sets of logic for monitoring a particular application or service based on Azure Monitor features. They collect log data in Azure Monitor and provide log queries and views for their analysis by using a common experience in the Azure portal. See Monitoring solutions in Azure Monitor.

To view solutions in the Azure portal, select More in the Insights section of the Monitor menu. Select Add to add more solutions to the workspace.

Logs

What's the difference between Azure Monitor Logs and Azure Data Explorer?

Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. Azure Monitor Logs is built on top of Azure Data Explorer and uses the same Kusto Query Language (KQL) with some minor differences. See Azure Monitor log query language differences.

How do I retrieve log data?

All data is retrieved from a Log Analytics workspace by using a log query written using Kusto Query Language (KQL). You can write your own queries or use solutions and insights that include log queries for a particular application or service. See Overview of log queries in Azure Monitor.
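
For example, if you route the activity log to the workspace, a query like the following summarizes activity log operations from the last day by operation and status (a sketch that assumes the standard AzureActivity table):

AzureActivity
| where TimeGenerated > ago(1d)
| summarize Operations = count() by OperationNameValue, ActivityStatusValue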

Can I delete data from a Log Analytics workspace?

Data is removed from a workspace according to its retention period. You can delete specific data for privacy or compliance reasons. See Export and delete private data.

Is Log Analytics storage immutable?

Data in database storage can't be altered after it's ingested, but it can be deleted through the Purge API path that's used for deleting private data. Some certifications require that data also can't be deleted from storage. You can meet that requirement by using data export to a storage account that's configured as immutable storage.

What's a Log Analytics workspace?

All log data collected by Azure Monitor is stored in a Log Analytics workspace. A workspace is essentially a container where log data is collected from various sources. You might have a single Log Analytics workspace for all your monitoring data, or you might have requirements for multiple workspaces. See Design a Log Analytics workspace configuration.

Can you move an existing Log Analytics workspace to another Azure subscription?

You can move a workspace between resource groups or subscriptions but not to a different region. See Move a Log Analytics workspace to a different subscription or resource group.

Why can't I see Query Explorer and Save buttons in Log Analytics?

Query Explorer, Save, and New alert rule buttons aren't available when the query scope is set to a specific resource. To create alerts or save or load a query, Log Analytics must be scoped to a workspace. To open Log Analytics in workspace context, select Logs from the Azure Monitor menu. The last used workspace is selected, but you can select any other workspace. See Log query scope and time range in Azure Monitor Log Analytics.

Why am I getting the error "Register resource provider Microsoft.Insights for this subscription to enable this query" when I open Log Analytics from a VM?

Many resource providers are automatically registered, but you might need to register some resource providers manually. The scope for registration is always the subscription. See Resource providers and types.

Why am I getting an access error message when I open Log Analytics from a VM?

To view VM logs, you need read permission for the workspace that stores the VM logs. If you don't have it, an administrator must grant you permission in Azure.

Why can't I create an AzureDiagnostics table via a template or modify its schema via an API?

AzureDiagnostics is a unique table that Log Analytics creates during data ingestion. Its schema can't be configured. You can apply a retention policy after the table is generated.
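
For example, a query like the following shows which resource providers and log categories are feeding the table (a sketch that assumes the standard AzureDiagnostics columns):

AzureDiagnostics
| where TimeGenerated > ago(1h)
| summarize Records = count() by ResourceProvider, Category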

Metrics

Why are metrics from the guest OS of my Azure virtual machine not showing up in metrics explorer?

Platform metrics are collected automatically for Azure resources. You must perform some configuration, though, to collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and configure the Azure Monitor sink as described in Install and configure Azure Diagnostics extension for Windows (WAD). For Linux, install the Telegraf agent as described in Collect custom metrics for a Linux VM with the InfluxData Telegraf agent.

Prometheus

What's an Azure Monitor workspace?

Prometheus metrics collected by managed service for Prometheus are stored in an Azure Monitor workspace. It's essentially a container where Prometheus metrics from various sources are stored. You might have a single Azure Monitor workspace for all of your Prometheus metrics or you might have requirements for multiple workspaces. See Azure Monitor workspace overview.

What's the difference between an Azure Monitor workspace and a Log Analytics workspace?

An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. Azure Monitor workspaces will eventually contain all metrics collected by Azure Monitor, including native metrics. Currently, the only data hosted by an Azure Monitor workspace is Prometheus metrics. See Azure Monitor workspace overview.

How do I retrieve Prometheus metrics?

All data is retrieved from an Azure Monitor workspace by using queries that are written in Prometheus Query Language (PromQL). You can write your own queries, use queries from the open source community, and use Grafana dashboards that include PromQL queries. See the Prometheus project.

Can I delete Prometheus metrics from an Azure Monitor workspace?

Data is removed from the Azure Monitor workspace according to its data retention period, which is 18 months.

Can I view my Prometheus metrics in Azure Monitor metrics explorer?

Metrics explorer in Azure Monitor doesn't currently support visualizing Prometheus metric data. Consider using Azure Managed Grafana to visualize your Prometheus metrics.

Can I use Azure Managed Grafana in a different region than my Azure Monitor workspace and managed service for Prometheus?

Yes. When you use managed service for Prometheus, you can create your Azure Monitor workspace in any of the supported regions. Your Azure Kubernetes Service clusters can be in any region and send data into an Azure Monitor workspace in a different region. Azure Managed Grafana can also be in a different region than where you created your Azure Monitor workspace.

When I use managed service for Prometheus, can I store data for more than one cluster in an Azure Monitor workspace?

Yes. Managed service for Prometheus is intended to enable scenarios where you can store data from several Azure Kubernetes Service clusters in a single Azure Monitor workspace. See Azure Monitor workspace overview.

What types of resources can send Prometheus metrics to managed service for Prometheus?

Our agent can be used on Azure Kubernetes Service (AKS) clusters and Azure Arc-enabled Kubernetes clusters. It's installed as a managed add-on for AKS clusters and as an extension for Azure Arc-enabled Kubernetes clusters, and you can configure it to collect the data you want. You can also configure remote write on Kubernetes clusters running in Azure, another cloud, or on-premises by following our instructions for enabling remote write.

If you use the Azure portal to enable Prometheus metrics collection and install the AKS add-on or Azure Arc-enabled Kubernetes extension from the Insights page of your cluster, it enables logs collection into Log Analytics and Prometheus metrics collection into managed service for Prometheus.

Does enabling managed service for Prometheus on my Azure Kubernetes Service cluster also enable Container insights?

You have options for how you can collect your Prometheus metrics. If you use the Azure portal and enable Prometheus metrics collection and install the Azure Kubernetes Service (AKS) add-on from the Azure Monitor workspace UX, it won't enable Container insights and collection of log data. When you go to the Insights page on your AKS cluster, you're prompted to enable Container insights to collect log data.

If you use the Azure portal and enable Prometheus metrics collection and install the AKS add-on from the Insights page of your AKS cluster, it enables log collection into a Log Analytics workspace and Prometheus metrics collection into an Azure Monitor workspace.

I'm missing all or some of my metrics. How can I troubleshoot?

See the troubleshooting guide for ingesting Prometheus metrics from the managed agent.

Why am I missing metrics that have two labels with the same name but different casing?

Metrics that have two label names that differ only in casing are treated as having duplicate label names, so the time series is dropped on ingestion. For example, the time series my_metric{ExampleLabel="label_value_0", examplelabel="label_value_1"} is dropped because ExampleLabel and examplelabel are seen as the same label name.

Why do I see gaps in metric data?

During node updates, you might see a 1- to 2-minute gap in metric data for metrics collected from our cluster-level collectors, because the node they run on is being updated as part of the normal update process. This affects cluster-wide targets such as kube-state-metrics and any custom application targets you've specified, and it occurs whether your cluster is updated manually or via auto-update. This is expected behavior, and none of our recommended alert rules are affected by it.

Change Analysis

Does using Change Analysis incur cost?

You can use Change Analysis at no extra cost. After you register the Microsoft.ChangeAnalysis resource provider, everything that Change Analysis supports is available to you.

How can I enable Change Analysis for a web application?

Enable Change Analysis for web application in-guest changes by using the Diagnose and solve problems tool.

Alerts

What's an alert in Azure Monitor?

Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. There are multiple kinds of alerts:

  • Metric: Metric value exceeds a threshold.
  • Log query: Results of a log query match defined criteria.
  • Activity log: Activity log event matches defined criteria.
  • Web test: Results of availability test match defined criteria.

See Overview of alerts in Azure.

What's an action group?

An action group is a collection of notifications and actions that can be triggered by an alert. Multiple alerts can use a single action group allowing you to use common sets of notifications and actions. See Create and manage action groups in the Azure portal.

What's an action rule?

An action rule lets you modify the behavior of a set of alerts that match certain criteria. For example, you can disable alert actions during a maintenance window. You can also use an action rule to apply an action group to a set of alerts rather than assigning the action group to each alert rule directly. See Action rules.

Agents

Does Azure Monitor require an agent?

An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual machines can be located in Azure, another cloud environment, or on-premises. See Azure Monitor Agent overview.

What's the difference between the Azure Monitor agents?

Azure Monitor Agent is the new, improved agent that consolidates features from all the other legacy monitoring agents. It also provides extra benefits like centralized data collection, filtering, and multihoming. See Azure Monitor Agent overview.

The legacy agents include:

  • Azure Diagnostics extension is for Azure Virtual Machines and collects data to Azure Monitor Metrics, Azure Storage, and Azure Event Hubs.
  • The Log Analytics agent is for virtual machines in Azure, another cloud environment, or on-premises and collects data to Azure Monitor Logs. It will be retired on August 31, 2024.

Does my agent traffic use my Azure ExpressRoute connection?

Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See ExpressRoute documentation for a description of the different types of ExpressRoute traffic.

How can I confirm that the Log Analytics agent can communicate with Azure Monitor?

From Control Panel on the agent computer, select Security & Settings > Microsoft Monitoring Agent. Under the Azure Log Analytics (OMS) tab, a green check mark icon confirms that the agent can communicate with Azure Monitor. A yellow warning icon means the agent is having issues. One common reason is the Microsoft Monitoring Agent service has stopped. Use service control manager to restart the service.

How do I stop the Log Analytics agent from communicating with Azure Monitor?

For agents connected to Log Analytics directly, open Control Panel and select Microsoft Monitoring Agent. Under the Azure Log Analytics (OMS) tab, remove all workspaces listed. In System Center Operations Manager, remove the computer from the Log Analytics managed computers list. Operations Manager updates the configuration of the agent to no longer report to Log Analytics.

How much data is sent per agent?

The amount of data sent per agent depends on:

  • The solutions you've enabled.
  • The number of logs and performance counters being collected.
  • The volume of data in the logs.

See Analyze usage in a Log Analytics workspace.

For computers that are able to run the WireData agent, use the following query to see how much data is being sent:

WireData
| where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
| where Direction == "Outbound"
| summarize sum(TotalBytes) by Computer 

How much network bandwidth is used by the Microsoft Monitoring Agent when it sends data to Azure Monitor?

Bandwidth is a function of the amount of data sent. Data is compressed as it's sent over the network.

How can I be notified when data collection from the Log Analytics agent stops?

Use the steps described in Create a new log alert to be notified when data collection stops. Use the following settings for the alert rule:

  • Define alert condition: Specify your Log Analytics workspace as the resource target.
  • Alert criteria:
    • Signal Name: Custom log search.
    • Search query: Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m).
    • Alert logic: Based on number of results, Condition Greater than, Threshold value 0.
    • Evaluated based on: Period (in minutes) 30, Frequency (in minutes) 10.
  • Define alert details:
    • Name: Data collection stopped.
    • Severity: Warning.

Specify an existing or new action group so that when the log alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.

What are the firewall requirements for Log Analytics agents?

The Log Analytics agent requires outbound access over TCP port 443 to the Log Analytics endpoints, including *.ods.opinsights.azure.com, *.oms.opinsights.azure.com, *.blob.core.windows.net, and *.agentsvc.azure-automation.net. See the Log Analytics agent overview for the full list.

Azure Monitor Agent

Why should I use Azure Monitor Agent or migrate from the Log Analytics agent to Azure Monitor Agent?

Azure Monitor Agent replaces the Log Analytics agent (also known as the Microsoft Monitoring Agent), the Azure Diagnostics extension, and the Telegraf agent. Azure Monitor Agent offers a higher events-per-second (EPS) rate with a lower footprint. It provides enhanced filtering features, scalable deployment management, and configuration by using data collection rules and Azure policies.

Azure Monitor Agent hasn't yet reached full parity with the Microsoft Monitoring Agent, but we continue to add features and support. The Microsoft Monitoring Agent will be retired on August 31, 2024.

See Azure Monitor Agent overview.

What's the upgrade path from Log Analytics agents to Azure Monitor Agent? How do we migrate?

See the Azure Monitor Agent migration guidance.

What's the upgrade path from the Log Analytics agent to Azure Monitor Agent for monitoring System Center Operations Manager? Can we use Azure Monitor Agent for System Center Operations Manager scenarios?

Here's how Azure Monitor Agent affects the two System Center Operations Manager monitoring scenarios:

  • Scenario 1: Monitoring the Windows operating system of System Center Operations Manager. The upgrade path is the same as for any other machine. You can migrate from the Microsoft Monitoring Agent (versions 2016 and 2019) to Azure Monitor Agent as soon as your required parity features are available on Azure Monitor Agent.
  • Scenario 2: Onboarding or connecting System Center Operations Manager to Log Analytics workspaces. Use a System Center Operations Manager connector for Log Analytics/Azure Monitor. Neither the Microsoft Monitoring Agent nor Azure Monitor Agent is required to be installed on the Operations Manager management server. As a result, there's no impact to this use case from an Azure Monitor Agent perspective.

Will Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel?

Review the list of Azure Monitor Agent extensions currently available in preview. These extensions provide the same solutions and services, but they work with the new Azure Monitor Agent instead. A solution or service might install more extensions to collect extra data or to perform transformation or processing as required, and then use Azure Monitor Agent to route the final data to Azure Monitor.

A diagram of the new extensibility architecture shows solution and service extensions feeding their data through Azure Monitor Agent, which routes it to Azure Monitor.

Which Log Analytics solutions are supported on the new Azure Monitor Agent?

How can I collect Windows security events by using the new Azure Monitor Agent?

There are two ways to collect security events with the new agent when you send them to a Log Analytics workspace (a query sketch for checking which table they land in follows this list):

  • You can use Azure Monitor Agent to natively collect security events, the same as other Windows events. These flow to the Event table in your Log Analytics workspace. If you want security events to flow into the SecurityEvent table instead, you can create the required data collection rule (DCR) with PowerShell or with Azure Policy.
  • If you have Microsoft Sentinel enabled on the workspace, the security events flow via Azure Monitor Agent into the SecurityEvent table instead (the same as when you use the Log Analytics agent). This scenario always requires the solution to be enabled first.
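
To check which table your security events are landing in, you can run a sketch like the following against the workspace (isfuzzy lets the union succeed even if one of the tables doesn't exist yet):

union isfuzzy=true Event, SecurityEvent
| where TimeGenerated > ago(1h)
| summarize Events = count() by Type, Computer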

Can Azure Monitor Agent and the Log Analytics agent coexist side by side?

Yes, they can, but there are certain considerations. Read more about agent coexistence.

Will I duplicate events if I use Azure Monitor Agent and the Log Analytics agent on the same machine?

If you're collecting the same events with both agents, duplication occurs. For example, the legacy agent might collect data based on the workspace configuration that's also collected by a data collection rule, or you might be collecting security events with the legacy agent while also enabling the Windows security events connector for Azure Monitor Agent in Microsoft Sentinel.

Limit duplicate events to the transition period only. After you've fully tested the data collection rule and verified its data collection, disable collection for the workspace and disconnect any Microsoft Monitoring Agent data connectors.
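
One rough way to spot duplication during the transition is to look for identical Windows event records that were ingested more than once. This sketch assumes the events flow to the Event table from both agents:

Event
| where TimeGenerated > ago(1h)
| summarize Copies = count() by Computer, EventLog, EventID, TimeGenerated, RenderedDescription
| where Copies > 1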

Is Azure Monitor Agent at parity with the Log Analytics agents?

Review the current limitations of Azure Monitor Agent when compared with Log Analytics agents.

Does Azure Monitor Agent support non-Azure environments like other clouds or on-premises?

Both on-premises machines and machines connected to other clouds are supported for servers today, after you have the Azure Arc agent installed. For purposes of running Azure Monitor Agent and data collection rules, the Azure Arc requirement comes at no extra cost or resource consumption. The Azure Arc agent is only used as an installation mechanism. You don't need to enable the paid management features if you don't want to use them.

Does Azure Monitor Agent support private links and network isolation?

Yes, it does, via data collection endpoints created and added to an Azure Monitor Private Link Scope. Walk through the setup steps.

Does Azure Monitor Agent support auditd logs on Linux or AUOMS?

Yes, but you need to onboard to Defender for Cloud (previously Azure Security Center). It's available as an extension to Azure Monitor Agent, which collects Linux auditd logs via AUOMS.

Is Azure Arc required for Azure Active Directory-joined machines?

No. Azure Active Directory-joined (or hybrid Azure Active Directory-joined) machines running Windows 10 or 11 (client OS) do not require Azure Arc to be installed. Instead, you can use the Windows MSI installer for Azure Monitor Agent, which is currently available in preview.

Why do I need to install the Azure Arc Connected Machine agent to use Azure Monitor Agent?

Azure Monitor Agent authenticates to your workspace via managed identity, which is created when you install the Connected Machine agent. Managed Identity is a more secure and manageable authentication solution from Azure. The legacy Log Analytics agent authenticated by using the workspace ID and key instead, so it didn't need Azure Arc.

What impact does installing the Azure Arc Connected Machine agent have on my non-Azure machine?

There's no impact to the machine after the Azure Arc Connected Machine agent is installed. It hardly uses system or network resources and is designed to have a low footprint on the host where it's run.

What types of machines does the new Azure Monitor Agent support?

You can install Azure Monitor Agent directly on virtual machines, Virtual Machine Scale Sets, and Azure Arc-enabled servers. You can also install it on devices like workstations and desktops running Windows 10 or 11 by using the Windows MSI installer for Azure Monitor Agent, which is currently in preview.

Can we filter events by using event ID? Is more granular event filtering possible by using the new Azure Monitor Agent?

Yes. You can use Xpath queries for filtering Windows event logs. Learn more. For performance counters, you can specify the counters you want to collect and exclude the ones you don't need. For Syslog on Linux, you can choose facilities and the log level for each facility to collect.

Does the new Azure Monitor Agent support sending data to Azure Event Hubs and Azure Storage accounts?

No, not yet. The new agent, along with data collection rules, will support sending data to both Event Hubs and Azure Storage accounts in the future, as Azure Monitor Agent converges with the diagnostics extensions.

Does the new Azure Monitor Agent have hardening support for Linux?

Hardening support for Linux isn't available yet.

What roles do I need to create for a data collection rule that collects events from my servers?

If I create data collection rules that contain the same event ID and associate it to the same VM, will the events be duplicated?

Yes. To avoid duplication, make sure the event selection you make in your data collection rules doesn't contain duplicate events.

How can I validate my Xpath queries on Azure Monitor Agent?

Use the Get-WinEvent PowerShell cmdlet -FilterXPath parameter to test the validity of an XPath query. For more information, see the tip provided in the Windows agent-based connections instructions. The Get-WinEvent PowerShell cmdlet supports up to 23 expressions. Azure Monitor data collection rules support up to 20. Also, > and < characters must be encoded as &gt; and &lt; in your data collection rule.

Visualizations

Why can't I see View Designer?

View Designer is only available to users assigned Contributor permissions or higher in the Log Analytics workspace.

Application Insights

Configuration problems

I'm having trouble setting up my:

I get no data from my server:

How many Application Insights resources should I deploy?

See How many Application Insights resources should I deploy?

Can I use Application Insights with the following apps and services?

Is it free?

Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each month free of charge. The free allowance is large enough to cover development and publishing an app for a few users. You can set a cap to prevent more than a specified amount of data from being processed.

Larger volumes of telemetry are charged by the gigabyte. We provide some tips on how to limit your charges.

The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It's suitable if you want to use Continuous Export on a large scale.

Read the pricing plan.

How much does it cost?

  • Open the Usage and estimated costs page in an Application Insights resource. There's a chart of recent usage. You can set a data volume cap, if you want.
  • To see your bills across all resources:
    1. Open the Azure portal.
    2. Search for Cost Management and use the Cost analysis pane to see forecasted costs.
    3. Search for Cost Management and Billing and open the Billing scopes pane to see current charges across subscriptions.

What does Application Insights modify in my project?

The details depend on the type of project. For a web application:

  • Adds these files to your project:
    • ApplicationInsights.config
    • ai.js
  • Installs these NuGet packages:
    • Application Insights API: The core API
    • Application Insights API for Web Applications: Used to send telemetry from the server
    • Application Insights API for JavaScript Applications: Used to send telemetry from the client
  • The packages include these assemblies:
    • Microsoft.ApplicationInsights
    • Microsoft.ApplicationInsights.Platform
  • Inserts items into:
    • Web.config
    • packages.config
  • (For new projects only. You add Application Insights to an existing project manually.) Inserts snippets into the client and server code to initialize them with the Application Insights resource ID. For example, in an MVC app, code is inserted into the main page Views/Shared/_Layout.cshtml.

How do I upgrade from older SDK versions?

See the release notes for the SDK appropriate to your type of application.

How can I change which Azure resource my project sends data to?

In Solution Explorer, right-click ApplicationInsights.config and select Update Application Insights. You can send the data to an existing or new resource in Azure. The update wizard changes the instrumentation key in ApplicationInsights.config, which determines where the server SDK sends your data. Unless you clear Update all, it also changes the key where it appears in your webpages.

Do new Azure regions require the use of connection strings?

New Azure regions require the use of connection strings instead of instrumentation keys. The connection string identifies the resource that you want to associate with your telemetry data. It also lets you modify the endpoints your resource uses as a destination for your telemetry. Copy the connection string and add it to your application's code or to an environment variable.

Should I use connection strings or instrumentation keys?

We recommend that you use connection strings instead of instrumentation keys.

Can I use providers('Microsoft.Insights', 'components').apiVersions[0] in my Azure Resource Manager deployments?

We don't recommend using this method of populating the API version. The newest version can represent preview releases, which might contain breaking changes. Even with newer nonpreview releases, the API versions aren't always backward compatible with existing templates. In some cases, the API version might not be available to all subscriptions.

What telemetry does Application Insights collect?

From server web apps:

From client webpages:

From other sources, if you configure them:

Can I filter out or modify some telemetry?

Yes. In the server, you can write:

  • Telemetry Processor to filter or add properties to selected telemetry items before they're sent from your app.
  • Telemetry Initializer to add properties to all items of telemetry.

Learn more about ASP.NET, ASP.NET Core, JavaScript (Web), Python or Java.

How are city, country/region, and other geolocation data calculated?

We look up the IP address (IPv4 or IPv6) of the web client:

  • Browser telemetry: We collect the sender's IP address.
  • Server telemetry: The Application Insights module collects the client IP address. It's not collected if X-Forwarded-For is set.
  • To learn more about how IP address and geolocation data is collected in Application Insights, see Geolocation and IP address handling.

You can configure ClientIpHeaderTelemetryInitializer to take the IP address from a different header. In some systems, for example, it's moved by a proxy, load balancer, or CDN to X-Originating-IP. Learn more.

You can use Power BI to display your request telemetry on a map if you've migrated to a workspace-based resource.

How long is data retained in the portal? Is it secure?

See Data retention and privacy.

What happens to Application Insights telemetry when a server or device loses connection with Azure?

All of our SDKs, including the web SDK, include reliable transport or robust transport. When the server or device loses connection with Azure, telemetry is stored locally on the file system (Server SDKs) or in HTML5 Session Storage (Web SDK). The SDK periodically retries to send this telemetry until our ingestion service considers it "stale" (48 hours for logs, 30 minutes for metrics). Stale telemetry is dropped. In some cases, such as when local storage is full, retry won't occur.

Is personal data sent in the telemetry?

You can send personal data if your code sends such data. It can also happen if variables in stack traces include personal data. Your development team should conduct risk assessments to ensure that personal data is properly handled. Learn more about data retention and privacy.

All octets of the client web address are always set to 0 after the geolocation attributes are looked up.

The Application Insights JavaScript SDK doesn't include any personal data in its autocollection by default. However, some personal data used in your application might be picked up by the SDK (for example, full names in window.title or account IDs in XHR URL query parameters). For custom personal data masking, add a telemetry initializer.

Why is my instrumentation key visible in my webpage source?

  • This visibility is common practice in monitoring solutions.
  • It can't be used to steal your data.
  • It could be used to skew your data or trigger alerts.
  • We haven't heard that any customer has had such problems.

You can:

  • Use two separate instrumentation keys (separate Application Insights resources) for client and server data.
  • Write a proxy that runs in your server and have the web client send data through that proxy.

How do I see POST data in Diagnostic Search?

We don't log POST data automatically, but you can use a TrackTrace call. To do so, put the data in the message parameter. The message parameter has a longer size limit than string properties, although you can't filter on it.

Should I use single or multiple Application Insights resources?

Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.

How do I dynamically change the instrumentation key?

What are the user and session counts?

  • The JavaScript SDK sets a user cookie on the web client, to identify returning users, and a session cookie to group activities.
  • If there's no client-side script, you can set cookies at the server.
  • If one real user uses your site from different browsers, in-private/incognito sessions, or different machines, they're counted more than once.
  • To identify a signed-in user across machines and browsers, add a call to setAuthenticatedUserContext().
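
As a sketch of how these counts surface in the data (assuming the Application Insights pageViews table with its standard user_Id and session_Id columns), the following query counts distinct users and sessions over the past week:

pageViews
| where timestamp > ago(7d)
| summarize Users = dcount(user_Id), Sessions = dcount(session_Id)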

How does Application Insights generate device information like browser, OS, language, and model?

The browser passes the User Agent string in the HTTP header of the request. The Application Insights ingestion service uses UA Parser to generate the fields you see in the data tables and experiences. As a result, Application Insights users are unable to change these fields.

Occasionally, this data might be missing or inaccurate if the user or enterprise disables sending User Agent in browser settings. The UA Parser regexes might not include all device information. Or Application Insights might not have adopted the latest updates.
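
To see the device fields that the ingestion service produces, you can break down page views by browser and operating system, as in this sketch (assuming the standard client_Browser and client_OS columns):

pageViews
| where timestamp > ago(7d)
| summarize PageViews = count() by client_Browser, client_OS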

How would I measure the impact of a monitoring campaign?

PageView telemetry includes the URL, so you can parse the UTM parameters by using a regex function in Kusto, as shown in the sketch that follows.
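
Here's a minimal sketch of that approach. It assumes your campaign links carry a utm_campaign query parameter and that you're querying the classic pageViews table:

pageViews
| where timestamp > ago(30d)
| extend Campaign = extract(@"utm_campaign=([^&]+)", 1, url)
| where isnotempty(Campaign)
| summarize Views = count() by Campaign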

Have I enabled everything in Application Insights?

What you should see | How to get it | Why you want it
Availability charts | Web tests | Know your web app is up.
Server app perf: response times, ... | Add Application Insights to your project or install Azure Monitor Application Insights Agent on the server (or write your own code to track dependencies) | Detect perf issues.
Dependency telemetry | Install Azure Monitor Application Insights Agent on the server | Diagnose issues with databases or other external components.
Stack traces from exceptions | Insert TrackException calls in your code (some are reported automatically) | Detect and diagnose exceptions.
Search log traces | Add a logging adapter | Diagnose exceptions and perf issues.
Client usage basics: page views, sessions, ... | JavaScript initializer in webpages | Usage analytics.
Client custom metrics | Tracking calls in webpages | Enhance the user experience.
Server custom metrics | Tracking calls in the server | Business intelligence.

Why are the counts in Search and Metrics charts unequal?

Sampling reduces the number of telemetry items (like requests and custom events) that are sent from your app to the portal. In Search, you see the number of items received. In metric charts that display a count of events, you see the number of original events that occurred.

Each item that's transmitted carries an itemCount property that shows how many original events that item represents. To observe sampling in operation, you can run this query in Log Analytics:

    requests | summarize original_events = sum(itemCount), transmitted_events = count()

How do I move an Application Insights resource to a new region?

Moving existing Application Insights resources from one region to another is currently not supported. Historical data that you've collected can't be migrated to a new region. The only partial workaround is to:

  1. Create a new Application Insights resource (classic or workspace based) in the new region.
  2. Re-create all unique customizations specific to the original resource in the new resource.
  3. Modify your application to use the new region resource's instrumentation key or connection string.
  4. Test to confirm that everything is continuing to work as expected with your new Application Insights resource.
  5. At this point, you can either keep or delete the original Application Insights resource. If you delete a classic Application Insights resource, all historical data is lost. If the original resource was workspace based, its data remains in Log Analytics. Keeping the original Application Insights resource allows you to access its historical data until its data retention settings run out.

Unique customizations that commonly need to be manually re-created or updated for the resource in the new region include but aren't limited to:

  • Re-create custom dashboards and workbooks.
  • Re-create or update the scope of any custom log/metric alerts.
  • Re-create availability alerts.
  • Re-create any custom Azure role-based access control settings that are required for your users to access the new resource.
  • Replicate settings involving ingestion sampling, data retention, daily cap, and custom metrics enablement. These settings are controlled via the Usage and estimated costs pane.
  • Any integration that relies on API keys, such as release annotations and live metrics secure control channel. You need to generate new API keys and update the associated integration.
  • Continuous export in classic resources must be configured again.
  • Diagnostic settings in workspace-based resources must be configured again.

Note

If the resource you're creating in a new region is replacing a classic resource, we recommend that you explore the benefits of creating a new workspace-based resource. Alternatively, migrate your existing resource to workspace based.

Automation

Configure Application Insights

You can write PowerShell scripts by using Azure Resource Manager to:

  • Create and update Application Insights resources.
  • Set the pricing plan.
  • Get the instrumentation key.
  • Add a metric alert.
  • Add an availability test.

You can't set up a metrics explorer report or set up continuous export.

Query the telemetry

Use the REST API to run Log Analytics queries.

How can I set an alert on an event?

Azure alerts are only on metrics. Create a custom metric that crosses a value threshold whenever your event occurs. Then set an alert on the metric. You get a notification whenever the metric crosses the threshold in either direction. You won't get a notification until the first crossing, no matter whether the initial value is high or low. There's always a latency of a few minutes.

Are there data transfer charges between an Azure web app and Application Insights?

  • If your Azure web app is hosted in a datacenter where there's an Application Insights collection endpoint, there's no charge.
  • If there's no collection endpoint in your host datacenter, your app's telemetry incurs Azure outgoing charges.

This answer depends on the distribution of our endpoints, not on where your Application Insights resource is hosted.

Can I send telemetry to the Application Insights portal?

We recommend that you use our SDKs and use the SDK API. There are variants of the SDK for various platforms. These SDKs handle processes like buffering, compression, throttling, and retries. However, the ingestion schema and endpoint protocol are public.

Can I monitor an intranet web server?

Yes, but you need to allow traffic to our services by either firewall exceptions or proxy redirects:

  • QuickPulse https://rt.services.visualstudio.com:443
  • ApplicationIdProvider https://dc.services.visualstudio.com:443
  • TelemetryChannel https://dc.services.visualstudio.com:443

See IP addresses used by Azure Monitor to review our full list of services and IP addresses.

Firewall exception

Allow your web server to send telemetry to our endpoints.

Gateway redirect

Route traffic from your server to a gateway on your intranet by overwriting endpoints in your configuration. If the Endpoint properties aren't present in your config, the modules and channel use the default values shown in the following ApplicationInsights.config example.

Your gateway should route traffic to our endpoint's base address. In your configuration, replace the default values with http://<your.gateway.address>/<relative path>.

Example ApplicationInsights.config with default endpoints:

<ApplicationInsights>
  ...
  <TelemetryModules>
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
      <QuickPulseServiceEndpoint>https://rt.services.visualstudio.com/QuickPulseService.svc</QuickPulseServiceEndpoint>
    </Add>
  </TelemetryModules>
    ...
  <TelemetryChannel>
    <EndpointAddress>https://dc.services.visualstudio.com/v2/track</EndpointAddress>
  </TelemetryChannel>
  ...
  <ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
    <ProfileQueryEndpoint>https://dc.services.visualstudio.com/api/profiles/{0}/appId</ProfileQueryEndpoint>
  </ApplicationIdProvider>
  ...
</ApplicationInsights>

Note

ApplicationIdProvider is available starting in v2.6.0.

Proxy passthrough

To achieve proxy passthrough, configure a machine-level proxy or an application-level proxy. See DefaultProxy.

Example Web.config:

<system.net>
    <defaultProxy>
      <proxy proxyaddress="http://xx.xx.xx.xx:yyyy" bypassonlocal="true"/>
    </defaultProxy>
</system.net>

Can I run Availability web tests on an intranet server?

Our web tests run on points of presence that are distributed around the globe. There are two solutions:

  • Firewall door: Allow requests to your server from the long and changeable list of web test agents.
  • Custom code: Write your own code to send periodic requests to your server from inside your intranet. You could run Visual Studio web tests for this purpose. The tester could send the results to Application Insights by using the TrackAvailability() API.

How long does it take for telemetry to be collected?

Most Application Insights data has a latency of under 5 minutes. Some data can take longer, which is typical for larger log files. See the Application Insights service-level agreement.

Are the HTTP 502 and 503 responses always captured by Application Insights?

No. The "502 bad gateway" and "503 service unavailable" errors aren't always captured by Application Insights. If only client-side JavaScript is being used for monitoring, this behavior would be expected because the error response is returned prior to the page containing the HTML header with the monitoring JavaScript snippet being rendered.

If the 502 or 503 response was sent from a server with server-side monitoring enabled, the errors are collected by the Application Insights SDK.

Even when server-side monitoring is enabled on an application's web server, sometimes a 502 or 503 error isn't captured by Application Insights. Many modern web servers don't allow a client to communicate directly. Instead, they employ solutions like reverse proxies to pass information back and forth between the client and the front-end web servers.

In this scenario, a 502 or 503 response might be returned to a client because of an issue at the reverse proxy layer, so it isn't captured out-of-box by Application Insights. To help detect issues at this layer, you might need to forward logs from your reverse proxy to Log Analytics and create a custom rule to check for 502 or 503 responses. To learn more about common causes of 502 and 503 errors, see Troubleshoot HTTP errors of "502 bad gateway" and "503 service unavailable" in Azure App Service.
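
For example, if your reverse proxy's access logs are forwarded to a custom table, a log alert could be based on a query like the following. The table and column names here (ProxyAccessLogs_CL, StatusCode_d) are hypothetical placeholders for whatever your ingestion pipeline produces:

// Hypothetical custom table and column names; substitute your own.
ProxyAccessLogs_CL
| where TimeGenerated > ago(5m)
| where StatusCode_d in (502, 503)
| summarize FailedResponses = count() by bin(TimeGenerated, 1m)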

OpenTelemetry

What is OpenTelemetry?

It's a new open-source standard for observability. Learn more at OpenTelemetry.

Why is Microsoft Azure Monitor investing in OpenTelemetry?

Microsoft is among the largest contributors to OpenTelemetry.

The key value propositions of OpenTelemetry are that it's vendor-neutral and provides consistent APIs/SDKs across languages.

Over time, we believe OpenTelemetry will enable Azure Monitor customers to observe applications written in languages beyond our supported languages. It also expands the types of data you can collect through a rich set of instrumentation libraries. Furthermore, OpenTelemetry SDKs tend to be more performant at scale than their predecessors, the Application Insights SDKs.

Finally, OpenTelemetry aligns with Microsoft's strategy to embrace open source.

What's the status of OpenTelemetry?

What is the "Azure Monitor OpenTelemetry Distro"?

You can think of it as a thin wrapper that bundles together all the OpenTelemetry components for a first class experience on Azure.

Why should I use the "Azure Monitor OpenTelemetry Distro"?

There are several advantages to using the Azure Monitor OpenTelemetry Distro over native OpenTelemetry from the community:

  • Reduces enablement effort
  • Brings in Azure-specific features such as:
    • Azure Active Directory (Azure AD) Authentication
    • Offline Storage and Automatic Retries
    • Preserves traces with service components using Application Insights SDKs
    • Live Metrics (future)

In the spirit of OpenTelemetry, we've designed the distro to be open and extensible. For example, you can add:

  • An OTLP exporter and send to a second destination simultaneously
  • Community instrumentation libraries beyond what's bundled in with the package

How can I test out the Azure Monitor OpenTelemetry Distro?

Check out our enablement docs for .NET, Java, JavaScript (Node.js), and Python.

Should I use OpenTelemetry or the Application Insights SDK?

It depends. Consider that the Azure Monitor OpenTelemetry Distro is still "Preview", and it's not quite at feature parity with the Application Insights SDKs.

What's the current release state of features within the Azure Monitor OpenTelemetry Distro?

OpenTelemetry feature support varies by language. The Azure Monitor OpenTelemetry Distro covers the following features, each at a different stage of support depending on the language:

  • Distributed tracing
  • Custom metrics
  • Standard metrics (accuracy currently affected by sampling)
  • Fixed-rate sampling
  • Offline storage and automatic retries
  • Exception reporting
  • Logs collection
  • Azure Active Directory authentication
  • Autopopulation of cloud role name and role instance on Azure
  • Live metrics
  • Autopopulation of user ID, authenticated user ID, and user IP
  • Manual override/set of operation name, user ID, or authenticated user ID
  • Adaptive sampling
  • Profiler
  • Snapshot Debugger

Can OpenTelemetry be used for web browsers?

Yes, but we don't recommend it and Azure doesn't support it. OpenTelemetry JavaScript is heavily optimized for Node.js. Instead, we recommend using the Application Insights JavaScript SDK.

When can we expect the OpenTelemetry SDK to be available for use in web browsers?

The availability timeline for the OpenTelemetry web SDK hasn't been determined yet. We're likely several years away from a browser SDK that will be a viable alternative to the Application Insights JavaScript SDK.

Can I test OpenTelemetry in a web browser today?

The OpenTelemetry web sandbox is a fork designed to make OpenTelemetry work in a browser. It's not yet possible to send telemetry to Application Insights. The SDK doesn't currently have defined general client events.

Is running Application Insights alongside competitor agents like AppDynamics, DataDog, and NewRelic supported?

No. This practice isn't something we plan to test or support, although our Distros allow you to export to an OTLP endpoint alongside Azure Monitor simultaneously.

Can I use preview builds in production environments?

Can I use the Azure Monitor Exporter as a standalone component?

Yes, we understand that some customers may want to instrument by using a "piecemeal" approach. However, the distro provides the easiest way to get started with the best experience on Azure.

What's the difference between manual and auto-instrumentation?

With manual instrumentation, you add the OpenTelemetry SDK or the Azure Monitor OpenTelemetry Distro to your application code and update that code as needed. With auto-instrumentation, telemetry is collected by attaching an agent or enabling a setting at runtime, without changes to your application code.

Can I use the OpenTelemetry Collector?

Some customers have begun to use the OpenTelemetry Collector as an agent alternative, even though Microsoft doesn't officially support an agent-based approach for application monitoring yet. In the meantime, the open-source community has contributed an OpenTelemetry Collector Azure Monitor Exporter that some customers are using to send data to Azure Monitor Application Insights.

We plan to support an agent-based approach in the future, but the details and timeline aren't available yet. Our objective is to provide a path for any OpenTelemetry-supported language to send to Azure Monitor via the OpenTelemetry Protocol (OTLP). This approach enables customers to observe applications written in languages beyond our supported languages.

What's the difference between OpenCensus and OpenTelemetry?

OpenCensus is the precursor to OpenTelemetry. Microsoft helped bring together OpenTracing and OpenCensus to create OpenTelemetry, a single observability standard for the world. The current production-recommended Python SDK for Azure Monitor is based on OpenCensus. Eventually, all Azure Monitor SDKs will be based on OpenTelemetry.

Container insights

What does "Other processes" represent under the Node view?

Other processes are intended to help you clearly understand the root cause of the high resource usage on your node. This information helps you to distinguish usage between containerized processes versus noncontainerized processes.

What are these other processes?

They're noncontainerized processes that run on your node.

How do we calculate this?

Other processes = Total usage from CAdvisor - Usage from containerized process

The other processes include:

  • Self-managed or managed Kubernetes noncontainerized processes.
  • Container run-time processes.
  • Kubelet.
  • System processes running on your node.
  • Other non-Kubernetes workloads running on node hardware or a VM.

Why don't I see Image and Name property values populated when I query the ContainerLog table?

For agent version ciprod12042019 and later, by default these two properties aren't populated for every log line to minimize cost incurred on log data collected. There are two options to query the table that include these properties with their values:

Option 1

Join other tables to include these property values in the results.

Modify your queries to include the Image and ImageTag properties from the ContainerInventory table by joining on the ContainerID property. You can include the Name property (as it previously appeared in the ContainerLog table) from the KubePodInventory table's ContainerName field by joining on the ContainerID property. We recommend this option.

The following sample query shows how to get these field values with joins.

// Query an hour's worth of logs
let startTime = ago(1h);
let endTime = now();
// Get the latest Image and ImageTag for every ContainerID during the time window
let ContainerInv = ContainerInventory
    | where TimeGenerated >= startTime and TimeGenerated < endTime
    | summarize arg_max(TimeGenerated, *) by ContainerID, Image, ImageTag
    | project-away TimeGenerated
    | project ContainerID1 = ContainerID, Image1 = Image, ImageTag1 = ImageTag;
// Get the latest Name for every ContainerID during the time window
let KubePodInv = KubePodInventory
    | where ContainerID != ""
    | where TimeGenerated >= startTime and TimeGenerated < endTime
    | summarize arg_max(TimeGenerated, *) by ContainerID2 = ContainerID, Name1 = ContainerName
    | project ContainerID2, Name1;
// Join the two to get a table that has Name, Image, and ImageTag. A left outer join is safer in case there are no KubePodInventory records or they're latent.
let ContainerData = ContainerInv
    | join kind=leftouter (KubePodInv) on $left.ContainerID1 == $right.ContainerID2;
// Join ContainerLog with the table above, project away the redundant columns, and rename the columns that were rewritten.
// A left outer join is safer so you don't lose log lines even when container metadata can't be found (because of latency, time skew between data types, and so on).
ContainerLog
| where TimeGenerated >= startTime and TimeGenerated < endTime
| join kind=leftouter (
    ContainerData
) on $left.ContainerID == $right.ContainerID2
| project-away ContainerID1, ContainerID2, Name, Image, ImageTag
| project-rename Name = Name1, Image = Image1, ImageTag = ImageTag1

Option 2

Reenable collection for these properties for every container log line.

If the first option isn't convenient because of query changes involved, you can reenable collecting these fields. Enable the setting log_collection_settings.enrich_container_logs in the agent config map as described in the data collection configuration settings.

Note

We don't recommend the second option for large clusters that have more than 50 nodes. It generates API server calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line collected.

Can I view metrics collected in Grafana?

Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from the Grafana dashboard repository. Use it to get started, and as a reference for learning how to query data from your monitored clusters for visualization in custom Grafana dashboards.

Can I monitor my AKS-engine cluster with Container insights?

Container insights supports monitoring container workloads deployed to AKS-engine (formerly known as ACS-engine) clusters hosted on Azure. For more information and an overview of steps required to enable monitoring for this scenario, see Using Container insights for Azure Kubernetes Service-engine.

Why don't I see data in my Log Analytics workspace?

If you're unable to see any data in the Log Analytics workspace at a certain time every day, you might have reached the default 500-MB limit or the cap specified to control the amount of data to collect daily. When the limit is met for the day, data collection stops and resumes only on the next day. To review your data usage and update to a different pricing tier based on your anticipated usage patterns, see Azure Monitor Logs pricing information.
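
To check how much data you're ingesting each day, you can query the Usage table in your workspace. The following query is a quick sketch, assuming the Quantity column reports volume in megabytes:

// Approximate billable data ingested per day over the past week (Quantity is in MB)
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000. by bin(TimeGenerated, 1d)
| order by TimeGenerated asc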

What are the container states specified in the ContainerInventory table?

The ContainerInventory table contains information about both stopped and running containers. The table is populated by a workflow inside the agent that queries Docker for all running and stopped containers and forwards that data to the Log Analytics workspace.
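
For example, a query along these lines (assuming the ContainerState column in ContainerInventory holds the state values) shows how many distinct containers were most recently reported in each state over the past day:

// Count distinct containers by their most recently reported state
ContainerInventory
| where TimeGenerated > ago(1d)
| summarize arg_max(TimeGenerated, ContainerState) by ContainerID
| summarize Containers = count() by ContainerState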

How do I resolve the "Missing Subscription registration" error?

If you receive the error "Missing Subscription registration for Microsoft.OperationsManagement," you can resolve it by registering the resource provider Microsoft.OperationsManagement in the subscription where the workspace is defined. For the steps, see Resolve errors for resource provider registration.

Is there support for Kubernetes RBAC-enabled Azure Kubernetes Service clusters?

The Container Monitoring solution doesn't support Kubernetes role-based access control (RBAC), but it's supported with Container insights. The solution details page might not show the right information in the panes that show data for these clusters.

How do I enable log collection for containers in the kube-system namespace through Helm?

The log collection from containers in the kube-system namespace is disabled by default. You can enable log collection by setting an environment variable on Azure Monitor Agent. See the Container insights GitHub page.

How do I update Azure Monitor Agent in Container insights to the latest released version?

To learn how to upgrade the agent, see Agent management.

Why are log lines larger than 16 KB split into multiple records in Log Analytics?

The agent uses the Docker JSON file logging driver to capture the stdout and stderr of containers. This logging driver splits log lines larger than 16 KB into multiple lines when they're copied from stdout or stderr to a file.

How do I enable multi-line logging?

Container insights now supports multi-line logging (preview). For more information on how to enable it, see the Multi-line logging in Container insights documentation.

Alternatively, you can configure all the services to write logs in JSON format so that Docker/Moby writes them as a single line. For example, you can wrap your log as a JSON object, as shown in the following example for a sample Node.js application:

console.log(JSON.stringify({
      "Hello": "This example has multiple lines:",
      "Docker/Moby": "will not break this into multiple lines",
      "and you'll receive": "all of them in log analytics",
      "as one": "log entry"
      }));

This data looks like the following example in Azure Monitor Logs when you query for it:

LogEntry : {"Hello": "This example has multiple lines:", "Docker/Moby": "will not break this into multiple lines", "and you'll receive": "all of them in log analytics", "as one": "log entry"}

For a detailed look at the issue, see this GitHub page.
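
After the log line is a single JSON object, you can parse it back into columns at query time. The following query is a minimal sketch that assumes the property names used in the sample above:

// Parse the JSON-wrapped log entries back into columns
ContainerLog
| where TimeGenerated > ago(1h)
| extend Parsed = parse_json(LogEntry)
| where isnotempty(Parsed.Hello)
| project TimeGenerated, Hello = tostring(Parsed.Hello), AsOne = tostring(Parsed["as one"])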

How do I resolve Azure Active Directory errors when I enable live logs?

You might see the following error: "The reply url specified in the request doesn't match the reply urls configured for the application: '<application ID>'." For the solution, see View container data in real time with Container insights.

Why can't I upgrade a cluster after onboarding?

Here's the scenario: You enabled Container insights for an Azure Kubernetes Service cluster. Then you deleted the Log Analytics workspace where the cluster was sending its data. Now when you attempt to upgrade the cluster, it fails. To work around this issue, you must disable monitoring and then reenable it by referencing a different valid workspace in your subscription. When you try to perform the cluster upgrade again, it should process and complete successfully.

Which ports and domains do I need to open or allow for the agent?

See the Network firewall requirements for the proxy and firewall configuration information that's required for the containerized agent with Azure, Azure US Government, and Azure China 21Vianet clouds.

Is there support for collecting Kubernetes audit logs for ARO clusters?

No. Container insights doesn't support collection of Kubernetes audit logs.

Why don't I see Normal event types when I query the KubeEvents table?

By default, Normal event types aren't collected unless the collect_all_kube_events ConfigMap setting is enabled. If you need to collect Normal events, enable the collect_all_kube_events setting in the container-azm-ms-agentconfig ConfigMap. For information on how to configure the ConfigMap, see Configure agent data collection for Container insights.
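
To confirm which event types your cluster is currently sending, a query like the following can help. It assumes the KubeEventType column holds the Normal/Warning classification:

// Count collected Kubernetes events by type over the past day
KubeEvents
| where TimeGenerated > ago(1d)
| summarize Events = count() by KubeEventType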

VM insights

Can I onboard to an existing workspace?

If your VMs are already connected to a Log Analytics workspace, you can continue to use that workspace when you onboard to VM insights, provided it's in one of the supported regions.

Can I onboard to a new workspace?

If your VMs aren't currently connected to an existing Log Analytics workspace, you need to create a new workspace to store your data. A new default workspace is created automatically if you configure a single Azure VM for VM insights through the Azure portal.

If you choose to use the script-based method, these steps are described in Enable VM insights by using Azure PowerShell or Azure Resource Manager templates.

What do I do if my VM is already reporting to an existing workspace?

If you're already collecting data from your VM, you might have already configured it to report data to an existing Log Analytics workspace. If that workspace is in one of our supported regions, you can enable VM insights using that preexisting workspace. If the workspace you're already using isn't in one of our supported regions, you can't onboard to VM insights at this time. We're working to support more regions.

Why did my VM fail to onboard?

The following steps occur when you onboard an Azure VM from the Azure portal:

  • A default Log Analytics workspace is created, if that option was selected.
  • The Log Analytics agent is installed on Azure VMs by using a VM extension, if it's required.
  • The VM insights Map Dependency agent is installed on Azure VMs by using an extension, if it's required.

During the onboarding process, we check the status of each of the preceding steps and return a notification status to you in the portal. Configuring the workspace and installing the agent typically takes 5 to 10 minutes. Viewing monitoring data in the portal takes an extra 5 to 10 minutes.

If you've initiated onboarding and see messages indicating that the VM needs to be onboarded, allow up to 30 minutes for the VM to complete the process.

Why don't I see some or any data in the performance charts for my VM?

If you don't see performance data in the disk table or in some of the performance charts, your performance counters might not be configured in the workspace. To resolve this issue, run this PowerShell script.
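
Before running the script, you can check which counters the workspace is already receiving. This sketch assumes the counters land in the classic Perf table:

// List the performance counters currently being collected
Perf
| where TimeGenerated > ago(1h)
| summarize Samples = count() by ObjectName, CounterName
| order by ObjectName asc, CounterName asc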

How is the VM insights Map feature different from Service Map?

The VM insights Map feature is based on Service Map, but it has the following differences:

  • The Map view can be accessed from the VM pane and from VM insights under Azure Monitor.
  • The connections in the map are now clickable and display a view of the connection metric data in the side pane for the selected connection.
  • A new API creates the maps to better support more complex maps.
  • Monitored VMs are now included in the client group node. The donut chart shows the proportion of monitored versus unmonitored VMs in the group. It can also be used to filter the list of machines when the group is expanded.
  • Monitored VMs are now included in the server port group nodes. The donut chart shows the proportion of monitored versus unmonitored machines in the group. It can also be used to filter the list of machines when the group is expanded.
  • The map style was updated to be more consistent with App Map from Application Insights.
  • The side panes have been updated and don't have the full set of integrations that were supported in Service Map: Update Management, Change Tracking, Security, and Service Desk.
  • The option for choosing groups and machines to map was updated and now supports subscriptions, resource groups, Azure Virtual Machine Scale Sets, and cloud services.
  • You can't create new Service Map machine groups in the VM insights Map feature.

Why do my performance charts show dotted lines?

Dotted lines can occur for a few reasons. In cases where there's a gap in data collection, the lines appear as dotted.

The default data sampling frequency is every 60 seconds. If you modified the collection frequency for the enabled performance counters, the chart can show dotted lines when you choose a narrow time range and your sampling interval is longer than the bucket size used in the chart.

For example, the sampling frequency is every 10 minutes, and each bucket on the chart is 5 minutes. Choosing a wider time range to view should cause the chart lines to appear as solid lines rather than dotted lines in this case.

Are groups supported with VM insights?

Yes. After you install the Dependency agent, we collect information from the VMs to display groups based on subscription, resource group, virtual machine scale sets, and cloud services.

If you've been using Service Map and have created machine groups, these groups are displayed too. Computer groups also appear in the groups filter if you created them for the workspace you're viewing.

How do I see the information for what's driving the 95th percentile line in the aggregate performance charts?

By default, the list is sorted to show you the VMs that have the highest value for the 95th percentile for the selected metric. The Available memory chart is an exception. It shows the machines with the lowest value of the 5th percentile. Selecting the chart opens the Top N List view with the appropriate metric selected.
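
If you prefer to reproduce the list in a log query, the following sketch shows the idea. It assumes VM insights CPU data is stored in InsightsMetrics under the Processor namespace with the UtilizationPercentage counter name:

// Machines with the highest 95th percentile CPU utilization over the past hour
InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize P95Cpu = percentile(Val, 95) by Computer
| top 10 by P95Cpu desc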

How does the Map feature handle duplicate IPs across different virtual networks and subnets?

If you're duplicating IP ranges either with VMs or Azure Virtual Machine Scale Sets across subnets and virtual networks, VM insights Map might display incorrect information. This issue is known. We're investigating options to improve this experience.

Does the Map feature support IPv6?

The Map feature currently only supports IPv4. We're investigating support for IPv6. We also support IPv4 that's tunneled inside IPv6.

When I load a map for a resource group or other large group, why is the map difficult to view?

Although we've made improvements to Map to handle large and complex configurations, we realize that a map can contain many nodes, connections, and nodes working as a cluster, which can make it difficult to view. We're committed to continuing to enhance support to increase scalability.

Why does the network chart on the Performance tab look different than the network chart on the Azure VM overview page?

The overview page for an Azure VM displays charts based on the host's measurement of activity in the guest VM. The network chart on the Azure VM overview only displays network traffic that will be billed. Inter-virtual network traffic isn't included. The data and charts shown for VM insights are based on data from the guest VM. The network chart displays all TCP/IP traffic that's inbound and outbound to that VM, including inter-virtual network traffic.
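
To look at the guest-level numbers behind the VM insights chart yourself, you can query InsightsMetrics. This sketch assumes the network counters are stored under the Network namespace with the ReadBytesPerSecond and WriteBytesPerSecond counter names; adjust the names if your workspace differs:

// Average guest-observed network throughput per VM over the past hour
InsightsMetrics
| where TimeGenerated > ago(1h)
| where Namespace == "Network" and Name in ("ReadBytesPerSecond", "WriteBytesPerSecond")
| summarize AvgBytesPerSecond = avg(Val) by Computer, Name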

How is response time measured for data stored in VMConnection and displayed in the connection panel and workbooks?

Response time is an approximation. Because we don't instrument the code of the application, we don't know when a request begins and when the response arrives. Instead, we observe data being sent on a connection and coming back on that connection.

Our agent keeps track of the sends and receives and attempts to pair them. A sequence of sends, followed by a sequence of receives, is interpreted as a request/response pair. The timing between these operations is the response time. It includes the network latency and the server processing time.

This approximation works well for request/response-based protocols, where a single request goes out on the connection and a single response arrives. This is the case for HTTP(S) without pipelining, but other protocols don't satisfy it.
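
To see these approximated response times in the raw data, you can query VMConnection directly. The column names used below (Responses, ResponseTimeSum, ResponseTimeMax, assumed to be reported in microseconds) are assumptions about the connection metrics schema; adjust them to match your workspace:

// Approximate average and maximum response time per destination over the past hour
VMConnection
| where TimeGenerated > ago(1h)
| where Responses > 0
| summarize AvgResponseTimeUs = 1.0 * sum(ResponseTimeSum) / sum(Responses), MaxResponseTimeUs = max(ResponseTimeMax) by RemoteIp, DestinationPort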

Are there limitations if I'm on the Log Analytics Free pricing plan?

If you've configured Azure Monitor with a Log Analytics workspace on the Free pricing tier, the VM insights Map feature supports only five machines connected to the workspace. If you have five VMs connected to a free workspace, disconnect one of the VMs, and later connect a new VM, the new VM isn't monitored or reflected on the Map page.

Under this condition, you're prompted with the Try Now option when you open the VM and select Insights from the pane on the left, even though VM insights has already been installed on the VM. However, you're not prompted with options as would normally occur if this VM weren't onboarded to VM insights.

SQL Insights (preview)

What versions of SQL Server are supported?

We support SQL Server 2012 and all newer versions. See Supported versions.

What SQL resource types are supported?

  • Azure SQL Database
  • Azure SQL Managed Instance
  • SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the SQL virtual machine provider)
  • Azure Virtual Machines (SQL Server running on virtual machines not registered with the SQL virtual machine provider)

See Supported versions for more information and for details about scenarios with no support or limited support.

What operating systems for the virtual machine running SQL Server are supported?

We support all operating systems specified by the Windows and Linux documentation for SQL Server on Azure Virtual Machines.

What operating systems for the monitoring virtual machine are supported?

Ubuntu 18.04 is currently the only operating system supported for the monitoring virtual machine.

Where is the monitoring data stored in Log Analytics?

All the monitoring data is stored in the InsightsMetrics table. The Origin column has the value solutions.azm.ms/telegraf/SqlInsights. The Namespace column has values that start with sqlserver_.
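
For example, to browse what's being collected, you can filter InsightsMetrics on that Origin value. A minimal sketch:

// List the SQL insights metric namespaces and counters present in the workspace
InsightsMetrics
| where Origin == "solutions.azm.ms/telegraf/SqlInsights"
| summarize SampleCount = count() by Namespace, Name
| order by Namespace asc, Name asc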

How often is data collected?

The frequency of data collection is customizable. See Data collected by SQL Insights (preview) for information on the default frequencies. See Create SQL monitoring profile for instructions on customizing frequencies.

Next steps

If your question isn't answered here, see more questions and answers at the following forums:

For general feedback on Azure Monitor, see the feedback forum.