This Microsoft FAQ is a list of commonly asked questions about Azure Monitor. If you have any other questions, go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so that it can be found quickly and easily.
General
What is Azure Monitor?
Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist you in quickly identifying and responding to critical situations that may affect your application.
What's the difference between Azure Monitor, Log Analytics, and Application Insights?
In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log Analytics and Application Insights haven't changed, although some features have been rebranded to Azure Monitor in order to better reflect their new scope. The log data engine and query language of Log Analytics is now referred to as Azure Monitor Logs. See Azure Monitor terminology updates.
What does Azure Monitor cost?
Features of Azure Monitor that are automatically enabled such as collection of metrics and activity logs are provided at no cost. There's a cost associated with other features such as log queries and alerting. See the Azure Monitor pricing page for detailed pricing information.
How do I enable Azure Monitor?
Azure Monitor is enabled the moment that you create a new Azure subscription, and Activity log and platform metrics are automatically collected. Create diagnostic settings to collect more detailed information about the operation of your Azure resources, and add monitoring solutions and insights to provide extra analysis on collected data for particular services.
How do I access Azure Monitor?
Access all Azure Monitor features and data from the Monitor menu in the Azure portal. The Monitoring section of the menu for different Azure services provides access to the same tools with data filtered to a particular resource. Azure Monitor data is also accessible for various scenarios using CLI, PowerShell, and a REST API.
Is there an on-premises version of Azure Monitor?
No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure Monitor can monitor resources that are on-premises and in other clouds.
Can Azure Monitor monitor on-premises resources?
Yes, in addition to collecting monitoring data from Azure resources, Azure Monitor can collect data from virtual machines and applications in other clouds and on-premises. See Sources of monitoring data for Azure Monitor.
Does Azure Monitor integrate with System Center Operations Manager?
You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See Connect Operations Manager to Azure Monitor.
What IP addresses does Azure Monitor use?
See IP addresses used by Application Insights and Log Analytics for a listing of the IP addresses and ports required for agents and other external resources to access Azure Monitor.
Monitoring data
Where does Azure Monitor get its data?
Azure Monitor collects data from various sources including logs and metrics from Azure platform and resources, custom applications, and agents running on virtual machines. Other services such as Microsoft Defender for Cloud and Network Watcher collect data into a Log Analytics workspace so it can be analyzed with Azure Monitor data. You can also send custom data to Azure Monitor using the REST API for logs or metrics. See Sources of monitoring data for Azure Monitor.
What data is collected by Azure Monitor?
Azure Monitor collects data from various sources into logs or metrics. Each type of data has its own relative advantages, and each supports a particular set of features in Azure Monitor. There's a single metrics database for each Azure subscription, while you can create multiple Log Analytics workspaces to collect logs depending on your requirements. See Azure Monitor data platform.
Is there a maximum amount of data that I can collect in Azure Monitor?
There's no limit to the amount of metric data you can collect, but this data is stored for a maximum of 93 days. See Retention of Metrics. There's no limit on the amount of log data that you can collect, but it may be affected by the pricing tier you choose for the Log Analytics workspace. See pricing details.
How do I access data collected by Azure Monitor?
Insights and solutions provide a custom experience for working with data stored in Azure Monitor. You can work directly with log data using a log query written in Kusto Query Language (KQL). In the Azure portal, you can write and run queries and interactively analyze data using Log Analytics. Analyze metrics in the Azure portal with the Metrics Explorer. See Analyze log data in Azure Monitor and Getting started with Azure Metrics Explorer.
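As a minimal sketch, a log query run in Log Analytics might count heartbeat records per computer over the last day (table and column names follow the standard Heartbeat schema):

```kusto
// Count heartbeat records per computer over the last 24 hours
Heartbeat
| where TimeGenerated > ago(1d)
| summarize HeartbeatCount = count() by Computer
| order by HeartbeatCount desc
```

You can run a query like this from the Logs pane in the Azure portal, adjusting the table and time range to the data you've collected.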
Why am I seeing duplicate records in Azure Monitor Logs?
You may on occasion notice duplicate records in Azure Monitor Logs. This duplication is typically from one of the following two conditions.
- Components in the pipeline have retries to ensure reliable delivery at the destination. Occasionally, this capability may result in duplicates for a small percentage of telemetry items.
- If the duplicate records come from a virtual machine, then you may have both the Log Analytics agent and Azure Monitor agent installed. If you still need the Log Analytics agent installed, then configure the Log Analytics workspace to no longer collect data that’s also being collected by the data collection rule used by Azure Monitor agent.
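To check whether a machine is reporting through more than one agent, a query like the following can help (this assumes the standard Heartbeat schema, where the Category column distinguishes the Azure Monitor agent from the legacy agents):

```kusto
// List computers whose heartbeats arrive from more than one agent type
Heartbeat
| where TimeGenerated > ago(24h)
| summarize AgentTypes = make_set(Category) by Computer
| where array_length(AgentTypes) > 1
```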
Solutions and insights
What is an insight in Azure Monitor?
Insights provide a customized monitoring experience for particular Azure services. They use the same metrics and logs as other features in Azure Monitor but may collect extra data and provide a unique experience in the Azure portal. See Insights in Azure Monitor.
To view insights in the Azure portal, see the Insights section of the Monitor menu or the Monitoring section of the service's menu.
What is a solution in Azure Monitor?
Monitoring solutions are packaged sets of logic for monitoring a particular application or service based on Azure Monitor features. They collect log data in Azure Monitor and provide log queries and views for their analysis using a common experience in the Azure portal. See Monitoring solutions in Azure Monitor.
To view solutions in the Azure portal, click More in the Insights section of the Monitor menu. Click Add to add more solutions to the workspace.
Logs
What's the difference between Azure Monitor Logs and Azure Data Explorer?
Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. Azure Monitor Logs is built on top of Azure Data Explorer and uses the same Kusto Query Language (KQL) with some minor differences. See Azure Monitor log query language differences.
How do I retrieve log data?
All data is retrieved from a Log Analytics workspace using a log query written using Kusto Query Language (KQL). You can write your own queries or use solutions and insights that include log queries for a particular application or service. See Overview of log queries in Azure Monitor.
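For example, a simple KQL query against the AzureActivity table (populated if you route the Activity log to the workspace; column names follow the current AzureActivity schema) might look like:

```kusto
// Summarize Activity log operations by caller over the last 7 days
AzureActivity
| where TimeGenerated > ago(7d)
| summarize OperationCount = count() by Caller, OperationNameValue
| top 10 by OperationCount
```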
Can I delete data from a Log Analytics workspace?
Data is removed from a workspace according to its retention period. You can delete specific data for privacy or compliance reasons. See How to export and delete private data for more information.
Is Log Analytics storage immutable?
Data in database storage can't be altered once ingested, but it can be deleted via the purge API for deleting private data. Some certifications require that data be kept immutable, unable to be changed or deleted in storage. You can achieve data immutability by using data export to a storage account that's configured as immutable storage.
What is a Log Analytics workspace?
All log data collected by Azure Monitor is stored in a Log Analytics workspace. A workspace is essentially a container where log data is collected from various sources. You may have a single Log Analytics workspace for all your monitoring data, or you may have requirements for multiple workspaces. See Design a Log Analytics workspace configuration.
Can you move an existing Log Analytics workspace to another Azure subscription?
You can move a workspace between resource groups or subscriptions but not to a different region. See Move a Log Analytics workspace to different subscription or resource group.
Why can't I see Query Explorer and Save buttons in Log Analytics?
The Query Explorer, Save, and New alert rule buttons aren't available when the query scope is set to a specific resource. To create alerts or to save or load a query, Log Analytics must be scoped to a workspace. To open Log Analytics in workspace context, select Logs from the Azure Monitor menu. The last used workspace is selected, but you can select any other workspace. See Log query scope and time range in Azure Monitor Log Analytics.
Why am I getting the error "Register resource provider `Microsoft.Insights` for this subscription to enable this query" when opening Log Analytics from a VM?
Many resource providers are automatically registered, but you may need to manually register some resource providers. The scope for registration is always the subscription. See Resource providers and types for more information.
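As a sketch, you can register the resource provider yourself with Azure PowerShell; for example:

```powershell
# Register the Microsoft.Insights resource provider for the current subscription
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Insights'

# Check the registration state afterward
Get-AzResourceProvider -ProviderNamespace 'Microsoft.Insights' |
    Select-Object ProviderNamespace, RegistrationState
```

Registration can take a few minutes to complete; re-run the second command until RegistrationState shows Registered.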
Why am I getting no access error message when opening Log Analytics from a VM?
To view VM logs, you need read permission to the workspace that stores the VM logs. If you don't have it, your administrator must grant you the permissions in Azure.
Why can't I create AzureDiagnostics table via template, or modify schema via API?
AzureDiagnostics is a unique table that's created by the Log Analytics service when data is ingested, and its schema can't be configured. A retention policy can be applied after the table is generated.
Metrics
Why are metrics from the guest OS of my Azure virtual machine not showing up in Metrics explorer?
Platform metrics are collected automatically for Azure resources. You must perform some configuration though to collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and configure the Azure Monitor sink as described in Install and configure Microsoft Azure diagnostics extension (WAD). For Linux, install the Telegraf agent as described in Collect custom metrics for a Linux VM with the InfluxData Telegraf agent.
Prometheus
What is an Azure Monitor workspace?
Prometheus metrics data collected by the Azure Monitor managed service for Prometheus is stored in an Azure Monitor workspace. It's essentially a container where Prometheus metrics data from a variety of sources is stored. You may have a single Azure Monitor workspace for all your Prometheus metrics data, or you may have requirements for multiple workspaces. See Azure Monitor workspace overview for more information.
What is the difference between an Azure Monitor workspace and a Log Analytics workspace?
An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. Azure Monitor workspaces will eventually contain all metric data collected by Azure Monitor, including native metrics. Currently, the only data hosted by an Azure Monitor workspace is Prometheus metrics. See Azure Monitor workspace overview for additional information.
How do I retrieve Prometheus metrics data?
All data is retrieved from an Azure Monitor workspace using queries written in Prometheus Query Language (PromQL). You can write your own queries or use open source queries and Grafana dashboards that include PromQL queries. See the Prometheus project for more details.
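For illustration, a typical PromQL query over cluster metrics (this assumes the standard node exporter metric names, which the managed add-on scrapes by default) might be:

```promql
# Non-idle CPU usage rate per node, averaged over the last 5 minutes
sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```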
Can I delete Prometheus metrics data from an Azure Monitor workspace?
Data is removed from the Azure Monitor workspace according to its data retention period, which is 18 months.
Can I view my Prometheus metrics in Azure Monitor Metrics explorer?
Metrics explorer in Azure Monitor doesn't currently support visualizing Prometheus metric data. Use Azure Managed Grafana to visualize your metrics in Azure Monitor managed service for Prometheus.
Can I use Azure Managed Grafana in a different region than my Azure Monitor workspace and Managed Prometheus?
Yes, when using Azure Monitor managed service for Prometheus you can create your Azure Monitor workspace in any of the supported regions. Your Azure Kubernetes Service clusters can also be in any region and send data into your Managed Prometheus in a different region. Azure Managed Grafana can also be in a different region than where you have created your Azure Monitor workspace.
When using Managed Prometheus can I store data for more than one cluster in an Azure Monitor workspace?
Yes, Azure Monitor managed service for Prometheus is intended to enable scenarios where you can store data from several Azure Kubernetes Service clusters in a single Azure Monitor workspace. See Azure Monitor workspace overview for additional information.
What types of resources can send Prometheus metrics to Managed Prometheus?
Our collector can be used on Azure Kubernetes Service clusters. It's installed as a managed add-on and can be configured to collect the data you want, and to run as a replica set only or as a replica set plus a collector on each node in the cluster. You can also configure remote write on Kubernetes clusters running in Azure, another cloud, or on-premises by following our instructions for enabling remote write.
Does enabling Managed Prometheus on my AKS cluster also enable Container Insights?
You have options for how you collect your Prometheus metrics. If you use the Azure portal to enable Prometheus metrics collection and install the AKS add-on from the Azure Monitor workspace UX, it won't enable Container Insights and the collection of log data. When you go to the Insights page on your AKS cluster, you'll be prompted to enable Container Insights, which collects log data.
If you use the Azure portal to enable Prometheus metrics collection and install the AKS add-on from the Insights page of your AKS cluster, it enables both log collection and Prometheus metrics collection into Managed Prometheus.
Change Analysis
Does using Change Analysis incur cost?
You can use Change Analysis at no extra cost. Register the Microsoft.ChangeAnalysis resource provider, and everything supported by Change Analysis becomes available to you.
How can I enable Change Analysis for a web application?
Enable Change Analysis for web application in-guest changes by using the Diagnose and solve problems tool.
Alerts
What is an alert in Azure Monitor?
Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. There are multiple kinds of alerts:
- Metric - Metric value exceeds a threshold.
- Log query - Results of a log query match defined criteria.
- Activity log - Activity log event matches defined criteria.
- Web test - Results of availability test match defined criteria.
What is an action group?
An action group is a collection of notifications and actions that can be triggered by an alert. Multiple alerts can use a single action group allowing you to use common sets of notifications and actions. See Create and manage action groups in the Azure portal.
What is an action rule?
An action rule allows you to modify the behavior of a set of alerts that match certain criteria. For example, you can disable alert actions during a maintenance window. You can also apply an action group to a set of alerts rather than applying it directly to the alert rules. See Action rules.
Agents
Does Azure Monitor require an agent?
An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual machines can be located in Azure, another cloud environment, or on-premises. See Overview of the Azure Monitor agents.
What's the difference between the Azure Monitor agents?
The Azure Monitor Agent is the new, improved agent that consolidates features from all the other legacy monitoring agents while providing extra benefits, like centralized data collection, filtering, multihoming, and more. See Overview of the Azure Monitor agents.
The legacy agents include:
- The Azure Diagnostics extension is for Azure virtual machines and collects data to Azure Monitor Metrics, Azure Storage, and Azure Event Hubs.
- The Log Analytics agent is for virtual machines in Azure, another cloud environment, or on-premises and collects data to Azure Monitor Logs. The legacy agents will be retired in August 2024.
Does my agent traffic use my ExpressRoute connection?
Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See ExpressRoute documentation for a description of the different types of ExpressRoute traffic.
How can I confirm that the Log Analytics agent is able to communicate with Azure Monitor?
From Control Panel on the agent computer, select Security & Settings, Microsoft Monitoring Agent. Under the Azure Log Analytics (OMS) tab, a green check mark icon confirms that the agent is able to communicate with Azure Monitor. A yellow warning icon means the agent is having issues. One common reason is the Microsoft Monitoring Agent service has stopped. Use service control manager to restart the service.
How do I stop the Log Analytics agent from communicating with Azure Monitor?
For agents connected to Log Analytics directly, open the Control Panel and select Microsoft Monitoring Agent. Under the Azure Log Analytics (OMS) tab, remove all workspaces listed. In System Center Operations Manager, remove the computer from the Log Analytics managed computers list. Operations Manager updates the configuration of the agent to no longer report to Log Analytics.
How much data is sent per agent?
The amount of data sent per agent depends on:
- The solutions you have enabled
- The number of logs and performance counters being collected
- The volume of data in the logs
See Analyze usage in Log Analytics workspace for details.
For computers that are able to run the WireData agent, use the following query to see how much data is being sent:
WireData
| where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
| where Direction == "Outbound"
| summarize sum(TotalBytes) by Computer
How much network bandwidth is used by the Microsoft Management Agent (MMA) when sending data to Azure Monitor?
Bandwidth is a function of the amount of data sent. Data is compressed as it's sent over the network.
How can I be notified when data collection from the Log Analytics agent stops?
Use the steps described in create a new log alert to be notified when data collection stops. Use the following settings for the alert rule:
- Define alert condition: Specify your Log Analytics workspace as the resource target.
- Alert criteria
- Signal Name: Custom log search
- Search query:
Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)
- Alert logic: Based on number of results, Condition Greater than, Threshold value 0
- Evaluated based on: Period (in minutes) 30, Frequency (in minutes) 10
- Define alert details
- Name: Data collection stopped
- Severity: Warning
Specify an existing or new Action Group so that when the log alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.
What are the firewall requirements for Log Analytics agents?
See Network firewall requirements for details on firewall requirements.
Azure Monitor agent
Why should I use the AMA or migrate from Log Analytics agent (MMA) to AMA?
The AMA replaces the Log Analytics agent, the Azure Diagnostics extension, and the Telegraf agent. The AMA offers a higher events-per-second (EPS) rate with a lower footprint, and provides enhanced filtering features plus scalable deployment management and configuration using DCRs and Azure Policy.
While the AMA hasn't yet reached full parity with the MMA, we continue to add features and support and the MMA will be retired on August 31, 2024.
For more information, see the Azure Monitor Agent overview.
What is the upgrade path from Log Analytics agents to Azure Monitor Agent? How do we migrate?
What's the upgrade path from Log Analytics Agent (MMA) to Azure Monitor Agent (AMA) for monitoring System Center Operations Manager? Can we use AMA for System Center Operations Manager scenarios?
Here's how AMA impacts the two System Center Operations Manager related monitor scenarios:
- Scenario 1: Monitoring the Windows operating system of System Center Operations Manager. The upgrade path is same as any other machine, wherein you can migrate from MMA (versions 2016, 2019) to AMA as soon as your required parity features are available on AMA.
- Scenario 2: Onboarding/connecting System Center Operations Manager to Log Analytics workspaces. This is enabled via a System Center Operations Manager connector for Log Analytics/Azure Monitor; neither MMA nor AMA needs to be installed on the Operations Manager management server. As such, there's no impact to this use case from an AMA perspective.
Will the new Azure Monitor agent support data collection for the various Log Analytics solutions and Azure services such as Microsoft Defender for Cloud and Microsoft Sentinel?
Review the list of AMA extensions currently available in preview. These are the same solutions and services that are now available using the new Azure Monitor agent instead. You may see more extensions installed for a solution or service to collect extra data or to perform transformation or processing as required, with AMA then routing the final data to Azure Monitor.
Which Log Analytics solutions are supported on the new Azure Monitor Agent?
How can I collect Windows security events using the new Azure Monitor Agent?
There are two ways you can collect Security events using the new agent, when sending to a Log Analytics workspace:
- You can use AMA to natively collect Security Events, same as other Windows Events. These flow to the 'Event' table in your Log Analytics workspace.
- If you have Sentinel enabled on the workspace, the Security Events flow via AMA into the 'SecurityEvent' table instead (same as using Log Analytics Agent). This will always require the solution to be enabled first.
Can the Azure Monitor Agent and Log Analytics Agent co-exist side-by-side?
Yes they can, but with certain considerations. Read more about agent coexistence.
Will I duplicate events if I use the Azure Monitor agent and the Log Analytics agent on the same machine?
If you're collecting the same events with both agents, there will be duplication. For example, the legacy agent could be collecting data defined in the workspace configuration that's also collected by a data collection rule. Or you might be collecting security events with the legacy agent while also enabling Windows Security Events with the AMA connector in Microsoft Sentinel.
You should limit duplicate events to the period while you transition from one agent to the other. After you've fully tested the DCR and verified its data collection, disable collection for the workspace and disconnect any MMA data connectors.
Is the Azure Monitor Agent at parity with the Log Analytics agents?
Review current limitations of AMA when compared with the Log Analytics agents.
Does the Azure Monitor Agent support non-Azure environments (other clouds, on-premises)?
Both on-premises machines and machines connected to other clouds are supported for servers today, once you have the Azure Arc agent installed. For purposes of running AMA and DCR, the Arc requirement comes at no extra cost or resource consumption, since the Arc agent is only used as an installation mechanism and you need not enable the paid management features if you don’t wish to use them.
Does the Azure Monitor Agent support private links?
Yes it does, via Data Collection Endpoints created and added to an Azure Monitor Private Link Scope (AMPLS). Walk through the setup steps.
Does AMA support AuditD logs on linux, or AUOMS?
Yes, but you need to onboard to Defender for Cloud (previously Azure Security Center) service available as an extension to AMA, which collects Linux auditd logs via AUOMS.
Is Azure Arc required for AAD-joined machines?
For AAD-joined (or hybrid AAD-joined) machines running Windows 10 or 11 (client OS), you don't require Arc to be installed. Instead, you can use the Windows MSI installer for AMA, currently available in preview.
Why do I need to install the Azure Arc Connected Machine agent to use AMA?
The AMA authenticates to your workspace via managed identity, which is created when you install the Connected Machine agent. Managed Identity is a more secure and manageable authentication solution from Azure. The legacy Log Analytics agent authenticates using the workspace ID and key instead, and therefore did not need Azure Arc.
What impact does installing the Azure Arc Connected Machine agent have on my non-Azure machine?
There's no impact to the machine once the Azure Arc agent is installed. It uses minimal system and network resources and is designed to have a low footprint on the host where it runs.
What types of machines does the new Azure Monitor Agent support?
You can install it directly on virtual machines, Virtual Machine Scale Sets, and Arc-enabled servers. You can also install it on devices (workstations, desktops) running Windows 10 or 11 using the Windows MSI installer for AMA, currently available in preview.
Can we filter events using event ID, that is, more granular event filtering with the new Azure Monitor Agent?
Yes. You can use XPath queries to filter Windows event logs. Learn more
For performance counters, you can specify specific counters you wish to collect, and exclude ones you don’t need.
For syslog on Linux, you can choose Facilities and log level for each facility to collect.
Does the new Azure Monitor agent support sending data to Event Hubs and Azure Storage Accounts?
Not yet, but the new agent, along with Data Collection Rules, will support sending data to both Event Hubs and Azure Storage accounts in the future as AMA converges with the Diagnostics extensions.
Does the new Azure Monitor agent have hardening support for Linux?
Hardening support for Linux isn't available yet.
What roles do I need to create a DCR that collects events from my servers?
If I create DCRs that contain the same event ID and associate it to the same VM, will the events be duplicated?
Yes. To avoid duplication, please make sure the event selection you make in your Data Collection Rules doesn't contain duplicate events.
How can I validate my XPATH queries on the AMA?
Use the Get-WinEvent PowerShell cmdlet -FilterXPath parameter to test the validity of an XPath query. For more information, see the tip provided in the Windows agent-based connections instructions.
The Get-WinEvent PowerShell cmdlet supports up to 23 expressions, whereas Azure Monitor DCRs support up to 20. Also, the < and > characters must be encoded as &lt; and &gt; in your DCR.
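As a quick local check, a sketch of testing an XPath filter with Get-WinEvent (the log name and event level here are arbitrary examples):

```powershell
# Return up to five error-level (Level=2) events from the Application log
# that match the XPath filter, confirming the filter expression is valid
Get-WinEvent -LogName 'Application' -FilterXPath '*[System[Level=2]]' -MaxEvents 5
```

If the XPath expression is malformed, the cmdlet returns an error rather than events, which tells you to fix the filter before putting it in a DCR.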
Visualizations
Why can't I see View Designer?
View Designer is only available for users assigned with Contributor permissions or higher in the Log Analytics workspace.
Application Insights
Configuration problems
I'm having trouble setting up my:
I get no data from my server:
How many Application Insights resources should I deploy?
Can I use Application Insights with ...?
Is it free?
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each month free of charge. The free allowance is large enough to cover development, and publishing an app for a few users. You can set a cap to prevent more than a specified amount of data from being processed.
Larger volumes of telemetry are charged by the GB. We provide some tips on how to limit your charges.
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It's suitable if you want to use Continuous Export on a large scale.
How much does it cost?
- Open the Usage and estimated costs page in an Application Insights resource. There's a chart of recent usage. You can set a data volume cap, if you want.
- To see your bills across all resources:
- Open the Azure portal
- Search for "Cost Management" and use the Cost analysis pane to see forecasted costs.
- Search for "Cost Management and Billing" and open the Billing scopes pane to see current charges across subscriptions.
What does Application Insights modify in my project?
The details depend on the type of project. For a web application:
- Adds these files to your project:
- ApplicationInsights.config
- ai.js
- Installs these NuGet packages:
- Application Insights API - the core API
- Application Insights API for Web Applications - used to send telemetry from the server
- Application Insights API for JavaScript Applications - used to send telemetry from the client
- The packages include these assemblies:
- Microsoft.ApplicationInsights
- Microsoft.ApplicationInsights.Platform
- Inserts items into:
- Web.config
- packages.config
- (For new projects only. To add Application Insights to an existing project, you do this step manually.) Inserts snippets into the client and server code to initialize them with the Application Insights resource ID. For example, in an MVC app, code is inserted into the master page Views/Shared/_Layout.cshtml
How do I upgrade from older SDK versions?
See the release notes for the SDK appropriate to your type of application.
How can I change which Azure resource my project sends data to?
In Solution Explorer, right-click ApplicationInsights.config and choose Update Application Insights. You can send the data to an existing or new resource in Azure. The update wizard changes the instrumentation key in ApplicationInsights.config, which determines where the server SDK sends your data. Unless you deselect "Update all," it also changes the key where it appears in your web pages.
Do new Azure regions require the use of connection strings?
New Azure regions require the use of connection strings instead of instrumentation keys. A connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource uses as a destination for your telemetry. You'll need to copy the connection string and add it to your application's code or to an environment variable.
Should I use connection strings or instrumentation keys?
Connection Strings are recommended over instrumentation keys.
Can I use `providers('Microsoft.Insights', 'components').apiVersions[0]` in my Azure Resource Manager deployments?
We don't recommend using this method of populating the API version. The newest version can represent preview releases, which may contain breaking changes. Even with newer non-preview releases, the API versions aren't always backwards compatible with existing templates, or in some cases the API version may not be available to all subscriptions.
What telemetry is collected by Application Insights?
From server web apps:
- HTTP requests
- Dependencies. Calls to: SQL Databases; HTTP calls to external services; Azure Cosmos DB, table, blob storage, and queue.
- Exceptions and stack traces.
- Performance counters, if you use the Azure Monitor Application Insights Agent, Azure monitoring for VMs or Virtual Machine Scale Sets, or the Application Insights collectd writer.
- Custom events and metrics that you code.
- Trace logs if you configure the appropriate collector.
From client web pages:
- Page view counts
- AJAX calls: requests made from a running script.
- Page view load data
- Page visit time (configurable)
- User and session counts
- Authenticated user IDs
From other sources, if you configure them.
Can I filter out or modify some telemetry?
Yes, in the server you can write:
- A telemetry processor, to filter or add properties to selected telemetry items before they're sent from your app.
- A telemetry initializer, to add properties to all items of telemetry.
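The kind of filtering a telemetry processor performs can be sketched in plain code. The function below is illustrative only (the item shape and property names are assumptions, not the SDK API): it drops fast, successful dependency calls and stamps a property on everything else, mirroring the processor/initializer roles described above.

```javascript
// Illustrative telemetry-processor-style filtering (not the SDK API):
// drop successful dependency calls faster than 100 ms, tag the rest.
function processTelemetry(items) {
  return items
    .filter(item => !(item.type === "dependency" && item.success && item.durationMs < 100))
    .map(item => ({ ...item, properties: { ...item.properties, env: "production" } }));
}

const sample = [
  { type: "dependency", success: true, durationMs: 20 },   // dropped
  { type: "dependency", success: false, durationMs: 20 },  // kept (failed call)
  { type: "request", success: true, durationMs: 500 },     // kept
];
console.log(processTelemetry(sample).length); // 2 items survive the filter
```

In the real SDKs, the same logic would live inside a registered processor or initializer rather than a free function.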
How are city, country/region, and other geo location data calculated?
We look up the IP address (IPv4 or IPv6) of the web client.
- Browser telemetry: We collect the sender's IP address.
- Server telemetry: The Application Insights module collects the client IP address. It isn't collected if the X-Forwarded-For header is set.
- To learn more about how IP address and geolocation data are collected in Application Insights, refer to this article.

You can configure the ClientIpHeaderTelemetryInitializer to take the IP address from a different header. In some systems, for example, it's moved by a proxy, load balancer, or CDN to X-Originating-IP. Learn more.
You can use Power BI to display your request telemetry on a map if you've migrated to a workspace-based resource.
How long is data retained in the portal? Is it secure?
Take a look at Data Retention and Privacy.
What happens to Application Insights telemetry when a server or device loses connection with Azure?
All of our SDKs, including the web SDK, include reliable ("robust") transport. When the server or device loses connection with Azure, telemetry is stored locally on the file system (server SDKs) or in HTML5 session storage (web SDK). The SDK periodically retries sending this telemetry until the ingestion service considers it stale (48 hours for logs, 30 minutes for metrics). Stale telemetry is dropped. In some cases, such as when local storage is full, the retry won't occur.
Could personal data be sent in the telemetry?
You can send personal data if your code sends such data. It can also happen if variables in stack traces include personal data. Your development team should conduct risk assessments to ensure that personal data is properly handled. Learn more about data retention and privacy.
All octets of the client web address are always set to 0 after the geo location attributes are looked up.
The Application Insights JavaScript SDK doesn't include any personal data in its autocollection by default. However, some personal data used in your application may be picked up by the SDK (for example, full names in window.title or account IDs in XHR URL query parameters). For custom personal data masking, add a telemetry initializer.
My Instrumentation Key is visible in my web page source.
- This visibility is common practice in monitoring solutions.
- It can't be used to steal your data.
- It could be used to skew your data or trigger alerts.
- We haven't heard that any customer has had such problems.
You could:
- Use two separate Instrumentation Keys (separate Application Insights resources), for client and server data. Or
- Write a proxy that runs in your server, and have the web client send data through that proxy.
How do I see POST data in Diagnostic search?
We don't log POST data automatically, but you can use a TrackTrace call: put the data in the message parameter. The message parameter has a longer size limit than string properties, although you can't filter on it.
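A hedged sketch of that approach: build a trace message carrying the POST body, truncated to stay within a message size limit (the 8,192-character cap used here is an assumption; check the current SDK limits), and pass the result to TrackTrace.

```javascript
// Sketch: package POST data as a trace message, truncated to a size cap.
// The 8,192-character limit here is an assumption, not a documented constant.
const MAX_MESSAGE_LENGTH = 8192;

function buildPostTrace(url, postBody) {
  const message = `POST ${url} body: ${postBody}`;
  return { message: message.slice(0, MAX_MESSAGE_LENGTH) };
}

// With the real SDK you would pass this to TrackTrace / trackTrace, for example:
//   appInsights.trackTrace(buildPostTrace("/api/orders", body));
const trace = buildPostTrace("/api/orders", JSON.stringify({ id: 42 }));
console.log(trace.message);
```

Truncating before sending keeps you on the right side of the ingestion limits instead of having items silently clipped.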
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.
How do I dynamically change the instrumentation key?
What are the User and Session counts?
- The JavaScript SDK sets a user cookie on the web client, to identify returning users, and a session cookie to group activities.
- If there's no client-side script, you can set cookies at the server.
- If one real user uses your site in different browsers, uses in-private/incognito browsing, or uses different machines, they'll be counted more than once.
- To identify a logged-in user across machines and browsers, add a call to setAuthenticatedUserContext().
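As a small sketch (the helper name is ours, not the SDK's): the JavaScript SDK rejects authenticated user IDs that contain spaces or the characters , ; = |, so it's worth sanitizing the ID before passing it to setAuthenticatedUserContext().

```javascript
// Sketch: prepare an ID for setAuthenticatedUserContext. The JavaScript SDK
// rejects IDs containing spaces or the characters , ; = | so replace them.
function toAuthenticatedUserId(rawId) {
  return rawId.replace(/[\s,;=|]/g, "_");
}

// Usage with the real SDK, typically called right after the user signs in:
//   appInsights.setAuthenticatedUserContext(toAuthenticatedUserId(user.email));
console.log(toAuthenticatedUserId("jane doe;contoso")); // "jane_doe_contoso"
```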
How does Application Insights generate device information (browser, OS, language, model)?
The browser passes the User-Agent string in the HTTP request header, and the Application Insights ingestion service uses UA Parser to generate the fields you see in the data tables and experiences. As a result, Application Insights users can't change these fields.
Occasionally this data may be missing or inaccurate if the user or enterprise disables sending the User-Agent header in browser settings. The UA Parser regexes may not include all device information, or Application Insights may not have adopted the latest updates.
Have I enabled everything in Application Insights?
| What you should see | How to get it | Why you want it |
|---|---|---|
| Availability charts | Web tests | Know your web app is up |
| Server app perf: response times, ... | Add Application Insights to your project or install the Azure Monitor Application Insights Agent on the server (or write your own code to track dependencies) | Detect perf issues |
| Dependency telemetry | Install the Azure Monitor Application Insights Agent on the server | Diagnose issues with databases or other external components |
| Stack traces from exceptions | Insert TrackException calls in your code (but some are reported automatically) | Detect and diagnose exceptions |
| Search log traces | Add a logging adapter | Diagnose exceptions, perf issues |
| Client usage basics: page views, sessions, ... | JavaScript initializer in web pages | Usage analytics |
| Client custom metrics | Tracking calls in web pages | Enhance user experience |
| Server custom metrics | Tracking calls in server | Business intelligence |
Why are the counts in Search and Metrics charts unequal?
Sampling reduces the number of telemetry items (requests, custom events, and so on) that are sent from your app to the portal. In Search, you see the number of items received. In metric charts that display a count of events, you see the number of original events that occurred.
Each item that is transmitted carries an itemCount property that shows how many original events that item represents. To observe sampling in operation, you can run this query in Analytics:
requests | summarize original_events = sum(itemCount), transmitted_events = count()
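The same arithmetic, sketched in plain JavaScript: sum itemCount across transmitted items to recover the original event count, and count the items themselves to get the transmitted count.

```javascript
// Each transmitted item carries itemCount = number of original events it stands for.
function samplingSummary(transmittedItems) {
  const originalEvents = transmittedItems.reduce((sum, item) => sum + item.itemCount, 0);
  const transmittedEvents = transmittedItems.length;
  return { originalEvents, transmittedEvents };
}

// Three transmitted requests representing 4 + 1 + 5 = 10 original requests.
const summary = samplingSummary([{ itemCount: 4 }, { itemCount: 1 }, { itemCount: 5 }]);
console.log(summary); // { originalEvents: 10, transmittedEvents: 3 }
```

This is exactly why Search (which shows transmitted items) and metric charts (which show original events) report different numbers under sampling.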
How do I move an Application Insights resource to a new region?
Moving existing Application Insights resources from one region to another is currently not supported. Historical data that you have collected can't be migrated to a new region. The only partial workaround is to:
- Create a brand new Application Insights resource (classic or workspace-based) in the new region.
- Recreate all unique customizations specific to the original resource in the new resource.
- Modify your application to use the new region resource's instrumentation key or connection string.
- Test to confirm that everything is continuing to work as expected with your new Application Insights resource.
- At this point, you can either keep or delete the original Application Insights resource. Deleting a classic Application Insights resource results in all historical data being lost. If the original resource was workspace-based, its data remains in the Log Analytics workspace. Keeping the original Application Insights resource lets you access its historical data until its retention period expires.
Unique customizations that commonly need to be manually recreated or updated for the resource in the new region include but aren't limited to:
- Recreate custom dashboards and workbooks.
- Recreate or update the scope of any custom log/metric alerts.
- Recreate availability alerts.
- Recreate any custom Azure role-based access control (Azure RBAC) settings that are required for your users to access the new resource.
- Replicate settings involving ingestion sampling, data retention, daily cap, and custom metrics enablement. These settings are controlled via the Usage and estimated costs pane.
- Recreate any integration that relies on API keys, such as release annotations and the live metrics secure control channel. You'll need to generate new API keys and update the associated integrations.
- Continuous export in classic resources would need to be configured again.
- Diagnostic settings in workspace-based resources would need to be configured again.
Note
If the resource you're creating in a new region is replacing a classic resource we recommend exploring the benefits of creating a new workspace-based resource or alternatively migrating your existing resource to workspace-based.
Automation
Configuring Application Insights
You can write PowerShell scripts by using Azure Resource Manager to:
- Create and update Application Insights resources.
- Set the pricing plan.
- Get the instrumentation key.
- Add a metric alert.
- Add an availability test.
You can't set up a Metric Explorer report or set up continuous export.
Querying the telemetry
How can I set an alert on an event?
Azure alerts are only on metrics. Create a custom metric that crosses a value threshold whenever your event occurs, and then set an alert on the metric. You'll get a notification whenever the metric crosses the threshold in either direction; you won't get a notification until the first crossing, regardless of whether the initial value is high or low. There's always a latency of a few minutes.
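The notification behavior described above can be modeled in a few lines (a sketch, not how Azure alerts are implemented): an alert fires on each threshold crossing in either direction, and never before the first crossing, regardless of the starting value.

```javascript
// Sketch of the described alert behavior: report every threshold crossing,
// in either direction, starting from the first one.
function thresholdCrossings(values, threshold) {
  const crossings = [];
  for (let i = 1; i < values.length; i++) {
    const wasAbove = values[i - 1] > threshold;
    const isAbove = values[i] > threshold;
    if (wasAbove !== isAbove) {
      crossings.push({ index: i, direction: isAbove ? "up" : "down" });
    }
  }
  return crossings;
}

// The metric starts above the threshold: no alert fires until the first
// downward crossing, then again when it crosses back up.
console.log(thresholdCrossings([12, 11, 4, 3, 9], 5)); // crossings at index 2 (down) and 4 (up)
```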
Are there data transfer charges between an Azure web app and Application Insights?
- If your Azure web app is hosted in a data center where there's an Application Insights collection endpoint, there's no charge.
- If there's no collection endpoint in your host data center, then your app's telemetry will incur Azure outgoing charges.
This answer depends on the distribution of our endpoints, not on where your Application Insights resource is hosted.
Can I send telemetry to the Application Insights portal?
We recommend that you use our SDKs and the SDK API. There are variants of the SDK for various platforms. These SDKs handle buffering, compression, throttling, retries, and so on. However, the ingestion schema and endpoint protocol are public.
Can I monitor an intranet web server?
Yes, but you'll need to allow traffic to our services by either firewall exceptions or proxy redirects.
- QuickPulse
https://rt.services.visualstudio.com:443
- ApplicationIdProvider
https://dc.services.visualstudio.com:443
- TelemetryChannel
https://dc.services.visualstudio.com:443
Review our full list of services and IP addresses here.
Firewall exception
Allow your web server to send telemetry to our endpoints.
Gateway redirect
Route traffic from your server to a gateway on your intranet by overriding the endpoint addresses in your configuration. If these Endpoint properties aren't present in your config, these classes use the default values shown in the example ApplicationInsights.config below.
Your gateway should route traffic to our endpoint's base address. In your configuration, replace the default values with http://<your.gateway.address>/<relative path>
.
Example ApplicationInsights.config with default endpoints:
<ApplicationInsights>
...
<TelemetryModules>
<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
<QuickPulseServiceEndpoint>https://rt.services.visualstudio.com/QuickPulseService.svc</QuickPulseServiceEndpoint>
</Add>
</TelemetryModules>
...
<TelemetryChannel>
<EndpointAddress>https://dc.services.visualstudio.com/v2/track</EndpointAddress>
</TelemetryChannel>
...
<ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
<ProfileQueryEndpoint>https://dc.services.visualstudio.com/api/profiles/{0}/appId</ProfileQueryEndpoint>
</ApplicationIdProvider>
...
</ApplicationInsights>
Note
ApplicationIdProvider is available starting in v2.6.0.
Proxy passthrough
Proxy passthrough can be achieved by configuring either a machine level or application level proxy. For more information see dotnet's article on DefaultProxy.
Example Web.config:
<system.net>
<defaultProxy>
<proxy proxyaddress="http://xx.xx.xx.xx:yyyy" bypassonlocal="true"/>
</defaultProxy>
</system.net>
Can I run Availability web tests on an intranet server?
Our web tests run on points of presence that are distributed around the globe. There are two solutions:
- Firewall door - Allow requests to your server from the long and changeable list of web test agents.
- Write your own code to send periodic requests to your server from inside your intranet. You could run Visual Studio web tests for this purpose. The tester could send the results to Application Insights using the TrackAvailability() API.
How long does it take for telemetry to be collected?
Most Application Insights data has a latency of under 5 minutes. Some data can take longer; typically larger log files. For more information, see the Application Insights SLA.
HTTP 502 and 503 responses aren't always captured by Application Insights
"502 Bad Gateway" and "503 Service Unavailable" errors aren't always captured by Application Insights. If only client-side JavaScript is being used for monitoring, this is expected behavior: the error response is returned before the page containing the monitoring JavaScript snippet is rendered.
If the 502 or 503 response was sent from a server with server-side monitoring enabled the errors would be collected by the Application Insights SDK.
However, there are still cases where, even with server-side monitoring enabled on an application's web server, a 502 or 503 error won't be captured by Application Insights. Many modern web servers don't allow a client to communicate directly, instead employing solutions like reverse proxies to pass information between the client and the front-end web servers.
In this scenario, a 502 or 503 response could be returned to a client due to an issue at the reverse proxy layer and would not be captured out-of-box by Application Insights. To help detect issues at this layer, you may need to forward logs from your reverse proxy to Log Analytics and create a custom rule to check for 502/503 responses. To learn more about common causes of 502 and 503 errors consult the Azure App Service troubleshooting article for "502 bad gateway" and "503 service unavailable".
OpenTelemetry
What is OpenTelemetry?
A new open-source standard for observability. Learn more at https://opentelemetry.io/.
Why is Microsoft Azure Monitor Application Insights investing in OpenTelemetry?
Microsoft is among the largest contributors to OpenTelemetry.
The key value propositions of OpenTelemetry are that it's vendor-neutral and provides consistent APIs/SDKs across languages.
Over time, we believe OpenTelemetry will enable Azure Monitor's customers to observe applications written in languages beyond our supported languages and expand the number of instrumentation libraries available to customers. In particular, the OpenTelemetry .NET SDK is more performant at scale than its predecessor the Application Insights SDK.
Finally, OpenTelemetry aligns with Microsoft's strategy to embrace open source.
What is the status of OpenTelemetry?
A complete observability solution includes all three pillars of observability. The OpenTelemetry Community released stable Distributed Tracing in February 2021 and released stable Metrics in March 2022. Azure Monitor's OpenTelemetry-based offering includes these two pillars. The OpenTelemetry Community is actively working to stabilize the Logging API/SDK specification, and we plan to add logging to our OpenTelemetry-based offerings in a subsequent milestone.
How can I test out OpenTelemetry?
Check out our enablement docs for .NET, Java, JavaScript (Node.js), and Python, and sign up to join our Azure Monitor Application Insights early adopter community at https://aka.ms/AzMonOtel to get notified of major releases.
What is the current release state of features within each OpenTelemetry offering?
The following chart breaks out OpenTelemetry feature support for each language.
| Feature | .NET | Node.js | Python | Java |
|---|---|---|---|---|
| Distributed Tracing | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Custom Metrics | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Standard Metrics (accuracy currently affected by sampling) | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Fixed-Rate Sampling | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Offline Storage & Automatic Retries | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Exception Reporting | ⚠️ | ⚠️ | ⚠️ | ✅ |
| Logs Collection | ❌ | ❌ | ❌ | ✅ |
| Azure Active Directory (AAD) authentication | ❌ | ❌ | ❌ | ✅ |
| Auto-populate cloud role name/role instance on Azure | ❌ | ❌ | ❌ | ✅ |
| Live Metrics | ❌ | ❌ | ❌ | ✅ |
| Autopopulation of User ID, Authenticated User ID, and User IP | ❌ | ❌ | ❌ | ✅ |
| Manually override/set Operation Name, User ID, or Authenticated User ID | ❌ | ❌ | ❌ | ✅ |
| Adaptive Sampling | ❌ | ❌ | ❌ | ✅ |
| Profiler | ❌ | ❌ | ❌ | ⚠️ |
| Snapshot Debugger | ❌ | ❌ | ❌ | ❌ |
Key
- ✅ : This feature is available to all customers with formal support.
- ⚠️ : This feature is available as a public preview. Supplemental Terms of Use for Microsoft Azure Previews
- ❌ : This feature is not available or not applicable.
Can OpenTelemetry be used for web browsers?
Yes, but it's not recommended or supported by Azure. OpenTelemetry JavaScript is heavily optimized for Node.js. Instead, we recommend using the Application Insights JavaScript SDK.
When can we expect the OpenTelemetry SDK to be available for use in web browsers?
The availability timeline for the OpenTelemetry Web SDK hasn’t been determined yet, but we are likely several years away from a browser SDK that will be a viable alternative to the Application Insights JavaScript SDK.
Can I test OpenTelemetry in a web browser today?
The OpenTelemetry Web Sandbox is a fork designed to make OpenTelemetry work in a browser, but it is not yet possible to send telemetry to Application Insights or 1DS Collector. Additionally, the SDK does not currently have defined general client events.
How do I determine if OpenTelemetry is right for me?
The OpenTelemetry community uses stable or experimental to signal the maturity of a piece of software. Separately, Azure Monitor uses "Public Preview" and "GA" to signal stability and support commitment.
If your application is written in Java, we recommend our OpenTelemetry-based offering, which became generally available in November 2020.
If your application is written in C#, JavaScript (Node.js), or Python, the current Application Insights SDKs offer the most feature-rich experience.
Scenarios that may sway you toward OpenTelemetry sooner rather than later include sending telemetry to Azure Monitor and another vendor simultaneously, collecting and converging existing instrumentation protocols, or using features available in the OpenTelemetry-Collector. For example, customers have reported using the batch processor, tail-based sampler, and attributes processor. While similar features exist in the existing Application Insights SDKs, some customers prefer to host this processing downstream in an agent.
You can see our progress toward OpenTelemetry-Based Azure Monitor exporters for C#, JavaScript, and Python in open-source repositories.
Is running Application Insights alongside competitor agents supported? (for example, AppDynamics, DataDog, NewRelic, etc.)
No. This isn't something we plan to test or support, though our OpenTelemetry-based offerings allow you to export to an OTLP endpoint alongside Azure Monitor simultaneously.
Can I use Preview builds in production environments?
It's not recommended. See Supplemental Terms of Use for Microsoft Azure Previews for more information.
Does Azure Monitor have an "OpenTelemetry Distro"?
OpenTelemetry defines distribution as a "wrapper around an upstream OpenTelemetry repository with some customizations". In this sense, our Java, Python, and JavaScript (Node.js) OpenTelemetry-based offerings are a "distro" because we package several components together for easy enablement. These distros include Azure-specific components (e.g., Azure Monitor Exporter with offline storage, custom sampler) to achieve the best experience in Azure Monitor Application Insights. In some cases, customers may wish to use the "piecemeal approach" to instrument rather than the distro. In these cases, customers can use our exporter directly, and they will take responsibility to pull in all the right packages for their implementation. At this time, .NET only offers the OpenTelemetry exporter for the piecemeal approach, but in the future it will also have a distro to cover common telemetry scenarios.
What's the difference between manual and auto-instrumentation?
Manual instrumentation is coding against the OpenTelemetry API and typically consists of installing a language-specific SDK in an application. “Manual” doesn't mean you’ll be required to write complex code to define spans for distributed traces (though it remains an option). A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries.
Auto-instrumentation is enabling telemetry collection through configuration without touching the application's code. While highly convenient, it tends to be less configurable and it's not available in all languages.
OpenTelemetry auto-instrumentation efforts include a Java offering, which Microsoft supports via a distro called Java 3.X. Python and .NET have experimental auto-instrumentation efforts, which Microsoft does not currently support. All other OpenTelemetry languages are focused on Manual Instrumentation only.
Can I use the OpenTelemetry-Collector?
Some customers have begun to use the OpenTelemetry-Collector as an agent alternative even though Microsoft doesn’t officially support an agent-based approach for application monitoring yet. In the meantime, the open source community has contributed an OpenTelemetry-Collector Azure Monitor Exporter that some customers are using to send data to Azure Monitor Application Insights.
We plan to support an agent-based approach in the future, though the details and timeline aren't available yet. Our objective is to provide a path for any OpenTelemetry supported language to send to Azure Monitor via OpenTelemetry Protocol (OTLP). This will enable customers to observe applications written in languages beyond our supported languages.
What's the difference between OpenCensus and OpenTelemetry?
OpenCensus is the precursor to OpenTelemetry. Microsoft helped bring together OpenTracing and OpenCensus to create OpenTelemetry, a single observability standard for the world. Azure Monitor's current production-recommended Python SDK is based on OpenCensus, but eventually all Azure Monitor's SDKs will be based on OpenTelemetry.
Container insights
What does 'Other Processes' represent under the Node view?
Other processes are intended to help you clearly understand the root cause of high resource usage on your node. This enables you to distinguish usage between containerized and non-containerized processes.
What are these Other Processes?
These are non-containerized processes that run on your node.
How do we calculate this?
Other Processes = Total usage from CAdvisor - Usage from containerized process
The Other processes includes:
- Self-managed or managed Kubernetes non-containerized processes
- Container Run-time processes
- Kubelet
- System processes running on your node
- Other non-Kubernetes workloads running on node hardware or VM
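The formula above is a plain subtraction; for example:

```javascript
// "Other processes" = node-level usage that cAdvisor reports minus the usage
// attributable to containers. Values here are illustrative (CPU cores).
function otherProcessesUsage(totalNodeUsageFromCAdvisor, containerizedUsage) {
  return totalNodeUsageFromCAdvisor - containerizedUsage;
}

// 3.2 cores used on the node, 2.5 of which belong to containers:
console.log(otherProcessesUsage(3.2, 2.5).toFixed(1)); // "0.7" attributed to other processes
```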
I don't see Image and Name property values populated when I query the ContainerLog table.
For agent version ciprod12042019 and later, by default these two properties aren't populated for every log line to minimize cost incurred on log data collected. There are two options to query the table that include these properties with their values:
Option 1
Join other tables to include these property values in the results.
Modify your queries to include the Image and ImageTag properties from the ContainerInventory table by joining on the ContainerID property. You can include the Name property (as it previously appeared in the ContainerLog table) from the KubePodInventory table's ContainerName field by joining on the ContainerID property. This is the recommended option.
The following example is a sample detailed query that explains how to get these field values with joins.
//let's say we're querying an hour's worth of logs
let startTime = ago(1h);
let endTime = now();
//get the latest Image & ImageTag for every ContainerID during the time window
let ContainerInv = ContainerInventory
| where TimeGenerated >= startTime and TimeGenerated < endTime
| summarize arg_max(TimeGenerated, *) by ContainerID, Image, ImageTag
| project-away TimeGenerated
| project ContainerID1=ContainerID, Image1=Image, ImageTag1=ImageTag;
//get the latest Name for every ContainerID during the time window
let KubePodInv = KubePodInventory
| where ContainerID != ""
| where TimeGenerated >= startTime and TimeGenerated < endTime
| summarize arg_max(TimeGenerated, *) by ContainerID2=ContainerID, Name1=ContainerName
| project ContainerID2, Name1;
//join the two to get a combined table with Name, Image, and ImageTag
//leftouter is safer in case KubePod records are missing or latent
let ContainerData = ContainerInv
| join kind=leftouter (KubePodInv) on $left.ContainerID1 == $right.ContainerID2;
//join ContainerLog with the combined table, drop redundant columns, and rename the rewritten ones
//leftouter is safer so you don't lose log lines even when container metadata can't be found
//(due to latency, time skew between data types, and so on)
ContainerLog
| where TimeGenerated >= startTime and TimeGenerated < endTime
| join kind=leftouter (ContainerData) on $left.ContainerID == $right.ContainerID1
| project-away ContainerID1, ContainerID2, Name, Image, ImageTag
| project-rename Name=Name1, Image=Image1, ImageTag=ImageTag1
Option 2
Re-enable collection for these properties for every container log line.
If the first option isn't convenient due to query changes involved, you can re-enable collecting these fields by enabling the setting log_collection_settings.enrich_container_logs
in the agent config map as described in the data collection configuration settings.
Note
The second option isn't recommended with large clusters that have more than 50 nodes because it generates API server calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line collected.
Can I view metrics collected in Grafana?
Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's dashboard repository to get you started, along with reference material to help you learn how to query data from your monitored clusters to visualize in custom Grafana dashboards.
Can I monitor my AKS-engine cluster with Container insights?
Container insights supports monitoring container workloads deployed to AKS-engine (formerly known as ACS-engine) cluster(s) hosted on Azure. For more information and an overview of steps required to enable monitoring for this scenario, see Using Container insights for AKS-engine.
Why don't I see data in my Log Analytics workspace?
If you're unable to see any data in the Log Analytics workspace at a certain time every day, you may have reached the default 500-MB limit or the daily cap specified to control the amount of data collected daily. When the limit is reached, data collection stops and resumes only on the next day. To review your data usage and move to a different pricing tier based on your anticipated usage patterns, see Azure Monitor Logs pricing details.
What are the container states specified in the ContainerInventory table?
The ContainerInventory table contains information about both stopped and running containers. The table is populated by a workflow inside the agent that queries Docker for all containers (running and stopped) and forwards that data to the Log Analytics workspace.
How do I resolve 'Missing Subscription registration' error?
If you receive the error Missing Subscription registration for Microsoft.OperationsManagement, you can resolve it by registering the resource provider Microsoft.OperationsManagement in the subscription where the workspace is defined. The documentation for how to do this can be found here.
Is there support for Kubernetes RBAC enabled AKS clusters?
The Container Monitoring solution doesn't support Kubernetes RBAC, but it's supported with Container insights. The solution details page may not show the right information in the panes that show data for these clusters.
How do I enable log collection for containers in the kube-system namespace through Helm?
The log collection from containers in the kube-system namespace is disabled by default. Log collection can be enabled by setting an environment variable on the Azure Monitor Agent. For more information, see the Container insights GitHub page.
How do I update the Azure Monitor Agent in Container Insights to the latest released version?
To learn how to upgrade the agent, see Agent management.
Why are log lines larger than 16 KB split into multiple records in Log Analytics?
The agent uses the Docker JSON file logging driver to capture the stdout and stderr of containers. This logging driver splits log lines larger than 16 KB into multiple lines when copied from stdout or stderr to a file.
How do I enable multi-line logging?
Currently, Container insights doesn't support multi-line logging, but there are workarounds available. You can configure all the services to write in JSON format, and then Docker/Moby writes them as a single line.
For example, you can wrap your log as a JSON object as shown in the example below for a sample Node.js application:
console.log(JSON.stringify({
  "Hello": "This example has multiple lines:",
  "Docker/Moby": "will not break this into multiple lines",
  "and you'll receive": "all of them in log analytics",
  "as one": "log entry"
}));
This data will look like the following example in Azure Monitor for logs when you query for it:
LogEntry : {"Hello": "This example has multiple lines:","Docker/Moby": "will not break this into multiple lines", "and you'll receive":"all of them in log analytics", "as one": "log entry"}
For a detailed look at the issue, review the following GitHub link.
How do I resolve Azure AD errors when I enable live logs?
You may see the following error: The reply url specified in the request does not match the reply urls configured for the application: '<application ID>'. The solution to solve it can be found in the article How to view container data in real time with Container insights.
Why can't I upgrade my cluster after onboarding?
If you delete the Log Analytics workspace that an AKS cluster with Container insights enabled was sending its data to, attempts to upgrade the cluster will fail. To work around this issue, disable monitoring and then re-enable it, referencing a different valid workspace in your subscription. When you retry the cluster upgrade, it should process and complete successfully.
Which ports and domains do I need to open/allow for the agent?
See the Network firewall requirements for the proxy and firewall configuration information required for the containerized agent with Azure, Azure US Government, and Azure China 21Vianet clouds.
Is there support for collecting Kubernetes audit logs for ARO clusters?
No. Container insights doesn't support collection of Kubernetes audit logs.
Why don't I see Normal event types when I query the KubeEvents table?
By default, Normal event types aren't collected unless the collect_all_kube_events setting is enabled. If you need to collect Normal events, enable the collect_all_kube_events setting in the container-azm-ms-agentconfig configmap. See Configure agent data collection for Container insights for details on configuring the configmap.
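As a sketch, the relevant portion of the container-azm-ms-agentconfig configmap looks like the following. The field names follow the agent's documented schema, but verify them against the current template in the Container insights documentation before applying:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  log-data-collection-settings: |-
    [log_collection_settings]
       [log_collection_settings.collect_kube_events]
          # Set to true to also collect Normal event types into KubeEvents.
          collect_all_kube_events = true
```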
VM insights
Can I onboard to an existing workspace?
If your virtual machines are already connected to a Log Analytics workspace, you may continue to use that workspace when onboarding to VM insights, provided it's in one of the supported regions.
Can I onboard to a new workspace?
If your VMs aren't currently connected to an existing Log Analytics workspace, you need to create a new workspace to store your data. Creating a new default workspace is done automatically if you configure a single Azure VM for VM insights through the Azure portal.
If you choose to use the script-based method, these steps are covered in the Enable VM insights using Azure PowerShell or Resource Manager template article.
What do I do if my VM is already reporting to an existing workspace?
If you're already collecting data from your virtual machines, you may have already configured it to report data to an existing Log Analytics workspace. As long as that workspace is in one of our supported regions, you can enable VM insights to that pre-existing workspace. If the workspace you're already using isn't in one of our supported regions, you won't be able to onboard to VM insights at this time. We are actively working to support more regions.
Why did my VM fail to onboard?
The following steps occur when you onboard an Azure VM from the Azure portal:
- A default Log Analytics workspace is created, if that option was selected.
- The Log Analytics agent is installed on Azure VMs by using a VM extension, if it's determined that the agent is required.
- The VM insights Map Dependency agent is installed on Azure VMs by using an extension, if it's determined that the agent is required.
During the onboarding process, we check the status of each of the above steps to return a notification status to you in the portal. Configuration of the workspace and the agent installation typically takes 5 to 10 minutes. Viewing monitoring data in the portal takes an extra 5 to 10 minutes.
If you've initiated onboarding and see messages indicating the VM needs to be onboarded, allow for up to 30 minutes for the VM to complete the process.
I don't see some or any data in the performance charts for my VM
If you don't see performance data in the disk table or in some of the performance charts, your performance counters may not be configured in the workspace. To resolve the issue, run the following PowerShell script.
How is VM insights Map feature different from Service Map?
The VM insights Map feature is based on Service Map, but has the following differences:
- The Map view can be accessed from the VM pane and from VM insights under Azure Monitor.
- The connections in the Map are now clickable and display a view of the connection metric data in the side panel for the selected connection.
- There's a new API that is used to create the maps to better support more complex maps.
- Monitored VMs are now included in the client group node, and the donut chart shows the proportion of monitored vs unmonitored virtual machines in the group. It can also be used to filter the list of machines when the group is expanded.
- Monitored virtual machines are now included in the server port group nodes, and the donut chart shows the proportion of monitored vs unmonitored machines in the group. It can also be used to filter the list of machines when the group is expanded.
- The map style has been updated to be more consistent with App Map from Application insights.
- The side panels have been updated and don't have the full set of integrations that were supported in Service Map: Update Management, Change Tracking, Security, and Service Desk.
- The option for choosing groups and machines to map has been updated and now supports Subscriptions, Resource Groups, Azure Virtual Machine Scale Sets, and Cloud services.
- You can't create new Service Map machine groups in the VM insights Map feature.
Why do my performance charts show dotted lines?
This can occur for a few reasons. When there's a gap in data collection, the lines are depicted as dotted. If you've modified the data sampling frequency for the enabled performance counters (the default setting is to collect data every 60 seconds), you can see dotted lines in the chart if you choose a narrow time range and your sampling interval is larger than the bucket size used in the chart (for example, samples are collected every 10 minutes while each bucket on the chart is 5 minutes). Choosing a wider time range to view should cause the chart lines to appear as solid lines rather than dots.
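To see why a sparse sampling interval produces gaps, consider samples collected every 10 minutes rendered into 5-minute buckets: roughly half the buckets end up with no data, and the chart connects the remaining points with dotted segments. A minimal sketch of this bucketing (illustrative only, not how the portal actually renders charts):

```javascript
// Sample timestamps (in minutes) collected every 10 minutes over an hour.
const sampleTimes = [0, 10, 20, 30, 40, 50];
const bucketSize = 5;  // chart bucket width in minutes
const chartSpan = 60;  // chart time range in minutes

// Count how many samples fall into each 5-minute bucket.
const buckets = new Array(chartSpan / bucketSize).fill(0);
for (const t of sampleTimes) {
  buckets[Math.floor(t / bucketSize)]++;
}

// Buckets with no samples are the gaps drawn as dotted segments.
const emptyBuckets = buckets.filter((count) => count === 0).length;
console.log(`${emptyBuckets} of ${buckets.length} buckets are empty`); // prints "6 of 12 buckets are empty"
```

Widening the time range enlarges each bucket, so every bucket gets at least one sample and the line becomes solid.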
Are groups supported with VM insights?
Yes, once you install the Dependency agent we collect information from the VMs to display groups based upon subscription, resource group, Virtual Machine Scale Sets, and cloud services. If you've been using Service Map and have created machine groups, these are displayed as well. Computer groups will also appear in the groups filter if you've created them for the workspace you're viewing.
How do I see the details for what is driving the 95th percentile line in the aggregate performance charts?
By default, the list is sorted to show you the VMs that have the highest value for the 95th percentile for the selected metric, except for the Available memory chart, which shows the machines with the lowest value of the 5th percentile. Clicking on the chart will open the Top N List view with the appropriate metric selected.
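As a rough illustration of that sorting rule (this isn't the portal's implementation, and the VM names and values are made up), a nearest-rank 95th-percentile helper in Node.js might look like:

```javascript
// Nearest-rank percentile: sort the values and pick the value at the given rank.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// CPU samples for three hypothetical VMs, sorted descending by P95
// (the Available memory chart would instead sort ascending by P5).
const vms = [
  { name: "vm-a", cpu: [10, 20, 90, 95, 30] },
  { name: "vm-b", cpu: [50, 55, 60, 58, 52] },
  { name: "vm-c", cpu: [5, 8, 6, 7, 9] },
];
const byP95 = [...vms].sort((a, b) => percentile(b.cpu, 95) - percentile(a.cpu, 95));
console.log(byP95[0].name); // prints "vm-a"
```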
How does the Map feature handle duplicate IPs across different vnets and subnets?
If you're duplicating IP ranges either with VMs or Azure Virtual Machine Scale Sets across subnets and vnets, it can cause VM insights Map to display incorrect information. This is a known issue and we are investigating options to improve this experience.
Does Map feature support IPv6?
The Map feature currently supports only IPv4, and we're investigating support for IPv6. We also support IPv4 that's tunneled inside IPv6.
When I load a map for a Resource Group or other large group the map is difficult to view
While we've made improvements to the Map feature to handle large and complex configurations, we realize a map can have many nodes and connections, with groups of nodes working as a cluster. We're committed to continuing to enhance support to increase scalability.
Why does the network chart on the Performance tab look different than the network chart on the Azure VM Overview page?
The overview page for an Azure VM displays charts based on the host's measurement of activity in the guest VM. The network chart on the Azure VM Overview page displays only network traffic that will be billed; it doesn't include inter-virtual-network traffic. The data and charts shown for VM insights are based on data from the guest VM, and the network chart displays all inbound and outbound TCP/IP traffic for that VM, including inter-virtual-network traffic.
How is response time measured for data stored in VMConnection and displayed in the connection panel and workbooks?
Response time is an approximation. Because we don't instrument the code of the application, we don't really know when a request begins and when the response arrives. Instead, we observe data being sent on a connection and then data coming back on that connection. Our agent keeps track of these sends and receives and attempts to pair them: a sequence of sends followed by a sequence of receives is interpreted as a request/response pair. The timing between these operations is the response time. It includes both network latency and server processing time.
This approximation works well for request/response-based protocols, where a single request goes out on the connection and a single response arrives. This is the case for HTTP(S) without pipelining, but it doesn't hold for other protocols.
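The pairing heuristic described above can be sketched as follows. The event shapes and timings here are hypothetical, not the agent's actual data model:

```javascript
// Observed events on one TCP connection, timestamps in milliseconds.
// A run of sends followed by a run of receives is treated as one
// request/response pair; the gap between the first send and the first
// receive approximates response time (network latency + server processing).
const events = [
  { dir: "send", t: 0 },
  { dir: "send", t: 5 },
  { dir: "recv", t: 120 },
  { dir: "recv", t: 125 },
  { dir: "send", t: 300 },
  { dir: "recv", t: 350 },
];

function estimateResponseTimes(events) {
  const responseTimes = [];
  let requestStart = null;
  for (let i = 0; i < events.length; i++) {
    const e = events[i];
    if (e.dir === "send" && requestStart === null) {
      requestStart = e.t; // first send of a new request
    } else if (e.dir === "recv" && requestStart !== null) {
      responseTimes.push(e.t - requestStart); // first receive closes the pair
      requestStart = null;
      // Skip the rest of this receive run.
      while (i + 1 < events.length && events[i + 1].dir === "recv") i++;
    }
  }
  return responseTimes;
}

console.log(estimateResponseTimes(events)); // prints [ 120, 50 ]
```

Note how this breaks down for protocols that interleave sends and receives: the pairing no longer corresponds to real request/response boundaries.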
Are there limitations if I am on the Log Analytics Free pricing plan?
If you've configured Azure Monitor with a Log Analytics workspace that uses the Free pricing tier, the VM insights Map feature supports only five machines connected to the workspace. If you have five VMs connected to a free workspace and you disconnect one of them and then later connect a new VM, the new VM isn't monitored or reflected on the Map page.
Under this condition, you'll be prompted with the Try Now option when you open the VM and select Insights from the left pane, even after the agent has already been installed on the VM. However, you're not prompted with options as would normally occur if this VM weren't onboarded to VM insights.
SQL Insights (preview)
What versions of SQL Server are supported?
We support SQL Server 2012 and all newer versions. See Supported versions for more details.
What SQL resource types are supported?
- Azure SQL Database
- Azure SQL Managed Instance
- SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the SQL virtual machine provider)
- Azure VMs (SQL Server running on virtual machines not registered with the SQL virtual machine provider)
See Supported versions for more details and for details about scenarios with no support or limited support.
What operating systems for the virtual machine running SQL Server are supported?
We support all operating systems specified by the Windows and Linux documentation for SQL Server on Azure Virtual Machines.
What operating systems for the monitoring virtual machine are supported?
Ubuntu 18.04 is currently the only operating system supported for the monitoring virtual machine.
Where will the monitoring data be stored in Log Analytics?
All of the monitoring data is stored in the InsightsMetrics table. The Origin column has the value solutions.azm.ms/telegraf/SqlInsights. The Namespace column has values that start with sqlserver_.
How often is data collected?
The frequency of data collection is customizable. See Data collected by SQL Insights (preview) for details on the default frequencies and see Create SQL monitoring profile for instructions on customizing frequencies.
Next steps
If your question isn't answered here, you can refer to the following forums for more questions and answers.
For general feedback on Azure Monitor, visit the feedback forum.