Diagnostic settings in Azure Monitor
This article provides details on creating and configuring diagnostic settings to send Azure platform metrics and logs to different destinations.
Platform metrics are sent automatically to Azure Monitor Metrics by default, with no configuration required.
Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on:
- Resource logs aren't collected until they're routed to a destination.
- Activity logs exist on their own but can be routed to other locations.
Each Azure resource requires its own diagnostic setting, which defines the following criteria:
- Sources: The type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type.
- Destinations: One or more destinations to send to.
A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), create multiple settings. Each resource can have up to five diagnostic settings.
If you need to delete a resource, first delete its diagnostic settings. Otherwise, if you recreate the resource, the diagnostic settings of the deleted resource can be carried over to the new one, depending on the resource configuration. If they are carried over, resource log collection resumes as defined in the setting, and the applicable metric and log data is sent to the previously configured destination.
Deleting the diagnostic settings of a resource you're going to delete and don't plan to use again also keeps your environment clean.
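Beyond the portal, a diagnostic setting can be managed programmatically by sending a PUT (create/update) or DELETE request to the resource's `diagnosticSettings` ARM endpoint. The following sketch builds that URL and a minimal payload; every subscription, resource, and workspace ID is a placeholder, and the `api-version` shown is one known version, not the only valid one.

```python
# Sketch: ARM REST endpoint and payload for a diagnostic setting.
# Every ID below is a placeholder, not a real resource.

SUB = "00000000-0000-0000-0000-000000000000"
RESOURCE_ID = (
    f"/subscriptions/{SUB}/resourceGroups/example-rg"
    "/providers/Microsoft.KeyVault/vaults/example-vault"
)
WORKSPACE_ID = (
    f"/subscriptions/{SUB}/resourceGroups/example-rg"
    "/providers/Microsoft.OperationalInsights/workspaces/example-ws"
)

def diagnostic_setting_url(resource_id: str, name: str) -> str:
    """ARM REST URL used to create/update (PUT) or delete (DELETE)
    a diagnostic setting on any resource."""
    return (
        f"https://management.azure.com{resource_id}"
        f"/providers/Microsoft.Insights/diagnosticSettings/{name}"
        "?api-version=2021-05-01-preview"
    )

# Minimal payload: route the 'audit' category group and platform
# metrics to a Log Analytics workspace.
payload = {
    "properties": {
        "workspaceId": WORKSPACE_ID,
        "logs": [{"categoryGroup": "audit", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}

url = diagnostic_setting_url(RESOURCE_ID, "send-to-workspace")
print(url)
```

Sending a DELETE request to the same URL removes the setting, which is the cleanup step recommended above before deleting a resource.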
The following video walks you through routing resource platform logs with diagnostic settings. It was recorded earlier, so be aware of the following changes:
- There are now four destinations. You can send platform metrics and logs to certain Azure Monitor partners.
- A new feature called category groups was introduced in November 2021.
Information on these newer features is included in this article.
There are three sources for diagnostic information:
- Platform metrics
- Resource logs
- Activity logs
The AllMetrics setting routes a resource's platform metrics to other destinations. This option might not be present for all resource providers.
With logs, you can select the log categories you want to route individually or choose a category group.
Category groups don't apply to metrics. Not all resources have category groups available.
You can use category groups to dynamically collect resource logs based on predefined groupings instead of selecting individual log categories. Microsoft defines the groupings to help monitor specific use cases across all Azure services.
Over time, the categories in the group might be updated as new logs are rolled out or as assessments change. When log categories are added or removed from a category group, your log collection is modified automatically without you having to update your diagnostic settings.
When you use category groups, you:
- Can no longer select resource logs individually by category type.
- Can no longer apply retention settings to logs sent to Azure Storage.
Currently, there are two category groups:
- All: Every resource log offered by the resource.
- Audit: All resource logs that record customer interactions with data or the settings of the service. Note that Audit logs are an attempt by each resource provider to provide the most relevant audit data, but may not be considered sufficient from an auditing standards perspective.
Note: Selecting the Audit category group for Azure SQL Database doesn't enable database auditing. To enable database auditing, you have to enable it from the auditing blade for Azure SQL Database.
For the activity log as a source, see the Activity log settings section.
Platform logs and metrics can be sent to the destinations listed in the following table.
To ensure the security of data in transit, we strongly encourage you to configure Transport Layer Security (TLS). All destination endpoints support TLS 1.2.
|Destination|Description|
|:---|:---|
|Log Analytics workspace|Metrics are converted to log form. This option might not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data.|
|Azure Storage account|Archiving logs and metrics to a Storage account is useful for audit, static analysis, or backup. Compared to using Azure Monitor Logs or a Log Analytics workspace, Storage is less expensive, and logs can be kept there indefinitely.|
|Azure Event Hubs|When you send logs and metrics to Event Hubs, you can stream data to external systems such as third-party SIEMs and other Log Analytics solutions.|
|Azure Monitor partner integrations|Specialized integrations can be made between Azure Monitor and other non-Microsoft monitoring platforms. Integration is useful when you're already using one of the partners.|
Activity log settings
The activity log uses a diagnostic setting but has its own user interface because it applies to the whole subscription rather than individual resources. The destination information listed here still applies. For more information, see Azure activity log.
Requirements and limitations
This section discusses requirements and limitations.
Time before telemetry gets to destination
After you set up a diagnostic setting, data should start flowing to your selected destinations within 90 minutes. If no data arrives within 24 hours, then either:
- no logs are being generated or
- something is wrong in the underlying routing mechanism. Try disabling the configuration and then reenabling it. Contact Azure support through the Azure portal if you continue to have issues.
Metrics as a source
There are certain limitations with exporting metrics:
- Sending multi-dimensional metrics via diagnostic settings isn't currently supported: Metrics with dimensions are exported as flattened single-dimensional metrics, aggregated across dimension values. For example, the IOReadBytes metric on a blockchain can be explored and charted on a per-node level. However, when exported via diagnostic settings, the metric exported shows all read bytes for all nodes.
- Not all metrics are exportable with diagnostic settings: Because of internal limitations, not all metrics are exportable to Azure Monitor Logs or Log Analytics. For more information, see the Exportable column in the list of supported metrics.
To get around these limitations for specific metrics, you can manually extract them by using the Metrics REST API. Then you can import them into Azure Monitor Logs by using the Azure Monitor Data Collector API.
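As a sketch of the import step: the Data Collector API authenticates each POST with a shared-key HMAC-SHA256 signature over a documented string-to-sign. The workspace ID and key below are fake placeholder values, shown only to illustrate how the Authorization header is assembled.

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    content_length: int, date_rfc1123: str) -> str:
    """Build the SharedKey Authorization header for the Azure Monitor
    Data Collector API. The shared key is the base64-encoded workspace
    key; the string-to-sign format is fixed by the API."""
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"

# Fake workspace ID and key, for illustration only.
header = build_signature(
    "example-workspace-id",
    base64.b64encode(b"fake-shared-key").decode(),
    42,
    "Mon, 01 Jan 2024 00:00:00 GMT",
)
print(header)
```

The signed request is then posted to the workspace's `ods.opinsights.azure.com/api/logs` endpoint with the same `x-ms-date` value in the headers.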
Any destinations for the diagnostic setting must be created before you create the diagnostic settings. The destination doesn't have to be in the same subscription as the resource sending logs if the user who configures the setting has appropriate Azure role-based access control access to both subscriptions. By using Azure Lighthouse, it's also possible to have diagnostic settings sent to a workspace, storage account, or event hub in another Azure Active Directory tenant.
The following table provides unique requirements for each destination including any regional restrictions.
|Destination|Requirements and limitations|
|:---|:---|
|Log Analytics workspace|The workspace doesn't need to be in the same region as the resource being monitored.|
|Storage account|Don't use an existing storage account that has other, non-monitoring data stored in it, so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location. To send the data to immutable storage, set the immutable policy for the storage account as described in Set and manage immutability policies for Azure Blob Storage. You must follow all steps in this linked article, including enabling protected append blob writes. The storage account needs to be in the same region as the resource being monitored if the resource is regional. Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable the Allow trusted Microsoft services to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account. Azure DNS zone endpoints (preview) and Azure Premium LRS (locally redundant storage) storage accounts aren't supported as a log or metric destination.|
|Event Hubs|The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule. The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable the Allow trusted Microsoft services to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.|
|Partner integrations|The solutions vary by partner. Check the Azure Monitor partner integrations documentation for details.|
There is a cost for collecting data in a Log Analytics workspace, so collect only the categories you require for each service. The data volume for resource logs varies significantly between services.
You might also not want to collect platform metrics from Azure resources because this data is already collected in Azure Monitor Metrics. Configure your diagnostic setting to collect metrics only if you need metric data in the workspace for more complex analysis with log queries.
Diagnostic settings don't allow granular filtering of resource logs. You might require certain logs in a particular category but not others. Or you may want to remove unneeded columns from the data. In these cases, use transformations on the workspace to filter logs that you don't require.
You can also use transformations to lower the storage requirements for records you want by removing columns without useful information. For example, you might have error events in a resource log that you want for alerting. But you might not require certain columns in those records that contain a large amount of data. You can create a transformation for the table that removes those columns.
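As an illustration, a workspace transformation is expressed as a KQL statement over a virtual `source` table: filter the rows you want, then drop the columns you don't. The helper below composes such a statement; the filter condition and column name are hypothetical examples, not a real table schema.

```python
def build_transform(keep_filter: str, drop_columns: list[str]) -> str:
    """Compose a KQL transformation statement: keep only rows matching
    keep_filter, then remove the listed columns with project-away."""
    kql = f"source | where {keep_filter}"
    if drop_columns:
        kql += " | project-away " + ", ".join(drop_columns)
    return kql

# Hypothetical example: keep error events, drop a bulky payload column.
transform_kql = build_transform("Level == 'Error'", ["properties_s"])
print(transform_kql)
# source | where Level == 'Error' | project-away properties_s
```

The resulting string would go in the transformation definition applied to the workspace table that receives the resource log.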
For strategies to reduce your Azure Monitor costs, see Cost optimization and Azure Monitor.
Create diagnostic settings
You can create and edit diagnostic settings by using multiple methods.
You can configure diagnostic settings in the Azure portal either from the Azure Monitor menu or from the menu for the resource.
Where you configure diagnostic settings in the Azure portal depends on the resource:
For a single resource, select Diagnostic settings under Monitoring on the resource's menu.
For one or more resources, select Diagnostic settings under Settings on the Azure Monitor menu and then select the resource.
For the activity log, select Activity log on the Azure Monitor menu and then select Diagnostic settings. Make sure you disable any legacy configuration for the activity log. For instructions, see Disable existing settings.
If no settings exist on the resource you've selected, you're prompted to create a setting. Select Add diagnostic setting.
If there are existing settings on the resource, you see a list of settings already configured. Select Add diagnostic setting to add a new setting. Or select Edit setting to edit an existing one. Each setting can have no more than one of each of the destination types.
Give your setting a name if it doesn't already have one.
Logs and metrics to route: For logs, either choose a category group or select the individual checkboxes for each category of data you want to send to the destinations specified later. The list of categories varies for each Azure service. Select AllMetrics if you want to store metrics in Azure Monitor Logs too.
Destination details: Select the checkbox for each destination. Options appear so that you can add more information.
Log Analytics: Enter the subscription and workspace. If you don't have a workspace, you must create one before you proceed.
Event Hubs: Specify the following criteria:
- Subscription: The subscription that the event hub is part of.
- Event hub namespace: If you don't have one, you must create one.
- Event hub name (optional): The name to send all data to. If you don't specify a name, an event hub is created for each log category. If you're sending to multiple categories, you might want to specify a name to limit the number of event hubs created. For more information, see Azure Event Hubs quotas and limits.
- Event hub policy name (also optional): A policy defines the permissions that the streaming mechanism has. For more information, see Event Hubs features.
Storage: Select the Subscription, Storage account, and Retention policy.
Consider setting the retention policy to 0 and either use Azure Storage Lifecycle Policy or delete your data from storage by using a scheduled job. These strategies are likely to provide more consistent behavior.
First, if you're using storage for archiving, you generally want your data to be retained for more than 365 days.
Second, if you choose a retention policy that's greater than 0, the expiration date is attached to the logs at the time of storage. You can't change the date for those logs after they're stored.
For example, if you set the retention policy for WorkflowRuntime to 180 days and then 24 hours later you set it to 365 days, the logs stored during those first 24 hours will be automatically deleted after 180 days. All subsequent logs of that type will be automatically deleted after 365 days. Changing the retention policy later doesn't retain the first 24 hours of logs for 365 days.
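The fixed-at-storage-time behavior described above amounts to simple date arithmetic: each log's deletion date is computed from the retention policy in effect when the log was written, and later policy changes don't revise it.

```python
from datetime import date, timedelta

def expiry(stored_on: date, retention_days: int) -> date:
    """Deletion date stamped on a log at storage time."""
    return stored_on + timedelta(days=retention_days)

# Logs written while the policy was 180 days keep that expiry...
first_day = expiry(date(2024, 1, 1), 180)
# ...even after the policy changes to 365 days the next day.
later = expiry(date(2024, 1, 2), 365)
print(first_day, later)  # 2024-06-29 2025-01-01
```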
Partner integration: You must first install partner integration into your subscription. Configuration options vary by partner. For more information, see Azure Monitor partner integrations.
After a few moments, the new setting appears in your list of settings for this resource. Logs are streamed to the specified destinations as new event data is generated. It might take up to 15 minutes between when an event is emitted and when it appears in a Log Analytics workspace.
Troubleshooting
Here are some troubleshooting tips.
Metric category isn't supported
When you deploy a diagnostic setting, you receive an error message similar to "Metric category 'xxxx' is not supported." You might receive this error even though your previous deployment succeeded.
The problem occurs when you use a Resource Manager template, REST API, the CLI, or Azure PowerShell. Diagnostic settings created via the Azure portal aren't affected because only the supported category names are presented.
The problem is caused by a recent change in the underlying API. Metric categories other than AllMetrics aren't supported and never were, except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting, and the Azure Monitor back end redirected them to AllMetrics. As of February 2021, the back end was updated to validate that the metric category provided is accurate. This change has caused some deployments to fail.
If you receive this error, update your deployments to replace any metric category names with AllMetrics to fix the issue. If the deployment was previously adding multiple categories, only keep one with the AllMetrics reference. If you continue to have the problem, contact Azure support through the Azure portal.
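If you generate templates programmatically, a small cleanup step like the following sketch can collapse unsupported metric category names into the single AllMetrics entry. The category names in the sample input are illustrative, not drawn from a specific service.

```python
# Sketch: normalize the metrics block of a diagnostic-settings resource
# so only the supported AllMetrics category remains.

def normalize_metrics(setting: dict) -> dict:
    """Replace all metric category entries with one AllMetrics entry,
    enabled if any of the original entries was enabled."""
    metrics = setting["properties"].get("metrics", [])
    enabled = any(m.get("enabled") for m in metrics)
    setting["properties"]["metrics"] = [
        {"category": "AllMetrics", "enabled": enabled}
    ]
    return setting

broken = {
    "properties": {
        "metrics": [
            # Unsupported category names cause the
            # "Metric category 'xxxx' is not supported" error.
            {"category": "Transaction", "enabled": True},
            {"category": "Capacity", "enabled": True},
        ]
    }
}
fixed = normalize_metrics(broken)
print(fixed["properties"]["metrics"])
```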
Setting disappears because of non-ASCII characters in resourceID
Diagnostic settings don't support resource IDs with non-ASCII characters. For example, consider the term Preproducción. Because you can't rename resources in Azure, your only option is to create a new resource without the non-ASCII characters. If the characters are in a resource group, you can move the resources under it to a new one. Otherwise, you'll need to re-create the resource.
Possibility of duplicated or dropped data
Every effort is made to ensure that all log data is sent correctly to your destinations. However, it's not possible to guarantee 100% transfer of log data between endpoints. Retries and other mechanisms are in place to work around these issues and attempt to ensure that log data arrives at the endpoint.