Cost optimization in Azure Monitor
Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your Azure Monitor costs by understanding your different configuration options and your opportunities to reduce the amount of data that it collects. Before you use this article, see Azure Monitor cost and usage to understand the different ways that Azure Monitor charges and how to view your monthly bill.
This article describes cost optimization for Azure Monitor as part of the Azure Well-Architected Framework, a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
- Reliability
- Security
- Cost Optimization
- Operational Excellence
- Performance Efficiency
Azure Monitor Logs
- Determine whether to combine your operational data and your security data in the same Log Analytics workspace.
- Configure pricing tier for the amount of data that each Log Analytics workspace typically collects.
- Configure data retention and archiving.
- Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
- Limit data collection from data sources for the workspace.
- Regularly analyze collected data to identify trends and anomalies.
- Create an alert when data collection is high.
- Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
| Recommendation | Description |
|:---|:---|
| Determine whether to combine your operational data and your security data in the same Log Analytics workspace. | Since all data in a Log Analytics workspace is subject to Microsoft Sentinel pricing if Sentinel is enabled, there might be cost implications to combining this data. See Design a Log Analytics workspace strategy for details on making this decision for your environment, balancing it with criteria in other pillars. |
| Configure the pricing tier for the amount of data that each Log Analytics workspace typically collects. | By default, Log Analytics workspaces use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a commitment tier, which allows you to commit to a daily minimum of data collected in exchange for a lower rate. If you collect enough data across workspaces in a single region, you can link them to a dedicated cluster and combine their collected volume using cluster pricing. See Azure Monitor Logs cost calculations and options for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See Usage and estimated costs to view estimated costs for your usage at different pricing tiers. |
| Configure data retention and archiving. | There is a charge for retaining data in a Log Analytics workspace beyond the default of 31 days (90 days if Sentinel is enabled on the workspace and 90 days for Application Insights data). Consider your particular requirements for having data readily available for log queries. You can significantly reduce your cost by configuring Archived Logs, which allows you to retain data for up to seven years and still access it occasionally by using search jobs or restoring a set of data to the workspace. |
| Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. | Tables in a Log Analytics workspace configured for Basic Logs have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently and don't use them for alerting, this query cost can be more than offset by the reduced ingestion cost. |
| Limit data collection from data sources for the workspace. | The primary factor in the cost of Azure Monitor is the amount of data that you collect in your Log Analytics workspace, so you should collect no more data than you require to assess the health and performance of your services and applications. See Design a Log Analytics workspace architecture for details on making this decision for your environment, balancing it with criteria in other pillars.<br><br>Tradeoff: There might be a tradeoff between cost and your monitoring requirements. For example, you might be able to detect a performance issue more quickly with a high sample rate, but you might want a lower sample rate to save costs. Most environments have multiple data sources with different types of collection, so you need to balance your particular requirements with your cost targets for each. See Cost optimization in Azure Monitor for recommendations on configuring collection for different data sources. |
| Regularly analyze collected data to identify trends and anomalies. | Use Log Analytics workspace insights to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it identifies anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection by using methods in Analyze usage in Log Analytics workspace to determine whether there's additional configuration that can decrease your usage further. This is particularly important when you add a new set of data sources, such as a new set of virtual machines, or onboard a new service. |
| Create an alert when data collection is high. | To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. |
| Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. | A daily cap disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. It shouldn't be used as a method to reduce costs, as described in When to use a daily cap. If you do set a daily cap, in addition to creating an alert when the cap is reached, ensure that you also create an alert rule to be notified when some percentage of the cap has been reached (90%, for example). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection. |
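To make the commitment-tier decision above concrete, the following Python sketch compares pay-as-you-go cost against a 100 GB/day commitment tier. The rates are placeholders for illustration only, not actual Azure prices; check the Azure Monitor pricing page for your region before making any decision.

```python
# Placeholder rates for illustration only -- real prices vary by region and over time.
PAYG_PER_GB = 2.30         # hypothetical pay-as-you-go $/GB
TIER_100_PER_DAY = 196.00  # hypothetical flat $/day for a 100 GB/day commitment tier

def daily_cost(gb_per_day: float) -> dict:
    """Return pay-as-you-go vs. commitment-tier daily cost for a given ingestion volume."""
    payg = gb_per_day * PAYG_PER_GB
    # In a commitment tier you pay the flat rate up to the commitment,
    # plus the effective tier rate for any overage beyond it.
    overage = max(0.0, gb_per_day - 100.0) * (TIER_100_PER_DAY / 100.0)
    tier = TIER_100_PER_DAY + overage
    return {"pay_as_you_go": round(payg, 2), "commitment_tier": round(tier, 2)}

# The tier pays off once typical daily volume passes the break-even point:
break_even_gb = TIER_100_PER_DAY / PAYG_PER_GB  # ~85 GB/day at these placeholder rates
```

At these placeholder rates the commitment tier becomes cheaper above roughly 85 GB/day, which is why the decision hinges on your typical, not peak, daily volume.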
Azure resources
- Collect only critical resource log data from Azure resources.
| Recommendation | Description |
|:---|:---|
| Collect only critical resource log data from Azure resources. | When you create diagnostic settings to send resource logs for your Azure resources to a Log Analytics workspace, specify only those categories that you require. Since diagnostic settings don't allow granular filtering of resource logs, you can use a workspace transformation to further filter unneeded data for those resources that use a supported table. See Diagnostic settings in Azure Monitor for details on how to configure diagnostic settings and using transformations to filter their data. |
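Workspace transformations are written in KQL inside a data collection rule. As a hedged sketch only (the `Level` and `RawPayload` column names are hypothetical examples, not taken from this article or from a specific table schema), a transformation that filters unneeded rows and columns might look like:

```kusto
// Example transformation: keep only error and warning records
// and drop a large free-text column that isn't needed downstream.
source
| where Level in ("Error", "Warning")
| project-away RawPayload
```

Because the transformation runs at ingestion time, rows and columns it removes are never billed as ingested data.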
Alerts
- Activity log alerts, service health alerts, and resource health alerts are free of charge.
- When using log search alerts, minimize log search alert frequency.
- When using metric alerts, minimize the number of resources being monitored.
| Recommendation | Description |
|:---|:---|
| Keep in mind that activity log alerts, service health alerts, and resource health alerts are free of charge. | Azure Monitor activity log alerts, service health alerts, and resource health alerts are free. If what you want to monitor can be achieved with these alert types, use them. |
| When using log search alerts, minimize log search alert frequency. | When configuring log search alerts, keep in mind that the more frequent the rule evaluation, the higher the cost. Configure your rules accordingly. |
| When using metric alerts, minimize the number of resources being monitored. | Some resource types support metric alert rules that can monitor multiple resources of the same type. For these resource types, keep in mind that the rule can become expensive if it monitors many resources. To reduce costs, you can either reduce the scope of the metric alert rule or use log search alert rules, which are less expensive for monitoring a large number of resources. |
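The effect of evaluation frequency is easy to quantify. This small Python sketch (the 30-day month is illustrative) counts how often a log search alert rule runs per month at different frequencies, since log search alert pricing rises with evaluation frequency:

```python
def evaluations_per_month(frequency_minutes: int, days: int = 30) -> int:
    """Number of times a log search alert rule is evaluated per month."""
    return (days * 24 * 60) // frequency_minutes

# A rule evaluated every minute runs 15x more often than one evaluated
# every 15 minutes, so choose the longest frequency your scenario tolerates.
```

For example, a 1-minute rule evaluates 43,200 times in a 30-day month versus 2,880 times for a 15-minute rule.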
Virtual machines
- Migrate from Log Analytics agent to Azure Monitor agent for granular data filtering.
- Filter data that you don't require from agents.
- Determine whether you'll use VM insights and what data to collect.
- Reduce polling frequency of performance counters.
- Ensure that VMs aren't sending duplicate data.
- Use Log Analytics workspace insights to analyze billable costs and identify cost saving opportunities.
- Migrate your SCOM environment to Azure Monitor SCOM Managed Instance.
| Recommendation | Description |
|:---|:---|
| Migrate from Log Analytics agent to Azure Monitor agent for granular data filtering. | If you still have VMs with the Log Analytics agent, migrate them to Azure Monitor agent so you can take advantage of better data filtering and use unique configurations with different sets of VMs. Configuration for data collection by the Log Analytics agent is done on the workspace, so all agents receive the same configuration. Data collection rules used by Azure Monitor agent can be tuned to the specific monitoring requirements of different sets of VMs. The Azure Monitor agent also allows you to use transformations to filter data being collected. |
| Filter data that you don't require from agents. | Reduce your data ingestion costs by filtering data that you don't use for alerting or analysis. See Monitor virtual machines with Azure Monitor: Collect data for guidance on data to collect for different monitoring scenarios, and Control costs for specific guidance on filtering data to reduce your costs. |
| Determine what data to collect with VM insights. | VM insights is a great feature to quickly get started with monitoring your VMs, and it provides powerful features such as the Map and performance trend views. If you don't use the Map feature or the data that it collects, disable collection of processes and dependency data in your VM insights configuration to save on data ingestion costs. |
| Reduce polling frequency of performance counters. | If you're using a data collection rule to send performance data to your Log Analytics workspace, you can reduce the polling frequency of performance counters to reduce the amount of data collected. |
| Ensure that VMs aren't sending duplicate data. | If you multihome agents or create similar data collection rules, make sure you're sending unique data to each workspace. See Analyze usage in Log Analytics workspace for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, avoid running the Log Analytics agent and the Azure Monitor agent together on the same machine unless you can ensure that each is collecting unique data. |
| Use Log Analytics workspace insights to analyze billable costs and identify cost saving opportunities. | Log Analytics workspace insights shows you the billable data collected in each table and from each VM. Use this information to identify your top machines and tables, since they represent your best opportunity to reduce costs by filtering data. Use this insight and the log queries in Analyze usage in Log Analytics workspace to further analyze the effects of configuration changes. |
| Migrate your SCOM environment to Azure Monitor SCOM Managed Instance. | Migrate your existing SCOM environment to Azure Monitor SCOM Managed Instance to support any management packs that can't be replaced by Azure Monitor. SCOM Managed Instance removes the requirement to maintain local management servers and database servers, reducing your overall cost to maintain your SCOM infrastructure. |
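In a data collection rule, the counter polling interval is controlled by the `samplingFrequencyInSeconds` property. The following fragment is a sketch, not a complete data collection rule, and the counter set shown is only an example: it collects two common counters every 5 minutes instead of the more frequent default.

```json
{
  "dataSources": {
    "performanceCounters": [
      {
        "name": "perfCounters-lowFrequency",
        "streams": [ "Microsoft-Perf" ],
        "samplingFrequencyInSeconds": 300,
        "counterSpecifiers": [
          "\\Processor(_Total)\\% Processor Time",
          "\\Memory\\Available Bytes"
        ]
      }
    ]
  }
}
```

Raising the sampling interval from 60 to 300 seconds cuts the volume of that counter data by roughly a factor of five.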
Containers
- Don't enable Container insights collection of Prometheus metrics.
- Configure agent collection to modify data collection in Container insights.
- Modify settings for collection of metric data by Container insights.
- Disable Container insights collection of metric data if you don't use the Container insights experience in the Azure portal.
- If you don't query the container logs table regularly or use it for alerts, configure it as Basic Logs.
- Limit collection of resource logs you don't need.
- Use resource-specific logging for AKS resource logs and configure tables as Basic Logs.
- Use OpenCost to collect details about your Kubernetes costs.
| Recommendation | Description |
|:---|:---|
| Don't enable Container insights collection of Prometheus metrics in the Log Analytics workspace if you've enabled scraping of metrics with Prometheus. | In addition to scraping Prometheus metrics from your cluster by using Azure Monitor managed service for Prometheus, you can configure Container insights to collect Prometheus metrics in your Log Analytics workspace. This data is redundant with the data in Managed Prometheus and results in additional cost. |
| Configure agent collection to modify data collection in Container insights. | Analyze the data collected by Container insights as described in Controlling ingestion to reduce cost, and adjust your configuration to stop collection of data you don't need. |
| Modify settings for collection of metric data by Container insights. | See Enable cost optimization settings for details on modifying both the frequency at which metric data is collected and the namespaces that are collected by Container insights. |
| Disable Container insights collection of metric data if you don't use the Container insights experience in the Azure portal. | Container insights collects many of the same metric values as Managed Prometheus. You can disable collection of these metrics by configuring Container insights to collect only logs and events, as described in Enable cost optimization settings in Container insights. This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights. |
| If you don't query the container logs table regularly or use it for alerts, configure it as Basic Logs. | Convert your Container insights schema to ContainerLogV2, which is compatible with Basic Logs and can provide significant cost savings, as described in Controlling ingestion to reduce cost. |
| Limit collection of resource logs you don't need. | Control plane logs for AKS clusters are implemented as resource logs in Azure Monitor. Create a diagnostic setting to send this data to a Log Analytics workspace. See Collect control plane logs for AKS clusters for recommendations on which categories you should collect. |
| Use resource-specific logging for AKS resource logs and configure tables as Basic Logs. | AKS supports either Azure diagnostics mode or resource-specific mode for resource logs. Use resource-specific mode to enable the option to configure tables for Basic Logs, which provide a reduced ingestion charge for logs that you only occasionally query and don't use for alerting. |
| Use OpenCost to collect details about your Kubernetes costs. | OpenCost is an open-source, vendor-neutral CNCF sandbox project for understanding your Kubernetes costs and supporting AKS cost visibility. It exports detailed costing data, in addition to customer-specific Azure pricing, to Azure Storage to assist the cluster administrator in analyzing and categorizing costs. |
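Container insights agent data collection is tuned through the `container-azm-ms-agentconfig` ConfigMap in the `kube-system` namespace. The following is a trimmed sketch of that ConfigMap (the excluded namespaces are examples); it stops stdout/stderr log collection from chatty system namespaces, which is often a large share of container log volume:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        exclude_namespaces = ["kube-system", "gatekeeper-system"]
      [log_collection_settings.stderr]
        enabled = true
        exclude_namespaces = ["kube-system", "gatekeeper-system"]
```

Apply the ConfigMap with `kubectl apply`, and the agent pods pick up the new settings and stop ingesting logs from the excluded namespaces.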
Application Insights
- Change to Workspace-based Application Insights.
- Use sampling to tune the amount of data collected.
- Limit the number of Ajax calls.
- Disable unneeded modules.
- Pre-aggregate metrics from any calls to TrackMetric.
- Limit the use of custom metrics.
- Ensure use of updated SDKs.
| Recommendation | Description |
|:---|:---|
| Change to Workspace-based Application Insights. | Ensure that your Application Insights resources are workspace-based so that they can take advantage of new cost-saving tools such as Basic Logs, commitment tiers, retention by data type, and data archiving. |
| Use sampling to tune the amount of data collected. | Sampling is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. |
| Limit the number of Ajax calls. | Limit the number of Ajax calls that can be reported in every page view, or disable Ajax reporting. Keep in mind that disabling Ajax calls also disables JavaScript correlation. |
| Disable unneeded modules. | Edit ApplicationInsights.config to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
| Pre-aggregate metrics from any calls to TrackMetric. | If your application calls TrackMetric, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a pre-aggregating package. |
| Limit the use of custom metrics. | The Application Insights option to Enable alerting on custom metric dimensions can increase costs. Using this option can result in the creation of more pre-aggregation metrics. |
| Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK collect many counters by default, which are sent as custom metrics. Use later versions to specify only the required counters. |
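To see why sampling reduces volume with minimal distortion of metrics, here's a minimal conceptual sketch in Python (not the Application Insights SDK implementation): each retained item carries a weight of 1/rate, so aggregates computed from the sample approximate the true totals.

```python
import random

def sample(items, rate: float, seed: int = 0):
    """Keep each telemetry item with probability `rate`; weight survivors by 1/rate."""
    rng = random.Random(seed)
    return [(item, 1.0 / rate) for item in items if rng.random() < rate]

def estimated_count(sampled) -> float:
    """Estimate the original item count from the weighted sample."""
    return sum(weight for _, weight in sampled)

# With a 25% sampling rate, ~75% less telemetry is ingested, but the
# weighted count still closely approximates the original volume.
```

The actual SDKs use a deterministic hash of the operation ID rather than a random draw, so that related telemetry items are sampled together and distributed traces stay intact.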
Frequently asked questions
This section provides answers to common questions.
Is Application Insights free?
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each month free of charge. The free allowance is large enough to cover developing and publishing an app for a few users. You can set a cap to prevent more than a specified amount of data from being processed.
Larger volumes of telemetry are charged by the gigabyte. We provide some tips on how to limit your charges.
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It's suitable if you want to use Continuous Export on a large scale.
Read the pricing plan.
How much does Application Insights cost?
Open the Usage and estimated costs page in an Application Insights resource. There's a chart of recent usage. You can set a data volume cap, if you want.
To see your bills across all resources:
- Open the Azure portal.
- Search for Cost Management and use the Cost analysis pane to see forecasted costs.
- Search for Cost Management and Billing and open the Billing scopes pane to see current charges across subscriptions.
Are there data transfer charges between an Azure web app and Application Insights?
- If your Azure web app is hosted in a datacenter where there's an Application Insights collection endpoint, there's no charge.
- If there's no collection endpoint in your host datacenter, your app's telemetry incurs Azure outgoing charges.
The answer depends on the distribution of our endpoints, not on where your Application Insights resource is hosted.
Will I incur network costs if my Application Insights resource is monitoring an Azure resource (i.e., telemetry producer) in a different region?
Yes, you may incur additional network costs, which vary depending on the region the telemetry is coming from and where it's going. Refer to Azure bandwidth pricing for details.