Cost optimization in Azure Monitor

Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see Azure Monitor cost and usage to understand the different ways that Azure Monitor charges and how to view your monthly bill.

This article describes Cost optimization for Azure Monitor as part of the Azure Well-Architected Framework. The Azure Well-Architected Framework is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:

  • Reliability
  • Security
  • Cost Optimization
  • Operational Excellence
  • Performance Efficiency

Azure Monitor Logs

Design checklist

  • Determine whether to combine your operational data and your security data in the same Log Analytics workspace.
  • Configure pricing tier for the amount of data that each Log Analytics workspace typically collects.
  • Configure interactive and long-term data retention.
  • Configure tables used for debugging, troubleshooting, and auditing as Basic Logs.
  • Limit data collection from data sources for the workspace.
  • Regularly analyze collected data to identify trends and anomalies.
  • Create an alert when data collection is high.
  • Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget.
  • Set up alerts on Azure Advisor cost recommendations for Log Analytics workspaces.

Configuration recommendations

Recommendation Benefit
Determine whether to combine your operational data and your security data in the same Log Analytics workspace. Since all data in a Log Analytics workspace is subject to Microsoft Sentinel pricing if Sentinel is enabled, there might be cost implications to combining this data. See Design a Log Analytics workspace strategy for details on making this decision for your environment, balancing it with criteria in other pillars.
Configure pricing tier for the amount of data that each Log Analytics workspace typically collects. By default, Log Analytics workspaces will use pay-as-you-go pricing with no minimum data volume. If you collect enough data, you can significantly decrease your cost by using a commitment tier, which allows you to commit to a daily minimum of data collected in exchange for a lower rate. If you collect enough data across workspaces in a single region, you can link them to a dedicated cluster and combine their collected volume using cluster pricing.

See Azure Monitor Logs cost calculations and options for details on commitment tiers and guidance on determining which is most appropriate for your level of usage. See Usage and estimated costs to view estimated costs for your usage at different pricing tiers.
Configure interactive and long-term data retention. There's a charge for retaining data in a Log Analytics workspace beyond the default of 31 days (90 days if Sentinel is enabled on the workspace and 90 days for Application Insights data). Consider your particular requirements for having data readily available for log queries. You can significantly reduce your cost by configuring long-term retention, which allows you to retain data for up to twelve years and still access it occasionally using search jobs or restoring a set of data to the workspace.
Configure tables used for debugging, troubleshooting, and auditing as Basic Logs. Tables in a Log Analytics workspace configured for Basic Logs have a lower ingestion cost in exchange for limited features and a charge for log queries. If you query these tables infrequently and don't use them for alerting, this query cost can be more than offset by the reduced ingestion cost (a table configuration sketch follows these recommendations).
Limit data collection from data sources for the workspace. The primary factor for the cost of Azure Monitor is the amount of data that you collect in your Log Analytics workspace, so you should ensure that you collect no more data than you require to assess the health and performance of your services and applications. See Design a Log Analytics workspace architecture for details on making this decision for your environment, balancing it with criteria in other pillars.

Tradeoff: There might be a tradeoff between cost and your monitoring requirements. For example, you might be able to detect a performance issue more quickly with a high sample rate, but you might want a lower sample rate to save costs. Most environments have multiple data sources with different types of collection, so you need to balance your particular requirements with your cost targets for each. See Cost optimization in Azure Monitor for recommendations on configuring collection for different data sources.
Regularly analyze collected data to identify trends and anomalies. Use Log Analytics workspace insights to periodically review the amount of data collected in your workspace. In addition to helping you understand the amount of data collected by different sources, it identifies anomalies and upward trends in data collection that could result in excess cost. Further analyze data collection using methods in Analyze usage in Log Analytics workspace to determine whether there's additional configuration that can decrease your usage further (see the usage query sketch after these recommendations). This analysis is particularly important when you add a new set of data sources, such as a new set of virtual machines, or when you onboard a new service.
Create an alert when data collection is high. To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
Consider a daily cap as a preventative measure to ensure that you don't exceed a particular budget. A daily cap disables data collection in a Log Analytics workspace for the rest of the day after your configured limit is reached. This shouldn't be used as a method to reduce costs as described in When to use a daily cap.

If you do set a daily cap, in addition to creating an alert when the cap is reached, ensure that you also create an alert rule to be notified when some percentage has been reached (90% for example). This gives you an opportunity to investigate and address the cause of the increased data before the cap shuts off data collection.
Set up alerts on Azure Advisor cost recommendations for Log Analytics workspaces. Azure Advisor recommendations for Log Analytics workspaces proactively alert you when there's an opportunity to optimize your costs. Create Azure Advisor alerts for these cost recommendations:
  • Consider configuring the cost-effective Basic logs plan on selected tables - We've identified ingestion of more than 1 GB per month to tables that are eligible for the low-cost Basic log data plan. The Basic log plan gives you query capabilities for debugging and troubleshooting at a lower cost.
  • Consider changing pricing tier - Based on your current usage volume, investigate changing your pricing (Commitment) tier to receive a discount and reduce costs.
  • Consider removing unused restored tables - You have one or more tables with restored data active in your workspace. If you're no longer using the restored data, delete the table to avoid unnecessary charges.
  • Data ingestion anomaly was detected - We've identified a much higher ingestion rate over the past week, based on your ingestion in the three previous weeks. Take note of this change and the expected change in your costs.
You can also view automatically generated recommendations by selecting Overview > Recommendations or Advisor recommendations from your Log Analytics workspace resource menu.
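
A concrete illustration of the Basic Logs and long-term retention recommendations above: the following Python sketch updates a single table through the Log Analytics Tables ARM API. It assumes the azure-identity and requests packages; the subscription, resource group, workspace, table name, and api-version are placeholders to confirm against the current Tables API reference.

```python
# A sketch, not a definitive implementation: switch a troubleshooting table to
# the Basic plan and extend its long-term retention. All IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

table_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
    "/tables/AppTraces"  # example of a table eligible for Basic Logs
)

body = {
    "properties": {
        "plan": "Basic",              # lower ingestion cost, small per-query charge
        "totalRetentionInDays": 365,  # data beyond interactive retention moves to long-term retention
    }
}

response = requests.patch(
    f"https://management.azure.com{table_id}",
    params={"api-version": "2022-10-01"},  # assumed api-version; verify against the Tables API reference
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
response.raise_for_status()
```

To see where your billable ingestion is going, a query against the Usage table, along the lines of the queries in Analyze usage in Log Analytics workspace, can be run from Python with the azure-monitor-query package; the workspace GUID is a placeholder.

```python
# A minimal sketch: billable data volume per table over the last 30 days.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Usage
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000 by DataType
| sort by BillableGB desc
"""

response = client.query_workspace(
    workspace_id="<workspace-guid>",  # placeholder
    query=query,
    timespan=timedelta(days=30),
)

for table in response.tables:
    for row in table.rows:
        print(f"{row[0]}: {row[1]:.2f} GB")
```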

Azure resources

Design checklist

  • Collect only critical resource log data from Azure resources.

Configuration recommendations

Recommendation Benefit
Collect only critical resource log data from Azure resources. When you create diagnostic settings to send resource logs for your Azure resources to a Log Analytics workspace, specify only those categories that you require (see the sketch after these recommendations). Since diagnostic settings don't allow granular filtering of resource logs, you can use a workspace transformation to filter unneeded data for those resources that use a supported table. See Diagnostic settings in Azure Monitor for details on how to configure diagnostic settings and on using transformations to filter their data.
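
The following Python sketch, assuming the azure-mgmt-monitor and azure-identity packages, creates a diagnostic setting that forwards only a single required log category to a workspace. The resource IDs, setting name, and category are hypothetical placeholders for your own resources.

```python
# A minimal sketch: a diagnostic setting with only the log categories you need.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

resource_uri = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"  # example resource
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client.diagnostic_settings.create_or_update(
    resource_uri=resource_uri,
    name="send-critical-logs",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        # Enable only the categories you actually query or alert on.
        logs=[LogSettings(category="AuditEvent", enabled=True)],
    ),
)
```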

Alerts

Design checklist

  • Activity log alerts, service health alerts, and resource health alerts are free of charge.
  • When using log search alerts, minimize log search alert frequency.
  • When using metric alerts, minimize the number of resources being monitored.

Configuration recommendations

Recommendation Benefit
Keep in mind that activity log alerts, service health alerts, and resource health alerts are free of charge. Activity log alerts, service health alerts, and resource health alerts are free. If what you want to monitor can be achieved with these alert types, use them.
When using log search alerts, minimize log search alert frequency. When configuring log search alerts, keep in mind that the more frequent the rule evaluation, the higher the cost. Configure your rules with the least frequent evaluation that still meets your detection requirements (see the rule sketch after these recommendations).
When using metric alerts, minimize the number of resources being monitored. Some resource types support metric alert rules that can monitor multiple resources of the same type. For these resource types, keep in mind that the rule can become expensive if it monitors many resources. To reduce costs, you can either reduce the scope of the metric alert rule or use log search alert rules, which are less expensive when monitoring a large number of resources.
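
As a sketch of a log search alert rule with a modest evaluation frequency, which also serves the earlier recommendation to be notified when data collection is high, the following Python code issues a PUT to the scheduledQueryRules ARM API. The query, threshold, location, resource IDs, and api-version are assumptions to validate against the current API reference.

```python
# A sketch of a log search alert rule evaluated every 15 minutes instead of
# every minute. All IDs, the threshold, and the api-version are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

rule_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Insights/scheduledQueryRules/high-ingestion-alert"
)

body = {
    "location": "eastus",  # placeholder region
    "properties": {
        "severity": 2,
        "enabled": True,
        "evaluationFrequency": "PT15M",  # less frequent evaluation keeps the rule cheaper
        "windowSize": "PT1H",
        "scopes": ["<log-analytics-workspace-resource-id>"],
        "criteria": {
            "allOf": [
                {
                    "query": "Usage | where IsBillable == true | summarize IngestedGB = sum(Quantity) / 1000",
                    "metricMeasureColumn": "IngestedGB",
                    "timeAggregation": "Total",
                    "operator": "GreaterThan",
                    "threshold": 50,  # example: alert above 50 GB in the window
                }
            ]
        },
        "actions": {"actionGroups": ["<action-group-resource-id>"]},
    },
}

response = requests.put(
    f"https://management.azure.com{rule_id}",
    params={"api-version": "2021-08-01"},  # assumed api-version
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
response.raise_for_status()
```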

Virtual machines

Design checklist

  • Migrate from Log Analytics agent to Azure Monitor agent for granular data filtering.
  • Filter data that you don't require from agents.
  • Determine whether you'll use VM insights and what data to collect.
  • Reduce polling frequency of performance counters.
  • Ensure that VMs aren't sending duplicate data.
  • Use Log Analytics workspace insights to analyze billable costs and identify cost saving opportunities.
  • Migrate your SCOM environment to Azure Monitor SCOM Managed Instance.

Configuration recommendations

Recommendation Description
Migrate from Log Analytics agent to Azure Monitor agent for granular data filtering. If you still have VMs with the Log Analytics agent, migrate them to Azure Monitor agent so you can take advantage of better data filtering and use unique configurations with different sets of VMs. Configuration for data collection by the Log Analytics agent is done on the workspace, so all agents receive the same configuration. Data collection rules used by Azure Monitor agent can be tuned to the specific monitoring requirements of different sets of VMs. The Azure Monitor agent also allows you to use transformations to filter data being collected.
Filter data that you don't require from agents. Reduce your data ingestion costs by filtering data that you don't use for alerting or analysis. See Monitor virtual machines with Azure Monitor: Collect data for guidance on data to collect for different monitoring scenarios and Control costs for specific guidance on filtering data to reduce your costs.
Determine what data to collect with VM insights. VM insights is a great feature to quickly get started with monitoring your VMs and provides powerful features such as Map and performance trend views. If you don't use the Map feature or the data that it collects, then you should disable collection of processes and dependency data in your VM insights configuration to save on data ingestion costs.
Reduce polling frequency of performance counters. If you're using a data collection rule to send performance data to your Log Analytics workspace, you can reduce the polling frequency of the counters to reduce the amount of data collected (see the data collection rule sketch after these recommendations).
Ensure that VMs aren't sending duplicate data. If you multi-home agents or create similar data collection rules, make sure that you're sending unique data to each workspace. See Analyze usage in Log Analytics workspace for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you migrate to the Azure Monitor agent, rather than using both together, unless you can ensure that each is collecting unique data.
Use Log Analytics workspace insights to analyze billable costs and identify cost saving opportunities. Log Analytics workspace insights shows you the billable data collected in each table and from each VM. Use this information to identify your top machines and tables since they represent your best opportunity to reduce costs by filtering data. Use this insight and log queries in Analyze usage in Log Analytics workspace to further analyze the effects of configuration changes.
Migrate your SCOM environment to Azure Monitor SCOM Managed Instance. Migrate your existing SCOM environment to Azure Monitor SCOM Managed Instance to support any management packs that can't be replaced by Azure Monitor. SCOM Managed Instance removes the requirement to maintain local management servers and database servers, reducing the overall cost of maintaining your SCOM infrastructure.
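
The dict below sketches, in Python, the cost-relevant parts of an Azure Monitor agent data collection rule: a reduced performance-counter sampling frequency and an ingestion-time transformation that keeps only the rows you chart or alert on. Counter names, the workspace ID, and the transformation are illustrative; deploy the rule with your usual ARM, Bicep, or SDK tooling.

```python
# A sketch of a data collection rule payload. Sampling frequency and the
# transformation are examples; adjust them to your own monitoring requirements.
data_collection_rule = {
    "location": "eastus",  # placeholder region
    "properties": {
        "dataSources": {
            "performanceCounters": [
                {
                    "name": "perfCounters",
                    "streams": ["Microsoft-Perf"],
                    "samplingFrequencyInSeconds": 300,  # every 5 minutes instead of every minute
                    "counterSpecifiers": [
                        "\\Processor Information(_Total)\\% Processor Time",
                        "\\Memory\\Available Bytes",
                    ],
                }
            ]
        },
        "destinations": {
            "logAnalytics": [
                {
                    "name": "laDest",
                    "workspaceResourceId": "<log-analytics-workspace-resource-id>",
                }
            ]
        },
        "dataFlows": [
            {
                "streams": ["Microsoft-Perf"],
                "destinations": ["laDest"],
                # Ingestion-time transformation that drops counters you don't need.
                "transformKql": "source | where ObjectName in ('Processor Information', 'Memory')",
            }
        ],
    },
}
```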

Containers

Design checklist

  • Enable collection of metrics through the Azure Monitor managed service for Prometheus.
  • Configure agent collection to modify data collection in Container insights.
  • Modify settings for collection of metric data by Container insights.
  • Disable Container insights collection of metric data if you don't use the Container insights experience in the Azure portal.
  • If you don't query the container logs table regularly or use it for alerts, configure it as basic logs.
  • Limit collection of resource logs you don't need.
  • Use resource-specific logging for AKS resource logs and configure tables as basic logs.
  • Use OpenCost to collect details about your Kubernetes costs.

Configuration recommendations

Recommendation Benefit
Enable collection of metrics through the Azure Monitor managed service for Prometheus. Be sure you don't also send Prometheus metrics to a Log Analytics workspace. You can use Azure Monitor managed service for Prometheus to scrape Prometheus metrics from your cluster by enabling Managed Prometheus. You can also configure Container insights to collect Prometheus metrics in your Log Analytics workspace, but this isn't recommended because it's redundant with the data in Managed Prometheus and results in additional cost. For details, see Managed Prometheus pricing.
Configure agent collection to modify data collection in Container insights. Analyze the data collected by Container insights as described in Optimize monitoring costs for Container insights and adjust your configuration to stop collection of data you don't need.
Modify settings for collection of metric data by Container insights. See Enable cost optimization settings for details on modifying both the frequency that metric data is collected and the namespaces that are collected by Container insights (see the data collection settings sketch after these recommendations).
Disable Container insights collection of metric data if you don't use the Container insights experience in the Azure portal. Container insights collects many of the same metric values as Managed Prometheus. You can disable collection of these metrics by configuring Container insights to only collect Logs and events as described in Enable cost optimization settings in Container insights. This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights.
If you don't query the container logs table regularly or use it for alerts, configure it as basic logs. Convert your Container insights schema to ContainerLogV2, which is compatible with Basic logs and can provide significant cost savings as described in Optimize monitoring costs for Container insights.
Limit collection of resource logs you don't need. Control plane logs for AKS clusters are implemented as resource logs in Azure Monitor. Create a diagnostic setting to send this data to a Log Analytics workspace. See Collect control plane logs for AKS clusters for recommendations on which categories you should collect.
Use resource-specific logging for AKS resource logs and configure tables as basic logs. AKS supports either Azure diagnostics mode or resource-specific mode for resource logs. Use resource-specific mode so you can configure the resulting tables as basic logs, which provide a reduced ingestion charge for logs that you only occasionally query and don't use for alerting.
Use OpenCost to collect details about your Kubernetes costs. OpenCost is an open-source, vendor-neutral CNCF sandbox project for understanding your Kubernetes costs and supports cost visibility for AKS. It exports detailed costing data, in addition to customer-specific Azure pricing, to Azure storage to assist the cluster administrator in analyzing and categorizing costs.
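
The fragment below is a Python dict sketching the Container insights data source of a data collection rule with a longer collection interval, namespace filtering, and ContainerLogV2 enabled. The property names follow the cost optimization templates as I understand them and should be verified against Enable cost optimization settings; the interval and namespace list are examples only.

```python
# A sketch of the Container insights extension data source in a data collection
# rule. Verify field names and allowed values against the current documentation.
container_insights_data_source = {
    "extensions": [
        {
            "name": "ContainerInsightsExtension",
            "extensionName": "ContainerInsights",
            "streams": [
                "Microsoft-ContainerLogV2",
                "Microsoft-KubeEvents",
                "Microsoft-KubePodInventory",
            ],
            "extensionSettings": {
                "dataCollectionSettings": {
                    "interval": "5m",                     # collect every 5 minutes instead of every minute
                    "namespaceFilteringMode": "Exclude",  # drop data from the namespaces listed below
                    "namespaces": ["kube-system"],        # example namespace to exclude
                    "enableContainerLogV2": True,         # ContainerLogV2 schema, compatible with Basic logs
                }
            },
        }
    ]
}
```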

Application Insights

Design checklist

  • Change to workspace-based Application Insights.
  • Use sampling to tune the amount of data collected.
  • Limit the number of Ajax calls.
  • Disable unneeded modules.
  • Preaggregate metrics from any calls to TrackMetric.
  • Limit the use of custom metrics where possible.
  • Ensure use of updated software development kits (SDKs).
  • Limit unwanted host trace and general trace logging using log levels.

Configuration recommendations

Recommendation Benefit
Change to workspace-based Application Insights. Ensure that your Application Insights resources are workspace-based. Workspace-based Application Insights resources can take advantage of cost-saving tools such as Basic Logs, commitment tiers, retention by data type, and long-term retention.
Use sampling to tune the amount of data collected. Sampling is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry sent from your applications with minimal distortion of metrics (see the sketch after these recommendations).
Limit the number of Ajax calls. Limit the number of Ajax calls that can be reported in every page view or disable Ajax reporting. If you disable Ajax reporting, you also disable JavaScript correlation.
Disable unneeded modules. Edit ApplicationInsights.config to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
Preaggregate metrics from any calls to TrackMetric. If you put calls to TrackMetric in your application, you can reduce traffic by using the overload that accepts your calculation of the average and standard deviation of a batch of measurements. Alternatively, you can use a preaggregating package.
Limit the use of custom metrics. The Application Insights option to Enable alerting on custom metric dimensions can increase costs. Using this option can result in the creation of more preaggregation metrics.
Ensure use of updated software development kits (SDKs). Earlier versions of the ASP.NET Core SDK and Worker Service SDK collect many counters by default, which were collected as custom metrics. Use later versions to specify only required counters.
Limit unwanted trace logging. Application Insights has several possible log sources. Use log levels to tune and reduce trace log telemetry. Logging can also apply to the host. For example, customers using Azure Kubernetes Service (AKS) should adjust control plane and data plane logs. Similarly, customers using Azure Functions should adapt log levels and scope to optimize log volume and costs.
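
A minimal Python sketch of the sampling and log-level recommendations above, assuming the azure-monitor-opentelemetry distro. The connection string and logger name are placeholders, and sampling_ratio applies fixed-rate sampling to request and dependency telemetry.

```python
# A sketch: reduce Application Insights volume with sampling and log levels.
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string="InstrumentationKey=<key>;IngestionEndpoint=<endpoint>",  # placeholder
    sampling_ratio=0.2,  # keep roughly 20% of request and dependency telemetry
)

# Raise the level on a chatty logger so only warnings and errors become trace telemetry.
logging.getLogger("my_app.noisy_module").setLevel(logging.WARNING)
```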

Next step