Power BI implementation planning: Data-level auditing
Note
This article forms part of the Power BI implementation planning series of articles. This series focuses primarily on the Power BI experience within Microsoft Fabric. For an introduction to the series, see Power BI implementation planning.
This data-level auditing article is targeted at multiple audiences, including the administrators who oversee Power BI in the organization and the data creators and owners who are responsible for semantic models, dataflows, and datamarts.
Important
At times this article refers to Power BI Premium or its capacity subscriptions (P SKUs). Be aware that Microsoft is currently consolidating purchase options and retiring the Power BI Premium per capacity SKUs. New and existing customers should consider purchasing Fabric capacity subscriptions (F SKUs) instead.
For more information, see Important update coming to Power BI Premium licensing and Power BI Premium FAQ.
The concepts covered in this article apply primarily to solutions created for three content delivery scopes, specifically enterprise BI, departmental BI, and team BI. Creators of personal BI solutions might find the information in this article useful as well; however, they're not the primary target.
Achieving good performance in reports and visuals isn't possible when the underlying semantic model or data source isn't performing well. This article focuses on auditing and monitoring of semantic models, dataflows, and datamarts. It's the second article in the auditing and monitoring series because the tools and techniques are more complex than what's described in the Report-level auditing article. Ideally, you create shared semantic models (intended for reuse among many reports) before users create reports. Therefore, we recommend that you read this article together with the Report-level auditing article.
Because Power BI semantic models are built upon the Analysis Services tabular engine, you can connect to a local data model (in Power BI Desktop) or a Premium semantic model (in the Power BI service) as if it's an Analysis Services database. Therefore, many of the auditing and monitoring capabilities of Analysis Services are supported for Power BI Premium semantic models.
Note
For more information about models hosted in Analysis Services, see Monitoring overview.
The remainder of this article primarily focuses on models published to the Power BI service.
Over time, data creators and owners might experience situations with their semantic models. For example, a semantic model can grow in size and complexity, serve a larger number of users and reports, or begin to perform more slowly.
To ensure usability, good performance, and adoption of the content they create, you should audit the usage and performance of the data assets you're responsible for managing. You can use the dataset event logs, which capture user-generated and system-generated activities that occur for a semantic model. They're also referred to as trace events, dataset logs, or dataset activity logs. System administrators often call them low-level trace events because they're detailed.
Note
The dataset name change has been rolled out in the Power BI service and in documentation, though there might be some instances—like with event log operation names—where the change hasn't occurred yet.
You should analyze semantic model trace events to understand how your semantic models are used and how they perform, and to troubleshoot and optimize them.
The events generated by a Power BI semantic model are derived from existing diagnostic logs available for Azure Analysis Services. There are many types of trace events that you can capture and analyze, which are described in the following sections.
Azure Log Analytics is a component of the Azure Monitor service. Azure Log Analytics integration with Power BI allows you to capture semantic model events from all semantic models in a Power BI workspace. It's supported only for Premium workspaces. After you set up integration and the connection is enabled (for a Power BI Premium workspace), semantic model events are automatically captured and continually sent to an Azure Log Analytics workspace. The semantic model logs are stored in Azure Data Explorer, which is an append-only database that's optimized for capturing high-volume, near-real-time telemetry data.
You assign a Power BI Premium workspace to a Log Analytics workspace in Azure. You must create a new Log Analytics resource in your Azure subscription to enable this type of logging.
Logs from one or more Power BI workspaces are sent to a target Log Analytics workspace. You can consolidate logs from many Power BI workspaces into a single Log Analytics workspace, or keep them separate (for example, per subject area or per Power BI workspace), depending on how you want to organize and secure the data.
Tip
Thoroughly review the documentation and frequently asked questions on this functionality so that you're clear on what's possible and that you understand the technical requirements. Before making this functionality broadly available to workspace administrators in your organization, consider doing a technical proof of concept (POC) with one Power BI workspace.
Important
Although the names are similar, the data captured by Azure Log Analytics isn't the same as the Power BI activity log. Azure Log Analytics captures detail-level trace events from the Analysis Services engine. Its sole purpose is to help you analyze and troubleshoot semantic model performance. Its scope is at the workspace level. Conversely, the purpose of the activity log is to help you understand how often certain user activities occur (such as editing a report, refreshing a semantic model, or creating an app). Its scope is the entire Power BI tenant.
For more information about the user activities you can audit for your Power BI tenant, see Tenant-level auditing.
The Azure Log Analytics connection for workspace administrators tenant setting controls which groups of users (who also have the necessary workspace admin role) can connect a Power BI workspace to an existing Azure Log Analytics workspace.
Before you can set up integration, you must meet security prerequisites. Therefore, consider enabling the Power BI tenant setting only for Power BI workspace administrators who also have the required permissions in Azure Log Analytics, or who can obtain those permissions upon request.
Tip
Collaborate with your Azure administrator early in the planning process, especially when getting approval to create a new Azure resource is a challenge in your organization. You'll also need to plan for the security prerequisites. Decide whether to grant permission to your Power BI workspace administrator in Azure, or whether to grant permission to the Azure administrator in Power BI.
The semantic model logs captured by Azure Log Analytics include the semantic model queries, query statistics, detailed refresh activity, CPU time consumed on Premium capacities, and more. Because they're detail-level logs from the Analysis Services engine, the data can be verbose. Large data volumes are common for large workspaces that experience high semantic model activity.
To optimize cost when using Azure Log Analytics with Power BI, consider connecting only the workspaces you need to actively monitor or troubleshoot, setting an appropriate data retention period, and reviewing the Log Analytics pricing tiers.
There are several ways to access the events that are sent to Azure Log Analytics. You can use a prebuilt template app, connect from Power BI Desktop with the Azure Data Explorer connector (in import or DirectQuery mode), or write Kusto Query Language (KQL) queries directly in the Azure Data Explorer web UI.
Tip
Because there's a high volume of semantic model trace events, we recommend that you develop a DirectQuery model to analyze the data. A DirectQuery model allows you to query the data in near-real time. The events usually arrive within five minutes.
For more information, see Govern Azure connections.
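If you prefer to explore the raw events with KQL, here's a minimal sketch run from PowerShell with the Az.OperationalInsights module. It assumes a placeholder workspace ID and that your semantic model events land in the PowerBIDatasetsWorkspace table; the table and column names reflect the documented integration schema but might differ in your environment.

```powershell
# Requires the Az.OperationalInsights module and an authenticated Azure session
Connect-AzAccount

# Placeholder for your Log Analytics workspace (customer) ID
$workspaceId = '00000000-0000-0000-0000-000000000000'

# Find the slowest semantic model queries over the last day; table and
# column names are based on the documented PowerBIDatasetsWorkspace schema
$kql = @'
PowerBIDatasetsWorkspace
| where TimeGenerated > ago(1d)
| where OperationName == "QueryEnd"
| project TimeGenerated, ArtifactName, DurationMs, CpuTimeMs, EventText
| top 20 by DurationMs desc
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table
```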
Checklist - When planning to use Azure Log Analytics, key decisions and actions include:
You can use SQL Server Profiler (SQL Profiler) to capture Power BI semantic model events. It's a component of SQL Server Management Studio (SSMS). Connectivity to a Power BI semantic model is supported with SSMS because it's based on the Analysis Services architecture that originated in SQL Server.
You can use SQL Profiler during different stages of the lifecycle of a semantic model.
It's also possible to use SQL Profiler as an external tool within DAX Studio. You can use DAX Studio to start a profiler trace, parse the data, and format the results. Data modelers who use DAX Studio often prefer this approach over using SQL Profiler directly.
Note
Using SQL Profiler is a different use case from the activity of profiling data. You profile data in the Power Query Editor to gain a deeper understanding of its characteristics. While data profiling is an important activity for data modelers, it's not in scope for this article.
Consider using SQL Profiler instead of Azure Log Analytics when you only need to capture events for a short, targeted troubleshooting session, when the workspace isn't hosted on Premium capacity, or when you want to capture events for a local data model in Power BI Desktop.
Like Azure Log Analytics (described earlier in this article), semantic model events captured by SQL Profiler are derived from existing diagnostic logs available for Azure Analysis Services. However, there are some differences in the events that are available.
Tip
The use of SQL Profiler for monitoring Analysis Services is covered in many books, articles, and blog posts. Most of that information is relevant for monitoring a Power BI semantic model.
Important
You can also use SQL Profiler to monitor queries sent from the Power BI service to the underlying data sources (for example, to a SQL Server relational database). However, the capability to trace a relational database is deprecated. Connecting to the Analysis Services engine is supported and not deprecated. If you're familiar with Analysis Services extended events and you prefer to use them, connectivity from SSMS is possible for a data model in Power BI Desktop. However, it's not supported for Power BI Premium. Therefore, this section focuses only on standard SQL Profiler connectivity.
The Allow XMLA endpoints and Analyze in Excel with on-premises semantic models tenant setting controls which groups of users (who are also assigned to the Contributor, Member, or Admin workspace role, or the Build permission for the individual semantic model) can use the XMLA endpoint to query and/or maintain semantic models in the Power BI service. For more information about using the XMLA endpoint, see the advanced data model management usage scenario.
Note
You can also use SQL Profiler to help debug and troubleshoot specific DAX expressions. You can connect SQL Profiler to Power BI Desktop as an external tool. Look for the DAX Evaluation Log event class to view intermediary results of a DAX expression. That event is generated when you use the EVALUATEANDLOG DAX function in a model calculation.
This function is only intended for development and test purposes. You should remove it from your data model calculations before publishing the data model to a production workspace.
Checklist - When planning to use SQL Profiler, key decisions and actions include:
Because Power BI semantic models are built upon the Analysis Services engine, you have access to the tools that can query the metadata of a data model. Metadata includes everything about the data model, including table names, column names, and measure expressions.
The Analysis Services Dynamic Management Views (DMVs) can query the data model metadata. You can use the DMVs to audit, document, and optimize your data models at a point in time. For example, you can list every measure and its expression, or examine the size of tables and columns.
Tip
The DMVs retrieve information about the current state of a semantic model. Think of the data returned by DMVs as a snapshot of what's occurring at a point in time. Conversely, the semantic model event logs (described earlier in this article) retrieve information about what activities occurred for a semantic model while a trace connection was active.
SSMS is a tool commonly used to run DMV queries. You can also use the Invoke-ASCmd PowerShell cmdlet to create and execute XMLA scripts that query the DMVs.
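As an example, here's a sketch that uses Invoke-ASCmd (from the SqlServer module) to run a DMV query against a Premium semantic model through the XMLA endpoint. The workspace and model names are placeholders, and authentication follows whatever sign-in method your SqlServer module setup supports.

```powershell
# Requires the SqlServer module
Import-Module SqlServer

# Placeholder XMLA endpoint and semantic model name
$server   = 'powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics'
$database = 'Sales Model'

# List every measure and its DAX expression from the model metadata
$dmvQuery = 'SELECT MEASURE_NAME, EXPRESSION FROM $SYSTEM.MDSCHEMA_MEASURES'

# The result is returned as an XML string that you can parse or save
Invoke-ASCmd -Server $server -Database $database -Query $dmvQuery
```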
Third-party tools and external tools are also popular with the Power BI community. These tools use the publicly documented DMVs to simplify access and to work with the data that the DMVs return. One example is DAX Studio, which includes explicit functionality to access the DMVs. DAX Studio also includes a built-in View Metrics feature, commonly known as VertiPaq Analyzer. VertiPaq Analyzer has a user interface for analyzing the structure and size of tables, columns, relationships, and partitions in a data model. You can also export the data model metadata to a .vpax file (or import an existing one). The exported file contains only metadata about the data model structure and size; it doesn't store any model data.
Tip
Consider sharing a .vpax file with someone when you need assistance with a data model. That way, you won't share the model data with that person.
You can use DMV queries during different stages of the lifecycle of a semantic model.
Tip
If you decide to write your own DMV queries (for example, in SSMS), be aware that the DMVs don't support all SQL operations. Also, some DMVs aren't supported in Power BI (because they require Analysis Services server administrator permissions that aren't supported by Power BI).
The Allow XMLA endpoints and Analyze in Excel with on-premises semantic models tenant setting controls which groups of users (who are also assigned to the Contributor, Member, or Admin workspace role, or the Build permission for the individual semantic model) can use the XMLA endpoint to query and/or maintain semantic models in the Power BI service.
For more information about using the XMLA endpoint, third-party tools, and external tools, see the advanced data model management usage scenario.
Best Practice Analyzer (BPA) is a feature of Tabular Editor, a third-party tool that has achieved widespread adoption by the Power BI community. BPA includes a set of customizable rules that can help you audit the quality, consistency, and performance of your data model.
Tip
To set up BPA, download the set of best practice rules, which are provided by Microsoft on GitHub.
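As a sketch, the download can be scripted with PowerShell. The GitHub path and the local file location that Tabular Editor reads rules from are assumptions based on current conventions; verify them against the BPA documentation.

```powershell
# Assumed location of Microsoft's best practice rules in the Analysis-Services GitHub repo
$rulesUrl = 'https://raw.githubusercontent.com/microsoft/Analysis-Services/master/BestPracticeRules/BPARules.json'

# Assumed path that Tabular Editor scans for local BPA rule definitions
$target = Join-Path $env:LOCALAPPDATA 'TabularEditor\BPARules.json'

New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
Invoke-WebRequest -Uri $rulesUrl -OutFile $target
```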
Primarily, BPA can help you improve the consistency of models by detecting suboptimal design decisions that can cause performance issues. It's helpful when you have self-service data modelers distributed throughout different areas of the organization.
BPA can also help you audit and govern your data models. For example, you can verify whether a data model includes any row-level security (RLS) roles. Or, you can validate whether all model objects have a description. That's helpful when, for example, your goal is to ensure that a data model includes a data dictionary.
BPA can expose design issues that help the Center of Excellence determine whether more training or documentation is necessary. The Center of Excellence can then take action to educate data creators on best practices and organizational guidelines.
Tip
Keep in mind that BPA can detect the existence of a characteristic (such as row-level security). However, it might be difficult to determine whether it's set up correctly. For that reason, a subject matter expert might need to conduct a review. Conversely, the absence of a particular characteristic doesn't necessarily mean a bad design; the data modeler might have a good reason for producing a particular design.
Checklist - When planning to access metadata for data models, key decisions and actions include:
Power BI Desktop includes several tools that help data creators troubleshoot and investigate their data models. These capabilities are targeted at data modelers who want to validate their data model, and do performance tuning before publishing to the Power BI service.
Use Performance Analyzer, which is available in Power BI Desktop, to audit and investigate performance of a data model. Performance Analyzer helps report creators measure the performance of individual report elements. Commonly, however, the root cause of performance issues is related to data model design. For this reason, a semantic model creator can benefit from using Performance Analyzer too. If there are different content creators responsible for creating reports versus semantic models, it's likely that they'll need to collaborate when troubleshooting a performance issue.
Tip
You can use DAX Studio to import and analyze the log files generated by Performance Analyzer.
For more information about Performance Analyzer, see Report-level auditing.
Use Query Diagnostics, which is available in Power BI Desktop, to investigate the performance of Power Query. It's useful for troubleshooting, and for when you need to understand what the Power Query engine is doing. The information you can gain from Query Diagnostics includes which queries were evaluated, how long each evaluation took, and what the engine did at each step. Depending on what you're looking for, you can enable one or all of the logs: aggregated, detailed, performance counters, and data privacy partitions.
You can start session diagnostics in Power Query Editor. Once enabled, query and refresh operations are collected until diagnostic tracing is stopped. The data is populated directly in the query editor as soon as the diagnostics are stopped. Power Query creates a Diagnostics group (folder), and adds several queries to it. You can then use standard Power Query functionality to view and analyze the diagnostics data.
Alternatively, you can enable a trace in Power BI Desktop in the Diagnostics section of the Options window. Log files are saved to a folder on your local machine. These log files are populated with the data after you close Power BI Desktop, at which time the trace is stopped. Once Power BI Desktop is closed, you can open the log files with your preferred program (such as a text editor) to view them.
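For instance, a quick way to find the most recent trace files is shown below. The folder path is an assumption that matches the default (downloaded) install of Power BI Desktop; the Microsoft Store version uses a different location, so use the Open crash dump/traces folder link in the Options window if the path doesn't match.

```powershell
# Assumed default trace folder for the downloaded (non-Store) version of Power BI Desktop
$traceFolder = Join-Path $env:LOCALAPPDATA 'Microsoft\Power BI Desktop\Traces'

# List the newest trace files first
Get-ChildItem -Path $traceFolder |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 5 Name, LastWriteTime
```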
Power Query supports various capabilities to help you understand query evaluation, including the query plan. It can also help you determine whether query folding is occurring for an entire query, or for a subset of steps in a query. Query folding is one of the most important aspects of performance tuning. It's also helpful to review the native queries sent by Power Query when you're monitoring a data source, which is described later in this article.
When troubleshooting, it can be helpful to collaborate with your Power BI Premium capacity administrator. The capacity administrator has access to the Power BI Premium utilization and metrics app. This app can provide you with a wealth of information about activities that occur in the capacity. That information can help you troubleshoot semantic model issues.
Tip
Your Premium capacity administrator can grant access to additional users (non-capacity administrators) to allow them to access the Premium metrics app.
The Premium metrics app comprises an internal semantic model and an initial set of reports. It helps you perform near-real-time monitoring of a Power BI Premium capacity (P SKU) or Power BI Embedded (A SKU) capacity. It includes data for the last two to four weeks (depending on the metric).
Use the Premium metrics app to troubleshoot and optimize semantic models. For example, you can identify semantic models that have a large memory footprint or that experience routinely high CPU usage. It's also a useful tool to find semantic models that are approaching the limit of your capacity size.
Checklist - When considering approaches to use for monitoring data model and query performance, key decisions and actions include:
Sometimes it's necessary to directly monitor a specific data source that Power BI connects to. For example, you might have a data warehouse that's experiencing an increased workload, and users are reporting performance degradation. Typically, a database administrator or system administrator monitors data sources.
You can monitor a data source to understand the volume and nature of the queries that Power BI sends to it, to identify queries that perform poorly, and to verify that query folding is occurring as expected.
There are many actions that a Power BI content creator might take once they analyze monitoring results. For example, they could redesign queries or the data model, change storage modes, or adjust the data refresh frequency. System administrators might take other actions, such as tuning the data source, adding indexes, or scaling up its resources.
The tools and techniques that you can use to monitor data sources depend on the technology platform. For example, your database administrator can use extended events or the Query Store for monitoring Azure SQL Database and SQL Server databases.
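For example, with the Query Store enabled on a SQL Server or Azure SQL database, a database administrator could identify the longest-running queries generated by the Power BI workload. This sketch uses Invoke-Sqlcmd from the SqlServer module; the server and database names are placeholders.

```powershell
# Requires the SqlServer module; server and database names are placeholders
$query = @'
SELECT TOP 10
    qt.query_sql_text,
    AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms,
    SUM(rs.count_executions)      AS executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan  AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY avg_duration_ms DESC;
'@

Invoke-Sqlcmd -ServerInstance 'sqlserver01' -Database 'SalesDW' -Query $query
```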
Sometimes, Power BI accesses a data source through a data gateway. Gateways handle connectivity from the Power BI service to certain types of data sources. However, they do more than just connect to data. A gateway includes a mashup engine that performs processing and data transformations on the machine. It also compresses and encrypts the data so that it can be efficiently and securely transmitted to the Power BI service. Therefore, an unmanaged, or non-optimized, gateway can contribute to performance bottlenecks. We recommend that you talk to your gateway administrator for help with monitoring gateways.
Tip
Your Power BI administrator can compile a full tenant inventory (which includes lineage) and access user activities in the activity log. By correlating the lineage and user activities, administrators can identify the most frequently used data sources and gateways.
For more information about the tenant inventory and the activity log, see Tenant-level auditing.
Checklist - When planning to monitor a data source, key decisions and actions include:
A data refresh operation involves importing data from one or more underlying data sources into a Power BI semantic model, dataflow, or datamart. You can schedule a data refresh operation or run it on demand.
IT commonly uses service-level agreements (SLAs) to document the expectations for data assets. For Power BI, consider using an SLA for critical content or enterprise-level content. It commonly includes when users can expect updated data in a semantic model to be available. For example, you could have an SLA that all data refreshes must complete by 7am every day.
The semantic model event logs from Azure Log Analytics or SQL Profiler (described previously in this article) include detailed information about what's happening in a semantic model. The captured events include semantic model refresh activity. The event logs are especially useful when you need to troubleshoot and investigate semantic model refreshes.
When you have content that's hosted in a Power BI Premium capacity, you have more capabilities to monitor data refresh operations.
Content creators can initiate semantic model refreshes programmatically by using enhanced refresh with the Refresh Dataset in Group Power BI REST API. When you use enhanced refresh, you can monitor the historical, current, and pending refresh operations.
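The following sketch triggers an enhanced refresh with the MicrosoftPowerBIMgmt module. The workspace and semantic model IDs are placeholders, and the body parameters shown are just one example of the options the API accepts.

```powershell
# Requires the MicrosoftPowerBIMgmt module
Connect-PowerBIServiceAccount

# Placeholder IDs for the workspace (group) and semantic model (dataset)
$workspaceId = '00000000-0000-0000-0000-000000000000'
$datasetId   = '11111111-1111-1111-1111-111111111111'

# Sending a request body makes this an enhanced refresh, whose progress
# can then be monitored through the refresh history
$body = '{ "type": "full", "commitMode": "transactional", "retryCount": 1 }'

Invoke-PowerBIRestMethod -Url "groups/$workspaceId/datasets/$datasetId/refreshes" -Method Post -Body $body
```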
Power BI administrators can monitor data refresh schedules in the tenant to determine whether there are many refresh operations scheduled concurrently during a specific timeframe (for example, between 5am and 7am, which could be a particularly busy data refresh time). Administrators have permission to access the semantic model refresh schedule metadata from the metadata scanning APIs, which are known as the scanner APIs.
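As an illustration, the scanner APIs follow an asynchronous pattern: start a scan, poll its status, and then retrieve the result. This sketch assumes a Power BI administrator account and a placeholder workspace ID; check the API reference for the exact query parameters that return the metadata you need.

```powershell
# Requires the MicrosoftPowerBIMgmt module and Power BI administrator permissions
Connect-PowerBIServiceAccount

# Placeholder for the workspace to scan
$body = '{ "workspaces": [ "00000000-0000-0000-0000-000000000000" ] }'

# Start the scan; query parameters control how much metadata is returned
$scan = Invoke-PowerBIRestMethod -Method Post -Body $body `
    -Url 'admin/workspaces/getInfo?datasetSchema=true' | ConvertFrom-Json

# Poll until the scan finishes, then retrieve the full result
do {
    Start-Sleep -Seconds 5
    $status = Invoke-PowerBIRestMethod -Method Get `
        -Url "admin/workspaces/scanStatus/$($scan.id)" | ConvertFrom-Json
} while ($status.status -in 'NotStarted', 'Running')

Invoke-PowerBIRestMethod -Method Get -Url "admin/workspaces/scanResult/$($scan.id)" | ConvertFrom-Json
```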
For critical semantic models, don't rely solely on email notifications for monitoring data refresh issues. Consider compiling the data refresh history in a centralized store where you can monitor, analyze, and act upon it.
You can retrieve data refresh history by using the refresh history for a semantic model in the Power BI service, or programmatically by using the Get Refresh History REST API.
Tip
We strongly recommend that you monitor the refresh history of your semantic models to ensure that current data is available to reports and dashboards. It also helps you to know whether SLAs are being met.
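Here's a minimal sketch of that kind of check, using the Get Refresh History API via the MicrosoftPowerBIMgmt module. The IDs and the 7 AM deadline are placeholders, and refresh timestamps are returned in UTC.

```powershell
# Requires the MicrosoftPowerBIMgmt module; IDs are placeholders
Connect-PowerBIServiceAccount
$workspaceId = '00000000-0000-0000-0000-000000000000'
$datasetId   = '11111111-1111-1111-1111-111111111111'

# Retrieve the most recent refresh entry
$history = Invoke-PowerBIRestMethod -Method Get `
    -Url "groups/$workspaceId/datasets/$datasetId/refreshes?`$top=1" | ConvertFrom-Json
$latest = $history.value[0]

# Compare the completion time (UTC) against a 7 AM SLA for today
$slaDeadline = (Get-Date).Date.AddHours(7).ToUniversalTime()
if ($latest.status -ne 'Completed' -or [datetime]$latest.endTime -gt $slaDeadline) {
    Write-Warning "Refresh missed the SLA: status=$($latest.status), endTime=$($latest.endTime)"
}
```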
Checklist - When planning for data refresh monitoring, key decisions and actions include:
You create a Power BI dataflow with Power Query Online. Many of the query performance features, and the Power Query diagnostics, which were described earlier, are applicable.
Optionally, you can set workspaces to use Azure Data Lake Storage Gen2 for dataflow storage (known as bring-your-own-storage) rather than internal storage. When you use bring-your-own-storage, consider enabling telemetry so that you can monitor metrics for the storage account. For more information, see the self-service data preparation usage scenario, and the advanced data preparation usage scenario.
You can use the Power BI REST APIs to monitor dataflow transactions. For example, use the Get Dataflow Transactions API to check the status of dataflow refreshes.
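As a sketch, you can call that API with the MicrosoftPowerBIMgmt module; the workspace and dataflow IDs are placeholders.

```powershell
# Requires the MicrosoftPowerBIMgmt module; IDs are placeholders
Connect-PowerBIServiceAccount
$workspaceId = '00000000-0000-0000-0000-000000000000'
$dataflowId  = '22222222-2222-2222-2222-222222222222'

$transactions = Invoke-PowerBIRestMethod -Method Get `
    -Url "groups/$workspaceId/dataflows/$dataflowId/transactions" | ConvertFrom-Json

# Each transaction records the status and timing of a dataflow refresh
$transactions.value | Select-Object id, refreshType, startTime, endTime, status
```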
You can track user activities for Power BI dataflows with the Power BI activity log. For more information, see Tenant-level auditing.
Tip
There are many best practices that you can adopt to optimize your dataflow designs. For more information, see Dataflows best practices.
A Power BI datamart includes several integrated components, including a dataflow, a managed database, and a semantic model. Refer to the previous sections of this article to learn about auditing and monitoring of each component.
You can track user activities for Power BI datamarts by using the Power BI activity log. For more information, see Tenant-level auditing.
In the next article in this series, learn about tenant-level auditing.