
Dataset Refresh Taking 3-4 Hours - Performance Degradation

Abhishek Tiwari 0 Reputation points
2026-03-18T10:31:58.0833333+00:00

Hi Team,

We are experiencing significant delays in dataset refreshes in the Power BI Service.

Refresh duration was previously normal, but for the last few days refreshes have been taking around 3-4 hours to complete.

Impact:

  • Business reporting is delayed
  • Dashboards are not updated on time

Please investigate whether there is a backend or capacity-related issue.

Thanks.

Windows for business

A category covering Microsoft's enterprise and professional Windows solutions


4 answers

Sort by: Most helpful
  1. Yutaka_K_JP 1,650 Reputation points
    2026-03-24T05:48:39.2533333+00:00

    For a jump like this, you can check the "merge wait" via XMLA; if it stretches, that points to capacity pressure. Shifting the refresh by 5-10 minutes and simplifying one heavy Power Query step usually resolves it.


  2. VPHAN 28,900 Reputation points Independent Advisor
    2026-03-22T07:05:04.79+00:00

    Hi Abhishek Tiwari,

    How is your issue going? Has it been resolved? If so, please consider accepting the answer so that others with the same problem can benefit too. Thank you :)

    VP


  3. VPHAN 28,900 Reputation points Independent Advisor
    2026-03-18T11:11:46.2366667+00:00

    Hi Abhishek Tiwari,

    The sudden increase in your Power BI dataset refresh duration to several hours strongly suggests a resource bottleneck rather than a structural issue. Since the performance degraded recently without major model changes, we first need to determine if your Power BI Premium or Fabric capacity is experiencing compute exhaustion. I recommend accessing the Fabric Capacity Metrics app within your tenant to review CPU and memory utilization specifically during your scheduled refresh windows. If the capacity is routinely exceeding its allocated limits, the Microsoft backend will intentionally queue and throttle refresh operations to protect overall tenant stability. Distributing your scheduled refreshes across non-peak hours can immediately alleviate this queuing behavior and restore faster processing.
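
    To make the "degraded recently" claim concrete before blaming capacity, it helps to pull the dataset's refresh history and compare durations against a baseline. As a minimal sketch, the function below takes entries shaped like the Power BI REST API "Get Refreshes" response (startTime, endTime, status); the sample timestamps and the 3x threshold are illustrative, not from this thread.

```python
# Sketch: spot a refresh-duration jump from Power BI "Get Refreshes" history.
# Entry fields (startTime, endTime, status) follow the REST API response shape;
# the sample values and the baseline_factor threshold are invented for illustration.
from datetime import datetime

def refresh_minutes(entry):
    """Duration in minutes of one refresh-history entry."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    start = datetime.strptime(entry["startTime"], fmt)
    end = datetime.strptime(entry["endTime"], fmt)
    return (end - start).total_seconds() / 60

def flag_slow(history, baseline_factor=3.0):
    """Flag completed refreshes taking over baseline_factor x the fastest run."""
    done = [e for e in history if e["status"] == "Completed"]
    durations = [refresh_minutes(e) for e in done]
    baseline = min(durations)
    return [e["startTime"] for e, d in zip(done, durations)
            if d > baseline_factor * baseline]

history = [
    {"startTime": "2026-03-10T02:00:00Z", "endTime": "2026-03-10T02:25:00Z", "status": "Completed"},
    {"startTime": "2026-03-17T02:00:00Z", "endTime": "2026-03-17T05:40:00Z", "status": "Completed"},
]
print(flag_slow(history))  # the 2026-03-17 run is ~8.8x the 25-minute baseline
```

    A run that suddenly lands far above its own historical baseline, while the model is unchanged, supports the capacity-pressure diagnosis above.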

    If your capacity metrics appear normal and throttling is not occurring, the next step is to isolate specific dataset bottlenecks using the workspace XMLA endpoint. By connecting to the workspace via SQL Server Management Studio, you can run a SQL Server Profiler trace against the Analysis Services engine while a manual refresh is active. This allows you to monitor internal processing steps in real time and see precisely which table or transformation is hanging. Often, sudden degradation is caused by a backend data source change that breaks query folding, forcing the service to pull massive amounts of raw data into memory to perform transformations locally instead of efficiently pushing the processing down to the source database.
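
    For the XMLA step above, a common pattern is to refresh one suspect table at a time with a TMSL command while the trace is running. The sketch below builds the TMSL document you would paste into an XMLA query window in SSMS; the dataset and table names ("SalesModel", "FactSales") are placeholders, not from this thread.

```python
# Sketch: build a TMSL refresh command targeting one table of a dataset,
# for use in an SSMS XMLA query window. Names are placeholders.
import json

def tmsl_refresh(database, table=None, refresh_type="full"):
    """Return a TMSL 'refresh' command for a whole database or a single table."""
    obj = {"database": database}
    if table:
        obj["table"] = table
    return {"refresh": {"type": refresh_type, "objects": [obj]}}

cmd = tmsl_refresh("SalesModel", table="FactSales")
print(json.dumps(cmd, indent=2))

# DMV you can run in the same XMLA session to watch active commands:
dmv = ("SELECT SESSION_SPID, COMMAND_ELAPSED_TIME_MS, COMMAND_TEXT "
       "FROM $SYSTEM.DISCOVER_COMMANDS")
```

    Refreshing tables individually this way usually narrows "which table or transformation is hanging" much faster than a full-model refresh.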

    If your dataset relies on on-premises data sources, the latency might be occurring at the network boundary. You should examine the server hosting the on-premises data gateway for CPU and memory exhaustion. Reviewing the gateway diagnostic logs, specifically the files located at %localappdata%\Microsoft\On-premises data gateway\Gateway.log, will help determine if a saturated network connection or overloaded server is choking data throughput to the cloud. You can also monitor the QueryExecutionReport.json file in that same directory if performance logging is enabled to track the exact execution time of the queries being sent to your local servers.
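
    When query performance logging is enabled on the gateway, ranking the slowest source queries from the execution report quickly shows whether one query dominates the refresh window. The sketch below treats the report as JSON lines; the field names ("QueryText", "QueryExecutionDuration(ms)") and the sample rows are assumptions for illustration, so check the actual columns in your gateway's report before relying on them.

```python
# Sketch: rank the slowest gateway queries from a query-execution report.
# The report is treated as JSON lines; field names are assumptions --
# inspect your gateway's actual report schema before using this as-is.
import json

SAMPLE = """\
{"QueryText": "SELECT * FROM dbo.FactSales", "QueryExecutionDuration(ms)": 185000}
{"QueryText": "SELECT * FROM dbo.DimDate", "QueryExecutionDuration(ms)": 900}
"""

def slowest_queries(report_text, top=5):
    """Return (query, duration_ms) pairs, slowest first."""
    rows = [json.loads(line) for line in report_text.splitlines() if line.strip()]
    rows.sort(key=lambda r: r["QueryExecutionDuration(ms)"], reverse=True)
    return [(r["QueryText"], r["QueryExecutionDuration(ms)"]) for r in rows[:top]]

print(slowest_queries(SAMPLE, top=1))
```

    A single query accounting for most of the elapsed time points back at the source database or a broken-folding step rather than the gateway host itself.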

    Hope this answer brought you some useful information. If it did, please hit “accept answer”. Should you have any questions, feel free to leave a comment.

    VP


  4. Q&A Assist
    2026-03-18T10:32:24.2533333+00:00
    AI-generated content: this answer was created with AI by Q&A Assist.

    Use the following steps to investigate and mitigate the slowdown in dataset refresh in Power BI Service:

    1. Check refresh history for errors, retries, and timing changes
      • In the workspace, open the semantic model (dataset) settings and review Refresh history.
      • Look for:
        • Increased duration in the Data phase compared to previous days.
        • Multiple automatic retry attempts before a success (indicates transient issues or throttling).
        • Failures or warnings in either Data or Query Cache phases.
      • Use the Show link on failed or long-running entries to see detailed error messages and where the time is spent.
    2. Verify capacity limits and possible throttling (Premium/PPU/Fabric)
      • If the dataset is on Premium/PPU/Fabric capacity, check whether many datasets are refreshing at the same time.
      • Use the refresh schedule view to see if refreshes are clustered in a peak window. If so, spread refresh times to non-peak hours.
      • Throttling symptoms:
        • Long queue times before refresh actually starts.
        • Errors like “You’ve exceeded the capacity limit for semantic model refreshes. Try again when fewer semantic models are being processed.”
      • If throttling is suspected, apply best practices:
        • Refresh during non-peak times.
        • Reduce model complexity (especially calculated tables/columns).
        • Use incremental refresh for large models.
    3. Confirm refresh limits and duration vs. capacity
      • On shared capacity:
        • Maximum of 8 scheduled refreshes per day.
        • Each refresh must complete within 2 hours; if the model is approaching this limit, performance issues or failures can appear.
      • On Premium/PPU/Fabric capacity:
        • Up to 48 scheduled refreshes per day.
        • Maximum refresh duration is 5 hours (longer only if using XMLA-based refresh).
      • If refresh duration has grown close to these limits, consider:
        • Moving the dataset to Premium if currently on shared capacity.
        • Enabling incremental refresh for large tables.
        • Reducing dataset size and complexity.
    4. Check for OneDrive/SharePoint-based models and sync timing
      • If the dataset is sourced from a PBIX/Excel/CSV on OneDrive or SharePoint Online:
        • Power BI performs a OneDrive refresh (file sync) about every hour.
        • Ensure scheduled data refresh is configured to run after the OneDrive sync window (for example, schedule dataset refresh at least 60–90 minutes after the external process updates the file).
      • If the source file was replaced (copy over) instead of updated, OneDrive item ID may have changed, causing sync or refresh issues. In that case, reconfigure the connection to the correct file.
    5. Evaluate query cache and dashboard load
      • On Premium capacity, query caching can increase the time after data load because cache must be rebuilt after each refresh.
      • Heavy dashboard usage and many tiles can add extra time in the Query Cache phase.
      • To reduce end-to-end refresh time:
        • Reduce the number of dashboards/tiles hitting the dataset.
        • Consider disabling automatic cache refresh if the dataset refreshes very frequently.
    6. Optimize the dataset and data source
      • Remove unused tables and columns.
      • Avoid expensive DAX calculations and dynamic data source definitions.
      • Move complex calculations to the data source or ETL where possible.
      • For very frequently changing or very large data, consider DirectQuery or aggregations if acceptable for performance and limitations.
    7. Ongoing monitoring and alerts
      • Regularly review refresh history to detect trends in duration increases.
      • Ensure refresh failure notifications are enabled for the dataset owner and additional contacts so issues are caught early.
      • If refresh has been paused due to inactivity, resume by opening a report/dashboard or triggering Refresh now.
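
    Step 1 above can be automated with the Power BI REST API "Get Refreshes" endpoint rather than clicking through the UI. The URL shape below matches the documented endpoint; the workspace and dataset GUIDs are placeholders, and authentication (an Azure AD bearer token) is omitted, so this sketch only builds the URL and summarizes a sample response body.

```python
# Sketch: build the "Get Refreshes" URL and summarize a sample response.
# GUIDs are placeholders; a real call needs an Azure AD bearer token.
import json

def refreshes_url(group_id, dataset_id, top=20):
    """URL for the Power BI REST API Get Refreshes endpoint."""
    return (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
            f"/datasets/{dataset_id}/refreshes?$top={top}")

def summarize(payload):
    """Count refresh outcomes (Completed/Failed/...) in a response body."""
    counts = {}
    for entry in payload["value"]:
        counts[entry["status"]] = counts.get(entry["status"], 0) + 1
    return counts

sample = json.loads('{"value": [{"status": "Completed"}, {"status": "Failed"}, {"status": "Completed"}]}')
print(refreshes_url("<group-guid>", "<dataset-guid>"))
print(summarize(sample))  # {'Completed': 2, 'Failed': 1}
```

    A rising share of Failed entries, or completed runs whose timestamps show long gaps before they start, matches the retry and queuing symptoms described in steps 1 and 2.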

    If, after these checks, refresh duration remains abnormally high without clear cause in history or capacity usage, collect refresh history details and capacity information and escalate through your Power BI admin or support channel for backend investigation.



