Memory spikes in Analysis Services
We've recently been experiencing large memory spikes while processing a few of our tabular models on different Azure Analysis Services instances.
There were no major changes to the data, so we were not able to determine why it was happening.
The issue then went away on 01/26 after about a week. This coincided with a maintenance window on Analysis Services in the West US region (the region we use).
Has anyone else experienced a similar issue, and/or does anyone know whether the maintenance actually resolved this or it was just coincidental timing?
Azure Analysis Services
-
phemanth • 15,765 Reputation points • Microsoft External Staff • Moderator
2025-01-30T04:41:23.1333333+00:00 Thanks for the question and for using the MS Q&A platform.
It seems like you've been dealing with some frustrating memory spikes in Azure Analysis Services. It's good to hear that the issue resolved itself after the maintenance window, but I understand why you'd want to know whether the maintenance was the actual fix or just a coincidence.
Memory spikes in Azure Analysis Services can be caused by various factors, including complex DAX calculations, large data volumes, or inefficient data models.
Maintenance windows can sometimes resolve these issues if they involve updates or optimizations to the underlying infrastructure.
Here are steps you can take to monitor and optimize memory usage in Azure Analysis Services:
- Monitor Memory Usage: Use the Azure portal to monitor memory usage and identify any patterns or specific times when spikes occur (see the sketch after this list).
- Optimize Data Models: Simplify your data models and reduce the number of calculated columns and tables if possible.
- Review DAX Queries: Ensure that your DAX queries are optimized and not causing excessive memory consumption.
- Scale Up Resources: If memory issues persist, consider scaling up your Azure Analysis Services instance to a higher tier with more memory.
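For the monitoring step, something like the following can chart the server's memory metric over time. This is a minimal sketch, assuming the azure-monitor-query and azure-identity Python packages and the memory_metric metric name (check the Metrics blade of your server for the exact names available on your instance); the resource ID is a placeholder:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID -- substitute your subscription, resource group,
# and Analysis Services server name.
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.AnalysisServices/servers/<server-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Pull the last 7 days of the memory metric in 5-minute buckets so any
# recurring spike pattern stands out.
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["memory_metric"],  # assumed metric name; verify in the portal
    timespan=timedelta(days=7),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.maximum is not None:
                # Assuming the metric is reported in bytes; convert to GB.
                print(point.timestamp, f"{point.maximum / 1024**3:.1f} GB")
```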
I hope the above steps resolve the issue. Please let us know if it persists. Thank you.
-
phemanth • 15,765 Reputation points • Microsoft External Staff • Moderator
2025-01-31T01:29:57.5466667+00:00 @Billy Duong We haven't heard from you since the last response and were just checking back to see if you have a resolution yet. If you do, please share it with the community, as it can be helpful to others. Otherwise, we will respond with more details and try to help.
-
Billy Duong • 10 Reputation points
2025-01-31T18:42:58.5333333+00:00 Thanks for the response. No resolution has been found. We've previously done all the steps you suggested while it was occurring and it didn't help - other than scaling up resources, of course. But that wasn't really necessary once the spike went away.
One of the affected models was fairly simple - not many objects/tables/calculations in it. Not really any DAX queries in it. But the spike was still occurring on it.
-
Nahuel Beni • 0 Reputation points
2025-02-01T19:26:47.9433333+00:00 Hi Billy, we had a similar issue with a system last week. Eventually it was solved, but we are now seeing it again.
Last week we "workaround" it by moving to another Azure region but today that's not the case. Are you back to facing the issue?
-
phemanth • 15,765 Reputation points • Microsoft External Staff • Moderator
2025-02-03T12:28:58.9566667+00:00 Thanks for the information.
Given that you've already tried the common optimization steps, please consider a few additional approaches:
- Check for Background Processes: Sometimes, background processes or other services running on the same infrastructure can cause unexpected memory spikes. Ensure there are no other processes that might be interfering.
- Review Maintenance Logs: Look into the maintenance logs around the time the issue resolved itself. There might be clues indicating what changes were made during the maintenance window.
- Analyze Query Patterns: Even if there aren't many DAX queries, it's worth analyzing any query patterns or user interactions that might coincide with the memory spikes (a sketch of pulling the engine logs follows this list).
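For the last two points, if the server's diagnostic logs are routed to a Log Analytics workspace, a query along the following lines can line engine activity up against the spike windows. This is a sketch assuming the generic AzureDiagnostics schema and the azure-monitor-query package; the workspace ID is a placeholder:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

# Count Analysis Services engine events per hour by operation, so refresh
# and query activity can be correlated with the memory spikes.
QUERY = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.ANALYSISSERVICES"
| summarize events = count() by bin(TimeGenerated, 1h), OperationName
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

for table in response.tables:
    for row in table.rows:
        print(list(row))
```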
I hope the above steps resolve the issue. Please let us know if it persists. Thank you.
-
Remy Kamphuis | Victa • 5 Reputation points
2025-05-20T14:08:02.9666667+00:00 Hi,
We are currently facing similar issues since an update in Azure Analysis Services last night. For the last few weeks we have had 4-hourly reloads scheduled that ran without any errors. Our S1 tier can hold up to 25 GB in memory, and we have only needed a maximum of 22.2 GB in this period. However, after the failed reload last night due to the update/restart, we see spikes up to and over 31 GB. Of course, the reloads fail because of that.
I have added a screenshot for reference. The amount of data has not increased significantly, nor has the data model been altered. No additional (DAX) measures were added. Basically, nothing changed except the update. In the normal scheduled reloads we load 2 tabular endpoints simultaneously, whereas the spikes below already occur when I reload only one endpoint.
I can't explain the behaviour and it's driving me crazy.
-
Dennis van Geffen • 0 Reputation points
2025-05-20T19:48:17.9733333+00:00 We have the same problem. Processing suddenly takes more than twice the amount of memory (since 19-05 around 1:00), resulting in crashes when processing the cubes or even a single table. Really frustrating. I have tried and checked a lot over the last 2 days, without any result so far.
-
Dennis van Geffen • 0 Reputation points
2025-05-20T20:12:51.55+00:00 I saw something strange in the consumption of memory (after the issues started). One week ago (before the issues), loading a single table (one in a sequence of partitions with 180 million rows each) resulted in memory being taken gradually over a period of 45 minutes. You could see the memory consumption rising and falling back a little a few times during the process. But now, for a long time no memory seems to be consumed (although I can see the gateway server is sending data)... then at the end, in little time, 15 GB of memory is consumed (in like 2 minutes, and I don't think our internet line is that fast). PS: I have set the gateway to send data right away. Maybe this is a clue?
-
Remy Kamphuis | Victa • 5 Reputation points
2025-05-21T07:41:18.56+00:00 Hi Dennis,
We face exactly the same. Memory consumption stays at 8 GB for at least 20 minutes during the reload. Then suddenly the spike occurs, and within a timeframe of 2 minutes memory consumption goes up to > 60 GB. We scaled our S1 tier up to S2 to check if that would allow the reloads to succeed, but for the largest semantic model this still results in an error. In the screenshot below you can see the memory consumption of the 4 semantic models loaded separately. The first three models succeed since we scaled to S2. The last model does not; you can clearly see that there is hardly any memory consumption until 7:26 AM, when it spikes from 10 to 63.6 GB!!! Crazy behaviour, especially knowing that the first model and the last were running simultaneously only two days ago on an S1 tier, so with a maximum of 25 GB memory consumption!
You state that it started 19-5 around 01:00 AM. Can you see in the portal whether there was a planned update at that time? For us this mess started with the planned update yesterday morning at around 3 AM, resulting in an error at the first scheduled reload at 6 AM after the update. See below:
-
Dennis van Geffen • 0 Reputation points
2025-05-21T08:03:01.0166667+00:00 Could be around 3 (the processing started around 1:00, that's why I mentioned that time :)
So the first bump is our cube (normal), the second and third bumps are 2 partitions, then something happened (the update?) and the 3rd partition failed. We were thinking about moving to S2 for the time being, but as I can see in your comment, I was not sure it would work (and/or whether we can ever go back to S1 once it is 'fixed again' by Microsoft). I am now looking into running the tables one by one ("maxParallelism": 1, see the sketch below) because this is becoming a big problem now. Changing the model (e.g. partitioning) would break the fact tables, and as long as I am not sure I can reload them, it is not a risk I want to take. Waiting for our Azure supplier to react now. Hope they have a solution. PS: where did you find the Azure AS update log? I can't find it....
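For reference, this is roughly the TMSL Sequence command I mean, wrapped in a bit of Python that prints it so it can be pasted into an XMLA query window in SSMS; the database and table names are placeholders for our real ones:

```python
import json

tmsl = {
    "sequence": {
        "maxParallelism": 1,  # process one object at a time to cap peak memory
        "operations": [
            {
                "refresh": {
                    "type": "full",
                    "objects": [
                        # Placeholder names -- one entry per fact table.
                        {"database": "MyModel", "table": "FactA"},
                        {"database": "MyModel", "table": "FactB"},
                    ],
                }
            }
        ],
    }
}

# Paste the output into an XMLA query window in SSMS (or whatever runs the
# scheduled reloads) to execute it against the server.
print(json.dumps(tmsl, indent=2))
```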
-
Dennis van Geffen • 0 Reputation points
2025-05-21T08:05:35.8833333+00:00 Our data scientist just found this: Azure Analysis Services Server Performance Event (5/20/25) – EBM Software
This could be it... it would be nice if this were posted on the official Microsoft 'AAS service health' page :(
-
Remy Kamphuis | Victa • 5 Reputation points
2025-05-21T08:42:25.41+00:00 That should be the cause indeed. You can find the update logs in the menu on the left under Support + Troubleshooting (Ondersteuning en Probleemoplossing in the Dutch UI).
-
Daniel Bentham • 0 Reputation points
2025-05-21T08:48:07.41+00:00 We have been experiencing the exact same issue since Monday morning:
-
Dennis van Geffen • 0 Reputation points
2025-05-21T14:57:58.7966667+00:00 We got confirmation from Microsoft that our memory issue is related to the 19-05 update (and probably your issues too). They are going to fix it in 'a few days'. This was a stressful period for me, but now I have some "peace of mind" ;)