
Azure Flexible Server Memory issues

Matt Monk 0 Reputation points
2026-03-02T20:48:22.4966667+00:00

Hello,

I am currently experiencing memory usage issues with my Azure MySQL Flexible Server: memory grows continuously and never reaches a plateau, even nearly two weeks after a restart. I am proactively restarting the server once it reaches 95% memory usage to ensure it doesn't cause any OOM issues.

This is a business-critical instance with the InnoDB buffer pool set to 55% of memory and max_connections set to 300.

None of the internal MySQL memory metrics show uncontrolled growth, so I do not understand where the rapid growth is coming from. There are no slow queries, and temporary tables are optimised as well as possible.

I have had a number of memory issues ever since moving to Azure Flexible Server over 2 years ago, so this is extremely frustrating. Until recently the growth had been manageable, with a restart needed only once every 6 weeks, but after the most recent maintenance it has become considerably worse and, as I say, the server now needs a restart after less than 2 weeks to reset memory.

The only suggestion I can get from Azure support is to scale up the server, but this isn't resolving the issue. The server has more than enough memory for my requirements, but as I say, the memory growth is uncontrollable.

Many Thanks,
Matt

Azure Database for MySQL

1 answer

  1. Sina Salam 28,691 Reputation points Volunteer Moderator
    2026-03-04T11:11:14.5766667+00:00

    Hello Matt Monk,

    Welcome to the Microsoft Q&A and thank you for posting your questions here.

    I understand that you are having Azure Flexible Server Memory issues.

    The memory escalation you are experiencing on Azure Database for MySQL Flexible Server does not appear to be tied to MySQL configuration, workload, or server sizing; it instead matches the pattern of a platform‑level memory regression introduced by recent Azure maintenance. On Flexible Server, reported memory usage includes OS services and Azure platform agents, which can grow independently of MySQL engine allocations, making the plateau you expect impossible while such a leak exists. Because these host‑level components are inaccessible to customers, diagnosing them requires backend engineering review: https://learn.microsoft.com/azure/mysql/flexible-server/how-to-troubleshoot-low-memory-issues

    To confirm that MySQL itself is not responsible, you can query the actual engine‑level allocations via the sys.memory_global_by_current_bytes view, which isolates InnoDB and per‑component memory from platform processes:

    SELECT * FROM sys.memory_global_by_current_bytes
    ORDER BY current_allocated DESC;

    This aligns with Azure’s documented monitoring practices, which distinguish internal MySQL memory from system‑level consumption. You should request an engineering review for potential memory fragmentation, agent growth, or allocator regressions at the OS layer. A similar case is discussed here: https://stackoverflow.com/questions/76303522/azure-mysql-server-out-of-memory-issues
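    As a rough cross‑check, you can estimate the engine's theoretical memory ceiling from the buffer pool size plus worst‑case per‑connection buffers; if Host Memory Percent keeps climbing well past this ceiling while the sys view totals stay flat, the growth is coming from outside MySQL. A minimal sketch follows — the 32 GB RAM figure and the 12 MB per‑connection estimate are illustrative assumptions, not values read from your server:

    ```python
    def mysql_memory_ceiling_gb(total_ram_gb: float,
                                buffer_pool_fraction: float,
                                max_connections: int,
                                per_connection_mb: float = 12.0) -> float:
        """Rough upper bound on MySQL engine memory: the InnoDB buffer pool
        plus worst-case per-connection buffers (sort/join/read buffers).
        per_connection_mb is an assumed average, not a measured value."""
        buffer_pool_gb = total_ram_gb * buffer_pool_fraction
        connection_gb = max_connections * per_connection_mb / 1024.0
        return buffer_pool_gb + connection_gb

    # Illustrative only: a 32 GB server, 55% buffer pool, 300 connections.
    ceiling = mysql_memory_ceiling_gb(32.0, 0.55, 300)
    print(f"Approximate engine ceiling: {ceiling:.1f} GB")  # ~21.1 GB
    ```

    If host memory usage substantially exceeds an estimate like this while engine allocations remain stable, that is further evidence the leak sits at the platform layer.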

    While awaiting engineering intervention, configure automated restarts triggered by the Host Memory Percent metric to prevent operational disruption, and temporarily reduce max_connections to ease the per‑connection memory pressure noted in Azure’s performance guidelines. These mitigations should keep the server stable until Azure resolves the underlying platform leak. For memory tuning, architectural behavior, and best practices, refer to the Microsoft Tech Community guide on Flexible Server memory management: Memory Tuning for Workloads in PostgreSQL Flexible Server.

    I hope this is helpful! Do not hesitate to let me know if you have any other questions or clarifications.


    Please don't forget to close the thread by upvoting and accepting this as the answer if it was helpful.


