Max Memory Setting on SQL Server

Stan Malachowski 51 Reputation points
2021-09-26T03:01:55.267+00:00

Hi,

I am seeing some very strange behaviour with a two-tier application that has a SQL Server 2017 backend. The max server memory setting is the default of 2147483647 (MB), which seems a crazy value, but from what I understand, this version of SQL Server should be managing memory dynamically. Recently we have had a performance problem where update transactions from users block for an unusually long time - 30 seconds to 2 minutes - when previously they took perhaps 5 seconds.

I have been collecting high-resolution metrics on the system and cannot spot anything obvious. However, the SQL Server Memory Manager\Free Memory (KB) counter is swinging up and down every few minutes, although the swing is only from a peak of about 4 GB down to a trough of around 120 MB. The system has about 768 GB of memory and 40 cores. It has been difficult to catch, but some direct observations of the slow saves correlate with the troughs in free memory. The system does not appear to be hard faulting (page reads/sec), but the page faults counter continually toggles between tens (i.e. really low) and thousands every 5 seconds or so.
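
In case it is useful, this is roughly the kind of query I could run to cross-check those counters from inside SQL Server (just a sketch; I am not sure these are the most relevant DMVs to watch):

    -- Process-level memory numbers for the SQL Server instance
    SELECT physical_memory_in_use_kb,      -- working set of the SQL Server process
           page_fault_count,               -- cumulative page faults for the process
           memory_utilization_percentage
    FROM   sys.dm_os_process_memory;

    -- Machine-level view, roughly the same information as the perfmon counters
    SELECT total_physical_memory_kb,
           available_physical_memory_kb,
           system_memory_state_desc        -- e.g. 'Available physical memory is high'
    FROM   sys.dm_os_sys_memory;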

My questions are:

  1. Does max server memory actually need to be set, and if so, what would be a sensible number?
  2. Are there any other metrics I could consider to see if there is a memory management problem?

I have attached an image of the trace for free memory (converted from KB to MB). The circle shows where a user reported slow performance.

135227-mem.png

Thanks,

Stan


Accepted answer
  Erland Sommarskog 121.4K Reputation points MVP Volunteer Moderator
    2021-09-26T19:32:05.597+00:00

    The theory is that you can leave "max server memory" at its default, and SQL Server will manage the memory for you. That is, it will grab as much as it can, and then yield memory if there is memory pressure in the machine, that is, when some other process wants the memory.

    In practice, many DBAs always set this value to make sure that the OS has room to breathe. And if there are other services or applications running on the machine, there is every reason to cap SQL Server. Even more so if there are multiple instances of SQL Server running on the machine.

    If SQL Server is the only thing running on this box, I would probably set max server memory to 700000 MB, although there is not much science behind that number.
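
    As a rough sketch, that cap could be applied like this (700000 is just the example figure above; adjust it to how much headroom you want to leave):

        -- "max server memory" is an advanced option, so expose it first
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Cap SQL Server at roughly 700 GB, leaving the rest for the OS and anything else
        EXEC sp_configure 'max server memory (MB)', 700000;
        RECONFIGURE;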

    You mention page faults. But are they page faults in general or in the SQL Server process? SQL Server should preferably not page-fault, and there is a cure for this, to wit, granting the service account for SQL Server the Windows permission "Lock pages in memory". I think that with this setting, it is even more important to set "max server memory".
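
    If you want to check whether locked pages are actually in effect, a sketch of the kind of query I mean (the sql_memory_model_desc column is available from SQL 2016 SP1 onwards, so it is there in SQL 2017):

        -- CONVENTIONAL = no locked pages; LOCK_PAGES = "Lock pages in memory" is in effect
        SELECT sql_memory_model_desc
        FROM   sys.dm_os_sys_info;

        -- Non-zero once SQL Server is actually allocating locked pages
        SELECT locked_page_allocations_kb
        FROM   sys.dm_os_process_memory;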

    But your real problem seems to be that some update operations are taking longer than before. I don't think these memory considerations are related to that problem. Rather, it is likely to be due to a change in query plans. Are you on SQL 2016 or later, and have you enabled Query Store for the database? In that case you have a wealth of material to dig into.
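
    To give a concrete idea of the digging, here is a rough sketch of the kind of Query Store lookup I mean (YourDatabase is a placeholder; avg_duration is reported in microseconds):

        -- Make sure Query Store is capturing data for the database
        ALTER DATABASE YourDatabase SET QUERY_STORE = ON;

        -- Slowest queries by average duration, with the number of distinct plans each has had;
        -- a query with several plans and a sudden jump in duration is a prime regression suspect
        SELECT TOP (20)
               q.query_id,
               qt.query_sql_text,
               COUNT(DISTINCT p.plan_id)     AS plan_count,
               AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
        FROM   sys.query_store_query AS q
        JOIN   sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
        JOIN   sys.query_store_plan AS p ON p.query_id = q.query_id
        JOIN   sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
        GROUP BY q.query_id, qt.query_sql_text
        ORDER BY avg_duration_ms DESC;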

