SQL Server Page Life Expectancy

Author: David Williams (Microsoft)
Contributor & Reviewer: Matthew Robertshaw (Microsoft)

Ever wondered how volatile your Buffer Pool is? Keeping you awake at night?

Page Life Expectancy (PLE) is the best indication of how volatile your Buffer Pool (BP) is. It's a PerfMon counter, found in the SQL Server:Buffer Manager PerfMon object. Monitor it every 3-5 seconds or so. There is also the SQL Server:Buffer Node:Page Life Expectancy counter, which should be used on NUMA systems, applying the same logic per node as you would on a non-NUMA system.
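
Here's a minimal sketch (assuming you'd rather poll from T-SQL than set up a PerfMon trace) that reads the same counters from sys.dm_os_performance_counters - the instance-wide value from Buffer Manager and the per-node values from Buffer Node:

    -- Current PLE for the instance and per NUMA node.
    -- object_name carries the instance prefix (e.g. 'SQLServer:' or 'MSSQL$<name>:'), hence the LIKE.
    SELECT [object_name],
           instance_name,                -- NUMA node id for the Buffer Node rows
           cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND ([object_name] LIKE '%Buffer Manager%'
           OR [object_name] LIKE '%Buffer Node%');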

Volatility is measured by taking the average "life" of a page within the Buffer Pool (in seconds). If a page is overwritten or aged out, it starts a whole new life.

So if lots of pages are being overwritten with new data very often, the average PLE will be low, and our BP volatility will be high.  

Conversely if most pages in our BP remain there for a long time without being overwritten, the average PLE will be high, and the BP volatility will be low.

So why do we care about BP volatility and PLE? What can knowing the BP volatility do for us? What is a "good" and "bad" PLE figure?

PLE can be a measure of how much physical IO your SQL Server is doing. Hopefully I've got your attention, because physical IO is a major performance concern, both for reading and writing.

Let's say you're loading a large amount of data into SQL Server. You want this to get into SQL Server as quickly as possible, so you're using SSIS with lots of parallel threads, hash partitions on the table, no indexes, and all the other tricks described here: https://msdn.microsoft.com/en-us/library/dd537533(v=sql.100).aspx.

Because nothing gets written to a data page without that page first going through the BP, your BP will be flooded with this new data. Your BP pages are being overwritten as fast as the load can drive physical IO, so your PLE dips sharply. If you're doing a big data load, you want your PLE to be as low as possible, because that means the load is going quickly.

Now let's say you're running a big report using Reporting Services. We don't want to do much physical IO (if any), because having to read data pages from disk will slow us down. In this case we want our BP volatility to be as low as possible, and our PLE to be as high as possible (or as high as necessary for the query to hit its SLA, see a future post on query SLAs).

So depending on the activity on the server, PLE can give us an indication of how efficient our physical IO/BP ratio is.

We therefore have to be aware of the dominant activity type on the server before we make a call on whether PLE is "good" or "bad".

The old recommendation was that PLE should be a minimum of about 300 seconds. This dates from the days when SQL Server's BP was around 4GB or so. For a read-mostly activity such as a report, a PLE of 300 therefore meant that the SAN was reading roughly 4GB over 5 minutes, which works out to about 13.7MB/s.
These days we have BPs of 64GB and above. So our 300 second threshold now means that the SAN would be reading about 218.5MB/s, which is a fair amount and likely to cause comment!
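
To make the arithmetic explicit, here's a quick back-of-envelope calculation - just the BP size in MB divided by the 300-second threshold:

    -- Read rate implied by a 300-second PLE for a 4GB and a 64GB Buffer Pool.
    SELECT (4  * 1024) / 300.0 AS mb_per_sec_4gb_bp,   -- ~13.7 MB/s
           (64 * 1024) / 300.0 AS mb_per_sec_64gb_bp;  -- ~218.5 MB/s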

So what's a "good" PLE for a read-mostly operation like a report? Here's a handy formula I use to get a decent estimate:

PLE threshold = ((MAXBP(MB)/1024)/4)*300

Where MAXBP(MB) is the maximum amount of Buffer Pool memory (in MegaBytes) over the report run you're interested in. You can use PerfMon to trace the maximum value of SQL Server:Buffer Manager:Database Pages over your reporting run.

Take that number of pages and convert to MB: (pages*8)/1024 then substitute that for ‘MAXBP(MB)’ in the above formula to get your minimum PLE.

The above formula considers each 4GB of BP (per NUMA node or otherwise) to have its own 300-second PLE. Therefore the more BP you have allocated, the higher the PLE threshold.
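
Here's a sketch of that calculation in T-SQL. It uses the point-in-time Database Pages value from sys.dm_os_performance_counters, so treat it as illustrative only - for a proper figure, substitute the maximum value you traced in PerfMon over the reporting run:

    -- Estimate the minimum "healthy" PLE from the current Database Pages value.
    DECLARE @bp_mb decimal(18,2);

    SELECT @bp_mb = (cntr_value * 8.0) / 1024          -- pages -> MB
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Database pages'
      AND [object_name] LIKE '%Buffer Manager%';

    SELECT @bp_mb                      AS buffer_pool_mb,
           ((@bp_mb / 1024) / 4) * 300 AS ple_threshold_seconds;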

If you want a "quick and dirty" way of calculating the PLE threshold then you can use the sp_configure "max server memory (MB)" value in the formula above, but this isn't really accurate because it doesn't account for stolen memory (memory taken from the BP, usually for plan caching, compilation, optimisation etc), and is even less accurate for SQL Server 2012 because we have lots of new stuff considered in the "max server memory (MB)" setting for 2012.

If you're reading this because you're just after a threshold for PLE and you either know how to investigate low PLE, or you don't care, you can stop reading now.

However if you're interested in what to do when PLE crashes, read on!

PLE is an amazing counter - you can use it to quickly direct you straight towards the problem areas if you know a few little tricks.

If PLE is low there must be a reason for this - knowing the reason is the key to investigating a performance problem through PLE.

If you're doing a big load, you want PLE to be low, so that's fine. If you're doing read-mostly activities and PLE is under the threshold, here are the top three reasons:

1) The queries being run have changed to look at data they weren't looking at before, so this new data now has to be read into the BP. Or you're reporting just after a load of different data, or just after a large data load.

2) The queries being run are inefficient, and are reading unnecessary data into the BP, which is therefore being constantly turned over.

3) There are large numbers of efficient queries (or a few very large efficient queries) which are saturating the BP.

Here's what you do for each of these situations:

1) This is normal, but it should be short-lived. You'd expect to see the PLE drop off a cliff, then build back up over the threshold and stay there until the next shift in data requirement. If it drops and stays there, you're probably in situation 2 or 3.

2) You can find this out by looking at the top queries for physical IO usage and checking out the query plans (see the first sketch after this list). If you see lots of scans, there may be some tuning to be done! Since 80% of performance problems come down to poor T-SQL (which may therefore create inefficient query plans), consider this before you decide you're in situation 3!

3) You've done your best, the queries are as efficient as they can get, but your PLE is still low. You can either spread out the queries (for example, running Agent Jobs one after the other instead of at the same time), or increase the amount of Buffer Pool memory. NEVER leave less than 1GB free for the OS, 2GB for servers with >64GB RAM, or 4GB for servers with >128GB RAM. It's not that this memory is "free" and never used: the OS will grab it and release it in much less than a second - less time than the data-gather interval of PerfMon and most tools - so it's being used even though you don't get to see it. So if your PerfMon counter "Memory: Available MBytes" is under these thresholds and you're in situation 3 (see the second sketch after this list), you need more physical memory so you can increase the BP AND keep enough memory for the OS.
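
For situation 2, a sketch like the following (against the plan cache, so it only sees what's still cached) lists the top statements by physical reads along with their plans, which is where you'd go hunting for scans:

    -- Top 10 cached statements by physical reads.
    SELECT TOP (10)
           qs.total_physical_reads,
           qs.execution_count,
           qs.total_physical_reads / qs.execution_count AS avg_physical_reads,
           SUBSTRING(st.text,
                     (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.total_physical_reads DESC;

For situation 3, this shows how much memory the OS has available right now (the same figure PerfMon reports as Memory: Available MBytes), to compare against the thresholds above:

    -- Available physical memory on the server, in MB.
    SELECT available_physical_memory_kb / 1024 AS available_memory_mb
    FROM sys.dm_os_sys_memory;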

So there's PLE in a nutshell. Look out for the next blog post on how to convince your Storage team that the Storage isn't working properly - I'll show you how to provide unequivocal proof, or how to tell if you're barking up the wrong tree!