

If > 50 VLFs are bad, then 16 is good, right?

Exhibit A:

A 256 GB transaction log file with 16 VLFs.  Doing the quick math, that would make each VLF 16 GB: 16 x 16 GB VLFs = 256 GB transaction log. (Yes, that is a large transaction log.)

All the guidance I see says that more than 50 VLFs would be too many.

Okay, not too many VLFs, so we should be fine… As a matter of fact, if 50 is too many, isn't 16 awesome?

Well, no. 

If you have too few VLFs, and more importantly VLFs that are too big, you will see poor performance from any of the processes that have to work through those big VLFs. These include your transaction log backups, transactional replication, and database mirroring.

Transaction log backups will suffer when a large VLF is finally filled and inactive, ready for backup and clearing. In the above example, 16 GB of log will have to be backed up and a 16 GB chunk of log will need to be cleared for reuse. How fast is your backup environment?
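
If you are not sure, the msdb backup history gives a rough answer. Here is a minimal sketch (the database name is illustrative; substitute your own) that computes throughput for recent log backups:

    -- Rough throughput of recent log backups from the msdb history tables.
    -- 'MyDatabase' is an illustrative name.
    SELECT TOP (20)
        bs.backup_start_date,
        bs.backup_size / 1048576.0 AS backup_size_mb,
        DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
        bs.backup_size / 1048576.0
            / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) AS mb_per_sec
    FROM msdb.dbo.backupset AS bs
    WHERE bs.database_name = N'MyDatabase'
      AND bs.type = 'L'          -- log backups only
    ORDER BY bs.backup_start_date DESC;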

Transactional replication processes the transaction log using the Log Reader. Only committed transactions are replicated, so it has to read through one or more 16 GB VLFs (depending on how large the transaction was). These very large VLFs can cause additional latency in your replication.
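
If you suspect the Log Reader is falling behind, a quick way to check the latency it is seeing is sp_replcounters, run at the Publisher (a sketch only; interpret the numbers against your own baseline):

    -- Returns replicated transaction counts, replication rate, and latency
    -- for each published database on this Publisher.
    EXEC sp_replcounters;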

Database mirroring sends committed transactions to its mirroring partner, extracted by the logging system itself within the Storage Engine. The logging system is also going to feel the burden of an enormous VLF.
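
One way to watch for that burden is the Database Mirroring performance counters, which are also exposed through a DMV. A small sketch (the database name is illustrative, and the object name carries an instance prefix on named instances):

    -- Log send and redo queue sizes for a mirrored database,
    -- read from the performance counter DMV.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Database Mirroring%'
      AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB')
      AND instance_name = N'MyDatabase';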

Now consider a system that is replicating and mirroring - ouch!

So how do you find the size/count balance?

First, evaluate how many VLFs a transaction log has (DBCC LOGINFO) and run a sanity check: is that a reasonable number for the file size?
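
A minimal sketch, run in the context of the database you want to check (the name is illustrative). DBCC LOGINFO returns one row per VLF, so the row count is your VLF count:

    USE MyDatabase;   -- illustrative database name
    GO
    DBCC LOGINFO;     -- one row per VLF in the current database's log
    GO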

Both 16 VLFs for a 256 GB transaction log and 39 VLFs for an 11 MB transaction log fail the sanity check.

Next, how big is each VLF? This is also in DBCC LOGINFO. You will notice that the segments are probably not all exactly equally sized and that the individual sizes may vary quite a bit. This is the nature of how transaction log files are sized and grow, and there are excellent details on this in Kimberly Tripp's [blog|twitter] blog.
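
To answer that without eyeballing the raw output, you can capture DBCC LOGINFO into a temporary table and aggregate the FileSize column (reported in bytes). The column list below assumes the SQL Server 2012-era layout; older versions do not return RecoveryUnitId, so adjust the table to match what your version outputs:

    -- Capture DBCC LOGINFO and summarize VLF count and sizes.
    CREATE TABLE #vlf
    (
        RecoveryUnitId INT,
        FileId         INT,
        FileSize       BIGINT,
        StartOffset    BIGINT,
        FSeqNo         BIGINT,
        [Status]       INT,
        Parity         INT,
        CreateLSN      NUMERIC(38, 0)
    );

    INSERT INTO #vlf
    EXEC ('DBCC LOGINFO');

    SELECT
        COUNT(*)                    AS vlf_count,
        MIN(FileSize) / 1048576.0   AS min_vlf_mb,
        MAX(FileSize) / 1048576.0   AS max_vlf_mb,
        AVG(FileSize) / 1048576.0   AS avg_vlf_mb
    FROM #vlf;

    DROP TABLE #vlf;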

Is the size too big for an efficient transaction log backup (or log reader) process?

I have had excellent success with VLFs sized around 500 MB, but for large transaction logs this may create too many VLFs. 

Taking Exhibit A, let's examine our options:

  • Keeping our VLF count to 50 will give us VLFs around 5 GB in size
  • Creating 512 MB VLFs will create 512 VLFs
  • A 1 GB VLF will give you 256 VLFs
  • A 2 GB VLF will give you 128 VLFs

In this specific example, the transaction rate, whether replication or mirroring is in use, whether clustering is in use, the frequency of the transaction log backups, and the throughput of the transaction log backup process need to be analyzed to identify the best VLF size/count combination for this environment.
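
Once you settle on a target, getting there is mostly a matter of how you grow the file. As a sketch only (database and log file names are illustrative, and this assumes the log has first been shrunk so the old oversized VLFs can be replaced): on the versions current as of this writing, per the growth behavior described in Kimberly Tripp's post referenced above, any single growth larger than 1 GB is split into 16 VLFs, so growing the log in 8 GB steps produces 16 x 512 MB VLFs per step.

    -- Grow the log back out to 256 GB in 8 GB steps, yielding ~512 MB VLFs.
    -- Assumes the log was shrunk first; names and sizes are illustrative.
    DECLARE @size_mb INT = 8192;           -- first step: 8 GB total
    DECLARE @sql     NVARCHAR(400);

    WHILE @size_mb <= 262144               -- stop once the file reaches 256 GB
    BEGIN
        SET @sql = N'ALTER DATABASE [MyDatabase] MODIFY FILE '
                 + N'(NAME = N''MyDatabase_log'', SIZE = '
                 + CAST(@size_mb AS NVARCHAR(10)) + N'MB);';
        EXEC (@sql);                        -- each 8 GB step adds 16 x 512 MB VLFs
        SET @size_mb += 8192;
    END;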

When all is said and done, okay, so there are more than 50 VLFs, but each VLF is sized so that there are no performance issues with the transaction log backups, the number of VLFs is reasonable for the total transaction log file size, and it is nowhere near the "so many it is painfully funny" realm that results in recovery and other problems.

 

Corrected 4/9/13: Thanks to Paul Randal of SQLskills.com (Blog | Twitter) for reading my blog and pointing out an error on how database mirroring gets its transaction log entries. As usual, every time I interact with Paul I learn something.