Proportional fill and how to check whether it works
During a business trip in Japan, a customer described a case where they were running their SAP system on SQL Server. In their opinion they had done everything right in terms of using our proportional fill features as described in https://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/SAP_SQL2005_Best%20Practices.doc. However, after the last bigger archiving session, in which SD and FI objects of a whole year were archived, they saw that one file was emptied to a much larger degree than all the other files. In other words, they ended up with rather different amounts of free space in files which were all evenly sized. If proportional fill had been working against evenly sized files all the time, this obviously shouldn't happen. The problem in this specific case was that the customer couldn't tell whether the files of the SAP database had always been managed in the most optimal way. It could have happened that, years back when the data of the year that just got archived was inserted, one database file was larger than the others or all the other database files were already filled up. There is no way to investigate this many years later. However, let's step through two short procedures to check whether the actual read and write activity is balanced between the data files, and whether the data of a specific table is stored in even parts throughout all data files, as we desire it to be for the way SAP databases should be created on SQL Server.
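Before looking at I/O statistics, it can be worth verifying that the data files really are evenly sized and hold roughly the same amount of free space. A minimal sketch of such a check, using the documented FILEPROPERTY() function (size and 'SpaceUsed' are reported in 8 KB pages, so dividing by 128 converts them to MB):
-- size, used and free space per data file of the current database, in MB
select file_id, name,
       size/128 as size_mb,
       fileproperty(name, 'SpaceUsed')/128 as used_mb,
       (size - fileproperty(name, 'SpaceUsed'))/128 as free_mb
from sys.database_files
where type_desc = 'ROWS'
order by file_id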
How do we check whether proportional fill works fine at the moment? Under the assumption that all data files have the same amount of free space (even if the data files differ in size), the following query batch gives us some indication of whether proportional fill is working in even proportions over all data files:
--Query1: compare per-file read and write counts against the average over all data files
declare @avg_read bigint, @avg_write bigint
-- average reads and writes over all data files (type_desc = 'ROWS') of the current database
select @avg_read=avg(io.num_of_reads), @avg_write=avg(io.num_of_writes)
from sys.dm_io_virtual_file_stats(db_id(), NULL) io, sys.database_files df
where io.file_id=df.file_id and df.type_desc = 'ROWS'
-- per-file counters plus their percentage deviation from those averages
select io.file_id, df.name, io.num_of_reads, @avg_read as 'Avg Reads',
ceiling(((convert(decimal(10,0),num_of_reads)/@avg_read)*100)-100) as 'Percentage off Read Avg',
io.num_of_writes, @avg_write as 'Avg Writes',
ceiling(((convert(decimal(10,2),num_of_writes)/@avg_write)*100)-100) as 'Percentage off Write Avg'
from sys.dm_io_virtual_file_stats(db_id(), NULL) io, sys.database_files df
where io.file_id=df.file_id and df.type_desc = 'ROWS'
The result is expected to look like:
file_id | name  | num_of_reads | Avg Reads | Percentage off Read Avg | num_of_writes | Avg Writes | Percentage off Write Avg
1       | Data1 | 42374382     | 42647992  | 0                       | 2233030       | 2214157    | 1
3       | Data2 | 42777611     | 42647992  | 1                       | 2262091       | 2214157    | 3
4       | Data3 | 42647377     | 42647992  | 0                       | 2235077       | 2214157    | 1
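A related question is whether the rows of a specific table are spread in even parts over all data files. One possible way to sketch such a check on SQL Server 2008 or later is to use the undocumented %%physloc%% virtual column together with sys.fn_PhysLocCracker, which cracks each row's physical location into file, page and slot. The table name dbo.SomeLargeTable is only a placeholder, and be aware that this scans the whole table, which can take a while on a large SAP table:
-- count how many rows of one table sit in each data file (placeholder table name)
select plc.file_id, count(*) as rows_in_file
from dbo.SomeLargeTable as t
cross apply sys.fn_PhysLocCracker(%%physloc%%) as plc
group by plc.file_id
order by plc.file_id
If the files were filled in even proportions when the table's data was inserted, the row counts per file should come out roughly equal; a strongly skewed distribution points to the kind of uneven history described above.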
Comments
Anonymous
January 24, 2011
For this setup, what is your data file growth rate? Going with the recommended 10% may be performance-unfriendly even when on SAN. Your thoughts?
Anonymous
November 26, 2013
The script to find proportional data fill with io_virtual_stats has a small glitch and hence doesn't yield any results at all. Here is the correct script which should work all the time:
DECLARE @avg_read BIGINT, @avg_write BIGINT
SELECT @avg_read = AVG(io.num_of_reads), @avg_write = AVG(io.num_of_writes)
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) io, sys.database_files df
WHERE io.file_id = df.file_id AND df.type_desc = 'ROWS'
SELECT io.file_id, df.name, io.num_of_reads, @avg_read AS 'Avg Reads',
CEILING(((CONVERT(DECIMAL(10, 0), num_of_reads) / @avg_read) * 100) - 100) AS 'Percentage off Read Avg',
io.num_of_writes, @avg_write AS 'Avg Writes',
CEILING(((CONVERT(DECIMAL(10, 2), num_of_writes) / @avg_write) * 100) - 100) AS 'Percentage off Write Avg'
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) io, sys.database_files df
WHERE io.file_id = df.file_id AND df.type_desc = 'ROWS'