Hi All,
We have a large database, roughly 5 TB. We update statistics with a full scan once a week, and the job runs for more than a day. Since the maintenance window is limited, we exit if it takes too long. My question is: what is the best approach to updating statistics on fairly large, highly volatile tables? Should we break the small tables out into one job and update statistics on the big tables in a separate job (something like the sketch after the current job code below)? Please give some ideas. I also tried increasing MAXDOP to 6 and lowering the sample rate to 70, but with 70 percent sampling we saw performance issues during that week, so I had to revert to 100 percent sampling.
We are also doing an exercise of cleaning up all the backup tables created by the application team as part of their weekly agile releases, and archiving the LOG tables that hold data older than 3 years. Please provide some input on updating statistics for large, highly volatile tables and how to deal with them, and please share some of your real-world experiences.
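On the archiving side, this is roughly the batched pattern I have in mind for the 3-year-old LOG data. The names dbo.AppLog, LogDate, and dbo.AppLog_Archive are just placeholders rather than our real objects, and the archive table is assumed to already exist with a matching column list:

DECLARE @BatchSize int = 50000,
        @Cutoff    datetime2 = DATEADD(YEAR, -3, SYSDATETIME());

WHILE 1 = 1
BEGIN
    -- Move one batch of old rows into the archive table and delete them from the
    -- source in the same statement, keeping each transaction small so log growth
    -- and blocking stay manageable.
    DELETE TOP (@BatchSize) src
    OUTPUT deleted.* INTO dbo.AppLog_Archive
    FROM dbo.AppLog AS src
    WHERE src.LogDate < @Cutoff;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left older than the cutoff
END;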
-- Below is the code scheduled inside a SQL Agent job.
EXECUTE [SQLDBA_utils].[dbo].[IndexOptimize]
    @Databases = 'dbname',
    @MaxDOP = 4,
    @UpdateStatistics = 'ALL',
    @StatisticsSample = 100,
    @OnlyModifiedStatistics = 'Y',
    @TimeLimit = 86400,  -- 24 hours in seconds; exit if the job runs longer than this
    @LogToTable = 'Y';
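For the split idea mentioned above, this is roughly what I was picturing as two separate Agent jobs. The names dbname.dbo.BigTable1 and dbname.dbo.BigTable2 are placeholders for our actual large volatile tables, and I'm assuming the @Indexes include/exclude syntax of the IndexOptimize procedure we already use filters the statistics work the way I expect:

-- Job 1: big, highly volatile tables only, full scan, long window
EXECUTE [SQLDBA_utils].[dbo].[IndexOptimize]
    @Databases = 'dbname',
    @MaxDOP = 4,
    @UpdateStatistics = 'ALL',
    @StatisticsSample = 100,
    @OnlyModifiedStatistics = 'Y',
    @Indexes = 'dbname.dbo.BigTable1, dbname.dbo.BigTable2',
    @TimeLimit = 86400,
    @LogToTable = 'Y';

-- Job 2: everything else, with the big tables excluded via the minus sign
EXECUTE [SQLDBA_utils].[dbo].[IndexOptimize]
    @Databases = 'dbname',
    @MaxDOP = 4,
    @UpdateStatistics = 'ALL',
    @StatisticsSample = 100,
    @OnlyModifiedStatistics = 'Y',
    @Indexes = 'ALL_INDEXES, -dbname.dbo.BigTable1, -dbname.dbo.BigTable2',
    @TimeLimit = 14400,  -- shorter cap for the small-table job
    @LogToTable = 'Y';

The big-table job would keep the full 24-hour cap while the small-table job gets a shorter one, but I'm not sure this is the right way to carve it up, hence the question.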
Regards,
Sam