About Maintenance Plans – grooming SQL Server
Latest update: 10/24/2016 (Check change log)
Download scripts on GitHub.
Hello all,
I admit I’m not a big fan of Maintenance Plans (the feature that goes by that name) mainly because of its “do or die” approach. So, it was not by chance that I authored and keep updating the AdaptiveIndexDefrag procedure.
Good maintenance (or lack thereof) may be the difference between good and bad performance.
Following up on that train of thought, over the years I've used this solution to automate maintenance tasks in SQL Server (except for backups, which are usually handled by dedicated backup solutions/agents).
With these scripts, within a couple minutes you can deploy a comprehensive set of SQL Agent jobs that will get the required work done.
So, the first script I execute is “0_database_server_options”. This will:
- Change a few sys.configurations settings, such as enabling "Backup compression default", "remote admin connections" and "Optimize for Ad Hoc workloads". A couple of notes on this step (a T-SQL sketch of these changes follows this list):
- About Backup Compression…
Fact 1: backup compression significantly increases CPU usage.
Fact 2: Backup compression is not only a space-saving feature. Because a compressed backup is smaller than an uncompressed backup of the same data, compressing a backup typically requires less device I/O and therefore usually increases backup speed significantly. Restore operations also benefit from this reduced I/O.
Also, with compressed backups as the default, it is still possible to issue backup commands without compression for databases that will not benefit from it (where it may even be detrimental). For example, TDE-enabled databases can be backed up using the WITH NO_COMPRESSION option, while any backup command that omits a compression option on a server with the backup compression default enabled will produce a compressed backup.
If the server is not CPU bound, and if most of the databases on your server benefit from this feature, making backup compression the default for all backup operations will generate smaller backups and therefore less backup-related I/O.
- About Optimize for Ad-Hoc workloads…
While the Optimize for Ad-Hoc workloads server option effectively adds a small delay to the second execution of a query, the benefits outweigh the cost in an OLTP environment: if a query is executed countless times, having its plan stored in cache only on the second execution is probably not a major drawback, and if it is never executed again, memory is saved by not caching a query plan that won't get reused.
With this option enabled, when a query executes for the first time, only a small compiled plan stub (based on the query hash) is stored in cache. If the same query executes again, it is deemed worth storing the entire plan in cache.
Enable the Optimize for Ad-Hoc workloads server option if the workload on an OLTP server is mostly made up of ad-hoc queries.
- Change the Errorlog file retention from 8 to 15 files. It is always nice to have more information available when troubleshooting.
- Change the model database data and log files to autogrow by 100MB. Obviously this is a generalization, but it is still more up to date than the default.
- Remove AUTO_CLOSE and AUTO_SHRINK database options from all databases.
- Change the page verify setting to CHECKSUM on every database, whatever its current value.
- Set an appropriate MaxDOP setting for the server in scope.
- Set an appropriate Max Server Memory setting, assuming a single standalone instance on the current server. Comment this step out if that is not your scenario.
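To make this step concrete, here is a minimal sketch (not the actual script) of the kind of T-SQL "0_database_server_options" applies. The database names, backup path and the MaxDOP/memory values are placeholders you would tune per server:

-- Sketch of the server and database options changed by "0_database_server_options".
-- Values and names below are illustrative placeholders, not the script's own logic.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'backup compression default', 1;     -- compress backups by default
EXEC sp_configure 'remote admin connections', 1;       -- allow the remote DAC
EXEC sp_configure 'optimize for ad hoc workloads', 1;  -- cache plan stubs on first execution
EXEC sp_configure 'max degree of parallelism', 8;      -- placeholder; the script derives this per server
EXEC sp_configure 'max server memory (MB)', 28672;     -- placeholder; assumes a single standalone instance
RECONFIGURE;

-- model database: 100MB autogrow for new databases (modeldev/modellog are the default logical names)
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILEGROWTH = 100MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILEGROWTH = 100MB);

-- Per user database: drop AUTO_CLOSE/AUTO_SHRINK and use CHECKSUM page verification
ALTER DATABASE [SomeUserDB] SET AUTO_CLOSE OFF, AUTO_SHRINK OFF;
ALTER DATABASE [SomeUserDB] SET PAGE_VERIFY CHECKSUM;

-- With the compression default enabled, a database that gains nothing from it (e.g. TDE-enabled)
-- can still opt out per backup:
BACKUP DATABASE [SomeTDEDB] TO DISK = N'X:\Backups\SomeTDEDB.bak' WITH NO_COMPRESSION;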
Second, and if there is no other monitoring system, I use the script “1_DatabaseMail_Config”. Just edit the script, replace the configuration variables near the top with the proper information for your account, and run it. This script will:
- Create a Database Mail profile. Usually there is a distribution list for the DBAs, so I'm keen on using that address.
- Create an operator using that Database Mail profile.
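For reference, here is a rough sketch of what "1_DatabaseMail_Config" does under the covers; the account, profile and operator names and addresses below are placeholders, not the script's actual defaults:

-- Sketch only: Database Mail account, profile and SQL Agent operator.
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name    = N'DBA Mail Account',
     @email_address   = N'dba-team@contoso.com',     -- the DBA distribution list
     @display_name    = N'SQL Server Alerts',
     @mailserver_name = N'smtp.contoso.com';

EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = N'DBA Mail Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name    = N'DBA Mail Profile',
     @account_name    = N'DBA Mail Account',
     @sequence_number = 1;

-- The operator that the jobs and alerts created below will notify
EXEC msdb.dbo.sp_add_operator
     @name          = N'SQL Alerts',
     @enabled       = 1,
     @email_address = N'dba-team@contoso.com';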
Third, I deploy the AdaptiveIndexDefrag procedure, usually in MSDB unless there is already a DBA database.
Fourth, I use the script “3_job_AdaptiveIndexDefrag”. This will create a daily job for the AdaptiveIndexDefrag procedure, named “Daily Index Defrag”. It will also notify the previously created operator of the job outcome. The job will always:
- Find some of the most common (default) names for Microsoft-shipped databases (step “DB Exceptions”) and add them to the permanent exclusion list, if not already there. For example, SharePoint grooms its own databases, so we should exclude them from any other automated maintenance task. If the AdaptiveIndexDefrag procedure was NOT created in MSDB, simply replace all references to MSDB with whatever database name you chose.
- Execute the AdaptiveIndexDefrag procedure (step “Daily Index Defrag”).
Note that in this job there are a few differences from the procedure's default values: we are doing online index rebuilds, updating all statistics, sorting in tempDB (assuming tempDB is properly sized) and restricting MaxDOP to 4.
- Purge all historic log data for the index defrag executions using the default 90 days (step “Purge Log”).
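For illustration only, the "Daily Index Defrag" step comes down to a call along these lines. The parameter names below are my assumption of the procedure's interface (and the statistics-related option is omitted), so check the usp_AdaptiveIndexDefrag header in your deployment before reusing this:

-- Illustrative call; parameter names are assumed, confirm them against the procedure header.
EXEC msdb.dbo.usp_AdaptiveIndexDefrag
     @onlineRebuild     = 1,   -- online index rebuilds
     @sortInTempDB      = 1,   -- assumes tempDB is properly sized
     @maxDopRestriction = 4;   -- cap parallelism during index maintenance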
Fifth, I use the script “4_job_AdaptiveCycleErrorlog”. As the name suggests, this creates a job named “Daily Cycle Errorlog”. It runs daily, but will only cycle the Errorlog when its size is over 20MB or its age is over 15 days.
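The gist of that check can be sketched as follows; this simplification only looks at the size threshold (the real job also evaluates the errorlog age), and relies on xp_enumerrorlogs to list the errorlog files:

-- Simplified sketch: cycle the errorlog when the current file (archive 0) exceeds 20MB.
DECLARE @errorlogs TABLE (ArchiveNo int, LogDate nvarchar(128), LogSizeBytes bigint);

INSERT INTO @errorlogs (ArchiveNo, LogDate, LogSizeBytes)
EXEC master.dbo.xp_enumerrorlogs;       -- one row per errorlog file

IF EXISTS (SELECT 1 FROM @errorlogs WHERE ArchiveNo = 0 AND LogSizeBytes > 20 * 1024 * 1024)
    EXEC master.dbo.sp_cycle_errorlog;  -- start a new errorlog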
Sixth, I use the script “5_job_Maintenance” (“5_job_Maintenance_MEA” if you are in the Middle East and Africa region, as it affects weekends) to create the job “Weekly Maintenance”, a stored procedure that handles the logic, and a view in MSDB that allows you to quickly check the output of each job step, formatted as XML so that it is easier to read. The weekly actions run on Fridays for MEA and on Sundays for the rest of the world. This job will:
- Perform integrity checks; we recommend doing so at least once a week. On large databases, and for more frequent checks, consider using the @Physical parameter.
If you need to exclude some databases from this step, simply edit the following statement in the usp_CheckIntegrity stored procedure to add your exclusions, either by name or by database_id, changing:
SELECT database_id FROM sys.databases WHERE is_read_only = 0 AND state = 0 AND database_id <> 2
to something like
SELECT database_id FROM sys.databases WHERE is_read_only = 0 AND state = 0 AND database_id <> 2 AND database_id NOT IN (somedbid1, somedbid2, somedbid3)
- Once a week, execute DBCC CheckDB on all online, read-write user databases below 1TB.
- Daily, execute a subset of all the required integrity checks on all online, read-write user databases over 1TB.
To prepare for this step, all tables and indexed views are divided into similarly sized subsets, or "buckets", and each day we verify a specific bucket (see the sketch after this list). Over the course of one week, your VLDB will be fully checked as part of your proactive integrity checks.
- On Sundays, execute DBCC CHECKALLOC (if VLDBMode = 0) and DBCC CHECKCATALOG over the database, and also execute DBCC CHECKTABLE (if VLDBMode = 0) or DBCC CHECKFILEGROUP (if VLDBMode = 1) for all the objects in the 1st bucket.
- On all remaining days, execute DBCC CHECKTABLE (if VLDBMode = 0) or DBCC CHECKFILEGROUP (if VLDBMode = 1) for all the objects that belong to that day's bucket.
- Once a week, execute DBCC UPDATEUSAGE, which corrects the rows, used pages, reserved pages, leaf pages and data page counts for each partition in a table or index. Although the best practice is to run it routinely only if the database undergoes frequent Data Definition Language (DDL) modifications, such as CREATE, ALTER, or DROP statements, I find that running it on a weekly basis on all online, read-write user databases under 4GB in size does not pose an issue, and so I run it.
- Once a week, execute sp_createstats with the indexonly option on all online, read-write user databases. This creates statistics only on columns that are part of an existing index and are not the first column in any index definition. By creating these single-column statistics we increase the number of histograms, which can improve cardinality estimates, query plans, and query performance.
- Once a week, clean all MSDB job history older than 30 days. Change this value if you have different requirements.
- Once a week, clean Maintenance Plan text reports older than 30 days. This is only needed if you still have some specific maintenance plan you wish to keep for some reason.
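To visualize the bucketization mentioned in the VLDB step above, here is a conceptual sketch of one way to split a database's checkable objects into 7 roughly even groups by size; it is not the code inside usp_CheckIntegrity, which builds and persists its buckets once a week:

-- Conceptual sketch: assign user tables and indexed views to 7 daily buckets by page count.
SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(o.name) AS object_name,
       NTILE(7) OVER (ORDER BY SUM(ps.used_page_count) DESC) AS bucket
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = o.object_id
WHERE o.type IN ('U', 'V')    -- views only appear here if they are indexed (i.e. materialized)
GROUP BY s.name, o.name;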
NOTE: although the job is scheduled on a daily basis, there is logic at the beginning of each step to execute the weekly steps on Sundays only (Fridays for the MEA version). If you need to move the weekly steps to any other day, be sure to change the statement(s) below:
IF (SELECT 1 & POWER(2, DATEPART(weekday, GETDATE())-1)) > 0
Where 1=Sunday, 2=Monday, 4=Tuesday, 8=Wednesday, 16=Thursday, 32=Friday and 64=Saturday.
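For example, to move the weekly steps to Saturdays you would change the leading 1 to 64. Note that DATEPART(weekday, ...) depends on the session's DATEFIRST/language setting; the mapping above assumes the US English default, where Sunday = 1.

IF (SELECT 64 & POWER(2, DATEPART(weekday, GETDATE())-1)) > 0
    PRINT 'It is Saturday - run the weekly steps.';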
NOTE 2: If a database has Read-Only filegroups, any integrity check will fail if there are other open connections to the database. We have a choice to either set the database to single user mode or create a database snapshot prior to executing integrity checks on such databases.
- CreateSnap Parameter:
- 1 will create a database snapshot, run the check on the snapshot, and drop the snapshot at the end (default).
- 0 means the integrity check might fail if there are other open connections on the database.
- Set a custom snapshot creation path with the @SnapPath parameter; otherwise, the same path as the database in scope will be used.
Ex.: @SnapPath = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Data'
- SingleUser Parameter:
- 1 will set the database in single user mode before running the check, and to multi user afterwards.
- 0 means the integrity check might fail if there are other open connections on the database.
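Putting those parameters together, a call to the integrity check procedure would look roughly like the following. Treat this as a sketch and confirm the exact parameter names and defaults against your deployed copy of usp_CheckIntegrity:

-- Sketch only; parameter names and defaults should be verified against the deployed procedure.
EXEC msdb.dbo.usp_CheckIntegrity
     @VLDBMode     = 1,   -- bucketize by filegroup on VLDBs
     @Physical     = 0,   -- full checks; 1 would run PHYSICAL_ONLY where supported
     @AO_Secondary = 0,   -- skip AlwaysOn secondary replicas (the stated default)
     @CreateSnap   = 1,   -- run checks against a database snapshot where needed
     @SnapPath     = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Data';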
Seventh, and as a final (although optional) step, I use the script "6_Agent_Alerts" to create SQL Agent-based alerts for important error messages, for when there are no other monitoring tools to warn the DBA team of important events in SQL Server. The alerts cover the following error codes and severities:
- Severity 10 - Error(s) 825, 833, 855, 856, 3452, 3619, 17179, 17883, 17884, 17887, 17888, 17890 and 28036
- Severity 16 - Error(s) 2508, 2511, 3271, 5228, 5229, 5242, 5243, 5250, 5901, 17130 and 17300
- Severity 17 - Error(s) 802, 845, 1101, 1105, 1121, 1214 and 9002
- Severity 19 - Error(s) 701
- Severity 20 - Error(s) 3624
- Severity 21 - Error(s) 605
- Severity 22 - Error(s) 5180 and 8966
- Severity 23 - Error(s) 5572 and 9100
- Severity 24 - Error(s) 823, 824 and 832
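For reference, each of those alerts comes down to a pair of calls like the following; the alert and operator names here are placeholders, not necessarily the ones the script uses:

-- Sketch for one alert (error 823); the script repeats this pattern per error number/severity.
EXEC msdb.dbo.sp_add_alert
     @name = N'Error 823 - I/O error',
     @message_id = 823,
     @severity = 0,                         -- 0 because the alert is defined on the message id
     @delay_between_responses = 60,
     @include_event_description_in = 1;     -- 1 = include the error text in the email

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Error 823 - I/O error',
     @operator_name = N'SQL Alerts',
     @notification_method = 1;              -- 1 = email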
Having these (or other) scripts to deploy on your SQL Servers saves you the trouble of implementing Maintenance Plan objects, and gives you much better control and flexibility over what happens in your maintenance cycles. Once these have been running on a regular basis, you will know that your SQL Server is being proactively managed for the most common maintenance tasks. Of course, all the logs should be reviewed regularly so that issues can be corrected.
Note: in scripts 1, 3, 4, 5 and 6, remember to set the appropriate SQL Operator name with the @customoper parameter at the top of each job or alert creation. If the specified operator is not found (namely, if you do not want to use email alerts), the jobs will still get created, but no email notifications are sent.
Note 2: in scripts 3, 4, 5 and 6, remember to set the appropriate job owner name with the @jobowner parameter at the top of each job. It defaults to the sa account.
Hope you find these useful.
Change log:
Version | Date | Comments
1 | 18-09-2012 | Initial release.
1.1 | 25-10-2012 | Added script "6_Agent_Alerts.sql". Fixed issue with "1_DatabaseMail_Config.sql" that gave an error when creating the operator. Added notifications by email to the operator created above.
1.2 | 12-09-2013 | Changed system database exclusion choices where used. Changed integrity checks to always run with data purity checks. Changed "Daily Index Maintenance" job to automatically exclude SharePoint databases (based on known naming patterns).
1.2.1 | 08-10-2013 | Changed "Daily Index Maintenance" job to automatically exclude SharePoint and BizTalk databases (based on known naming patterns).
1.2.2 | 17-10-2013 | Changed "Weekly Maintenance" job to implement partial integrity checks on VLDBs over the course of a week. Also changed scheduling accordingly. More information in the step description above.
1.2.3 | 24-10-2013 | Changed "Daily Index Maintenance" job to extend automatic exclusion of known MS product databases that should handle their own defrag scheduling.
1.2.4 | 29-10-2013 | Fixed error in step 3 of "Weekly Maintenance" job.
1.2.5 | 13-11-2013 | Fixed issue with "AdaptiveCycleErrorlog" job that would not cycle the errorlog. Added VLDBMode parameter for VLDB handling in step 5 of "Weekly Maintenance" job, now bucketizing by tables or filegroups (more info in the job description above). Added SingleUser parameter in step 5 of "Weekly Maintenance" job, to circumvent an issue where running a DBCC statement fails in databases with Read-Only filegroups if there are other open connections on the database, by setting the database in scope to single user mode prior to integrity checks.
1.2.6 | 09-01-2014 | Fixed SingleUser parameter use.
1.2.7 | 09-02-2014 | Moved the integrity checks logic in step 5 of "Weekly Maintenance" job to a stored procedure. Added CreateSnap parameter in step 5 of "Weekly Maintenance" job, to circumvent an issue where running a DBCC statement fails in databases with Read-Only filegroups if there are other open connections on the database, by running the checks against a database snapshot. This is the default behavior for databases with Read-Only filegroups.
1.2.8 | 18-02-2014 | Fixed issue with databases that have mathematical symbols in their name in step 5 of "Weekly Maintenance".
1.2.9 | 10-05-2014 | Changed VLDBMode usage for VLDB handling in step 5 of "Weekly Maintenance" job, where buckets are created only once every week. Added MEA version of "Weekly Maintenance" - the regular version does most actions on Sundays, the MEA version does most actions on Fridays.
1.3 | 20-11-2014 | In step 5 of "Weekly Maintenance" job (integrity checks), added a couple of parameters: @AO_Secondary, setting in which AlwaysOn replicas we want to run integrity checks (by default, AlwaysOn primary replicas are eligible for integrity checks, but secondary replicas are skipped); @Physical, which allows running PHYSICAL_ONLY on DBCC commands that support this option (default is OFF).
1.3.1 | 09-12-2014 | Fixed XML character conversion issue in view vw_MaintenanceLog (step 5 of "Weekly Maintenance" job). Changed "Daily Index Maintenance" job to automatically exclude SCOM databases (based on known naming patterns).
1.3.2 | 11-03-2015 | For step 1 "1_DatabaseMail_Config", added @customoper parameter to customize the SQL Operator (if used) creation. For steps that create jobs or alerts, added @customoper and @jobowner parameters to customize the SQL Operator (if used) and job owners more easily; if the specified operator is not found (namely, if you do not want to use email notifications), jobs still get created. Fixed issue in view vw_MaintenanceLog (step 5 of "Weekly Maintenance" job) where no log information was shown.
1.3.3 | 24-10-2016 | For step 0 "0_database_server_options", added steps to properly set the MaxDOP setting for the server in scope and the Max Server Memory setting for a single standalone instance on the current server.
Until next time!
Disclaimer: I hope that the information on these pages is valuable to you. Your use of the information contained in these pages, however, is at your sole risk. All information on these pages is provided "as is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Ezequiel. Further, Ezequiel shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages.
Comments
- Anonymous
September 20, 2012
Hey Eduardo, the 1st script is fixed now. Thanks for noticing the typo. About the "dba_db": it's just meant to be changed to whatever database a DBA chooses to hold the logic. I came to realize most people that use it deploy it in MSDB. Furthermore, the subsequent jobs, as they are scripted, will look for the sproc in MSDB, so that also needs to be changed to the correct database as needed. Cheers
- Anonymous
September 20, 2012
Gotcha. Also check that the jobs are expecting the objects to be in msdb. Then I suggest a "use msdb" instead, with some comment above it clarifying that. Cheers
- Anonymous
September 20, 2012
Right, what I did just now was highlight the notes/warnings on that topic that were already in the above post. Also took your advice on changing the default context to msdb in AdaptiveIndexDefrag - but please note that it can be deployed anywhere. Thanks! PL
- Anonymous
September 20, 2012
Very nice article
- Anonymous
July 22, 2013
Hello João, you mean step 2 in the script 5_job_Maintenance.sql? Because I cannot find a "SET @dbname = QUOTENAME(@dbname)" string in the code. Maybe you're using an older version? Thanks!