For a stored procedure that suddenly runs for hours, focus first on identifying the exact bottleneck, then use index/statistics work as a supporting action rather than the primary strategy.
- Identify the actual expensive queries
- Use DMVs to find the high‑CPU and long‑running statements while the procedure is running. The standard troubleshooting guidance for high‑CPU and slow queries is:
- Identify queries currently consuming CPU, then, if needed, look at historical CPU‑bound queries.
- For each top query, capture the actual execution plan and logical reads.
- For slow queries in general, use the same approach: identify the specific statements inside the procedure that are waiting or consuming CPU, then analyze their plans.
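The DMV step above can be sketched as follows. This is a minimal example, assuming VIEW SERVER STATE permission; it lists currently running statements ordered by CPU, together with their text and actual in-flight plan:

```sql
-- Top currently running requests by CPU consumed so far.
SELECT TOP (10)
    r.session_id,
    r.cpu_time,                 -- ms of CPU used by the request
    r.total_elapsed_time,       -- ms since the request started
    r.logical_reads,
    r.wait_type,
    SUBSTRING(t.text, r.statement_start_offset / 2 + 1,
        (CASE WHEN r.statement_end_offset = -1
              THEN DATALENGTH(t.text)
              ELSE r.statement_end_offset END
         - r.statement_start_offset) / 2 + 1) AS running_statement,
    p.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
OUTER APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;
```

Run this while the stored procedure is executing to see which statement inside it is actually consuming the time.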
- Statistics before index rebuilds
- Index rebuilds are expensive in CPU, I/O, and log usage. They should not be run just because fragmentation is >30%.
- A rebuild has a powerful side effect: it fully updates statistics on the index key columns (equivalent to UPDATE STATISTICS ... WITH FULLSCAN) and forces recompilation of dependent plans. Many “improvements after rebuild” are actually due to better statistics, not reduced fragmentation.
- Because of this, for performance tuning of a long‑running procedure, updating statistics is often the first and cheaper step:
- Update statistics on the tables used by the expensive queries (key columns and important non‑indexed columns) with a higher sampling rate or FULLSCAN where justified.
- sp_updatestats can be used to refresh all statistics in a database, but for a specific procedure, target only the relevant tables.
- The resource cost of updating statistics is minor compared to an index rebuild: a targeted statistics update often completes in minutes, whereas rebuilds can take hours.
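A targeted statistics update can be sketched like this. The table name is a placeholder; substitute the tables used by the expensive queries identified earlier:

```sql
-- Full scan where the table size justifies it.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- On very large tables, a higher sampling rate may be a better trade-off
-- than a full scan.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 50 PERCENT;
```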
- How to decide on index rebuild vs. reorganize
- Do not rely on a single fragmentation threshold (for example, “>30% = rebuild”) in isolation.
- Consider at least these together:
- Fragmentation level
- Page density (how full pages are)
- Page count (very small indexes often do not benefit from rebuilds)
- Workload pattern (OLTP vs. reporting, heavy range scans vs. point lookups)
- How often the table is “burn and build” (bulk delete/insert patterns can make fragmentation transient)
- For heavily updated or “burn and build” tables, review fragmentation and page density after large delete/update operations. Rebuild or reorganize only where fragmentation and low page density are clearly affecting the specific queries identified in step 1.
- Since Ola’s solution already handles fragmentation based on thresholds, use it as baseline maintenance and reserve ad‑hoc rebuilds for cases where a particular index is clearly implicated in a bad plan.
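The factors listed above can be inspected together with one query. A minimal sketch, assuming the current database; the page-count floor is an illustrative threshold, not a fixed rule:

```sql
-- Fragmentation, page density, and page count side by side,
-- so no single metric drives the rebuild decision.
SELECT
    OBJECT_NAME(ips.object_id)          AS table_name,
    i.name                              AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.avg_page_space_used_in_percent  AS page_density,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.page_count > 1000             -- very small indexes rarely benefit
ORDER BY ips.avg_fragmentation_in_percent DESC;
```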
- High CPU and memory during the run
- When CPU is high and the procedure is active, use the high‑CPU troubleshooting pattern:
- Identify the top CPU‑consuming queries.
- Examine their actual execution plans.
- For each, apply:
- Statistics updates (as above).
- Missing index analysis using the missing index DMVs, then create only those indexes that clearly reduce logical reads for the target queries.
- Query rewrites to improve SARGability (avoid functions on columns in predicates, mismatched data types, non‑searchable patterns) and to remove unnecessary row goals (TOP, FAST N, etc.) when they cause bad nested loops.
- If, after statistics updates and targeted indexing, CPU is still high, focus on query and schema design rather than more index maintenance.
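The missing-index analysis and the SARGability point can be illustrated briefly. The ranking formula below is a common heuristic, not an official score, and the rewrite uses placeholder column names; treat DMV suggestions as candidates only:

```sql
-- Candidate missing indexes, ranked by a rough estimated benefit.
SELECT TOP (10)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact,
    migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact
        AS estimated_benefit
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY estimated_benefit DESC;

-- SARGability: the function on the column blocks an index seek...
SELECT OrderId FROM dbo.Orders
WHERE CONVERT(date, OrderDate) = '2024-01-01';

-- ...while a range predicate on the bare column allows one.
SELECT OrderId FROM dbo.Orders
WHERE OrderDate >= '2024-01-01' AND OrderDate < '2024-01-02';
```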
- Handling redundant/unused indexes
- Once the main performance issue is understood, review both the indexes used by the expensive queries and any indexes that are never used.
- Dropping unused or redundant indexes can help overall performance by reducing write and maintenance overhead, but do this after stabilizing the critical workload.
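Unused-index candidates can be found with the usage-stats DMV. Note that these counters reset at instance restart, so confirm the pattern over a full business cycle before dropping anything:

```sql
-- Nonclustered indexes that incur write cost but show no reads.
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name                   AS index_name,
    ISNULL(ius.user_updates, 0) AS write_cost
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
    ON ius.object_id   = i.object_id
   AND ius.index_id    = i.index_id
   AND ius.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND ISNULL(ius.user_seeks, 0)
    + ISNULL(ius.user_scans, 0)
    + ISNULL(ius.user_lookups, 0) = 0
ORDER BY write_cost DESC;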
- Extended Events instead of Profiler
- For ongoing troubleshooting of long‑running procedures, Extended Events is preferred over Profiler.
- Typical approach:
- Create an Extended Events session to capture:
- Long‑running statements (by duration or CPU).
- High logical reads.
- Wait information for sessions running the stored procedure.
- Filter by database and/or object name where possible to keep the session lightweight.
- Use the captured data to:
- Identify which nested procedure or statement is slow.
- Correlate waits (for example, THREADPOOL, I/O waits) with specific queries.
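A session along these lines would implement the approach above. The session name, database name, and 5‑second duration filter are placeholders; the `duration` field is measured in microseconds:

```sql
-- Lightweight session: statements over 5 seconds in one database,
-- written to an event file for later analysis.
CREATE EVENT SESSION [LongRunningStatements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.session_id, sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 5000000
      AND sqlserver.database_name = N'YourDatabase')
ADD TARGET package0.event_file (SET filename = N'LongRunningStatements.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);

ALTER EVENT SESSION [LongRunningStatements] ON SERVER STATE = START;
```

Stop the session once the slow run has been captured to keep overhead minimal.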
- Maintenance cadence in this scenario
- With Ola’s solution:
- Keep weekly index maintenance, but avoid adding more frequent rebuilds unless a specific index is proven to be the cause.
- Ensure statistics are updated often enough. For busy OLTP workloads, daily or even more frequent targeted stats updates on critical tables may be needed.
- For Configuration Manager and similar workloads, guidance is to:
- Perform index maintenance as often as once a week.
- Update statistics as often as once a day. This pattern is a good general reference: update statistics more frequently than you rebuild indexes.
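With Ola Hallengren's solution, a daily statistics-only run can sit alongside the existing weekly index maintenance. A sketch, assuming the publicly documented IndexOptimize parameters; the @Databases value is a placeholder:

```sql
-- Daily statistics-only pass: skip index maintenance, update modified stats.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,        -- NULL thresholds disable index actions
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';   -- only stats with row modifications
```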
In summary: for a stored procedure running for hours, prioritize identifying the specific high‑CPU/long‑running queries and improving their plans via statistics, targeted indexes, and query rewrites. Use fragmentation and index rebuilds as a secondary, evidence‑based optimization, not based solely on a fixed “>30% fragmentation” rule.
References:
- Troubleshoot slow-running queries in SQL Server
- Troubleshoot high-CPU-usage issues in SQL Server
- Optimize index maintenance to improve query performance and reduce resource consumption
- Configuration Manager site sizing and performance FAQ
- Is it okay to run Ola Hallengren’s index rebuild and stats update script from a separate database instead of master, and will this impact fragmentation handling?
- Tools for SQL Query Tuning - Microsoft Q&A