
Stored procedure running for hours

pdsqsql 436 Reputation points
2026-03-21T16:55:37.63+00:00

Hello,

I am currently facing an issue where a stored procedure runs for many hours, even though my database is not that big, around 20 GB.

We are also running Ola Hallengren's maintenance solution: statistics updates daily and index maintenance on a weekly basis.

The stored procedure calls multiple stored procedures. I am currently looking into the code, but before I try to fix the code, I am reviewing index fragmentation, statistics updates, and any missing indexes.

I am looking for some clarification and guidance while reviewing some of the above for tuning purposes.

I know that even though we are running Ola's maintenance solution, I can see some fragmented indexes and statistics that are not updated efficiently, so I have started to update statistics with FULLSCAN on specific tables instead of at the database level.

Should an index rebuild be based on the >30% fragmentation report alone, or should I consider page density, page count, or some combination of factors?

I have also watched CPU and memory while the process was running, and both show maximum utilization, so I started analyzing those queries and am trying to add an index.

Later on I might also look for redundant or unused indexes and decide whether to drop them.

For a lot of tables we do a kind of burn and build (bulk delete and reload) on a daily basis.

Since Profiler is old now, setting up Extended Events seems the best idea. How can I set it up to capture the troublesome SQL statements/objects?

Appreciate your help!

SQL Server | SQL Server Transact-SQL

2 answers

  1. Q&A Assist
    2026-03-21T16:56:00.14+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    For a stored procedure that suddenly runs for hours, focus first on identifying the exact bottleneck, then use index/statistics work as a supporting action rather than the primary strategy.

    1. Identify the actual expensive queries
    • Use DMVs to find the high‑CPU and long‑running statements while the procedure is running (a sample query follows this item). The guidance for high CPU and slow queries is:
      • Identify queries currently consuming CPU, then, if needed, look at historical CPU‑bound queries.
      • For each top query, capture the actual execution plan and logical reads.
    • For slow queries in general, use the same approach: identify the specific statements inside the procedure that are waiting or consuming CPU, then analyze their plans.
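    A minimal sketch of that first step, using only standard DMVs (requires VIEW SERVER STATE; the columns and ordering are illustrative):

    SELECT r.session_id,
           r.cpu_time,                    -- ms of CPU consumed so far
           r.total_elapsed_time,          -- ms since the request started
           r.wait_type,
           SUBSTRING(t.text, r.statement_start_offset / 2 + 1,
               (CASE WHEN r.statement_end_offset = -1
                     THEN DATALENGTH(t.text)
                     ELSE r.statement_end_offset
                END - r.statement_start_offset) / 2 + 1) AS running_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID           -- exclude this monitoring query
    ORDER BY r.cpu_time DESC;

    Run it while the procedure is executing to see which statement inside the nested calls is burning CPU or waiting.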
    2. Statistics before index rebuilds
    • Index rebuilds are expensive in CPU, I/O, and log usage. They should not be run just because fragmentation is >30%.
    • A rebuild has a powerful side effect: it fully updates statistics on the index key columns (equivalent to UPDATE STATISTICS ... WITH FULLSCAN) and forces recompilation of dependent plans. Many “improvements after rebuild” are actually due to better statistics, not reduced fragmentation.
    • Because of this, for performance tuning of a long‑running procedure, updating statistics is often the first and cheaper step:
      • Update statistics on the tables used by the expensive queries (key columns and important non‑indexed columns) with a higher sampling rate or FULLSCAN where justified.
      • sp_updatestats can be used to refresh all stats in a database, but for a specific procedure, target only the relevant tables (see the example after this item).
    • The resource cost of updating statistics is minor compared to an index rebuild and often completes in minutes, whereas rebuilds can take hours.
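    For example (dbo.YourLargeTable is a placeholder for a table behind the expensive queries):

    -- Targeted: refresh all statistics on one table with a full scan.
    UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

    -- Broader but less targeted: refresh every statistic in the database
    -- that has had modifications since its last update.
    EXEC sys.sp_updatestats;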
    3. How to decide on index rebuild vs. reorganize
    • Do not rely on a single fragmentation threshold (for example, “>30% = rebuild”) in isolation.
    • Consider at least these together:
      • Fragmentation level
      • Page density (how full pages are)
      • Page count (very small indexes often do not benefit from rebuilds)
      • Workload pattern (OLTP vs. reporting, heavy range scans vs. point lookups)
      • How often the table is “burn and build” (bulk delete/insert patterns can make fragmentation transient)
    • For heavily updated or “burn and build” tables, review fragmentation and page density after large delete/update operations. Rebuild or reorganize only where fragmentation and low page density are clearly affecting the specific queries identified in step 1.
    • Since Ola’s solution already handles fragmentation based on thresholds, use it as baseline maintenance and reserve ad‑hoc rebuilds for cases where a particular index is clearly implicated in a bad plan. A query like the one below surfaces fragmentation, page density, and page count together.
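    A diagnostic query along these lines returns all three measures in one result set (SAMPLED mode is needed to get page density; LIMITED returns NULL for it):

    SELECT OBJECT_NAME(ips.object_id)          AS table_name,
           i.name                              AS index_name,
           ips.avg_fragmentation_in_percent,   -- logical fragmentation
           ips.avg_page_space_used_in_percent, -- page density
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.page_count > 1000               -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;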
    4. High CPU and memory during the run
    • When CPU is high and the procedure is active, use the high‑CPU troubleshooting pattern:
      • Identify the top CPU‑consuming queries.
      • Examine their actual execution plans.
      • For each, apply:
        • Statistics updates (as above).
        • Missing index analysis using the missing index DMVs, then create only those indexes that clearly reduce logical reads for the target queries (a sample query follows this item).
        • Query rewrites to improve SARGability (avoid functions on columns in predicates, mismatched data types, non‑searchable patterns) and to remove unnecessary row goals (TOP, FAST N, etc.) when they cause bad nested loops.
    • If, after statistics updates and targeted indexing, CPU is still high, focus on query and schema design rather than more index maintenance.
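    For the missing‑index part, the DMVs can be queried like this; treat the output as suggestions to validate against the actual plans, not as indexes to create blindly:

    SELECT TOP (20)
           mid.statement               AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact        -- estimated % cost reduction
    FROM sys.dm_db_missing_index_details     AS mid
    JOIN sys.dm_db_missing_index_groups      AS mig
      ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
      ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_user_impact DESC;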
    5. Handling redundant/unused indexes
    • Once the main performance issue is understood, review the indexes used by the expensive queries and those that are never used (a usage‑stats query follows this item).
    • Dropping unused or redundant indexes can help overall performance by reducing write and maintenance overhead, but do this after stabilizing the critical workload.
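    A query along these lines shows read benefit versus write cost per nonclustered index; the counters reset at instance restart, so judge over a representative period:

    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           us.user_seeks, us.user_scans, us.user_lookups,
           us.user_updates          -- pure maintenance cost if seeks/scans/lookups are 0
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
      ON us.object_id   = i.object_id
     AND us.index_id    = i.index_id
     AND us.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY us.user_updates DESC;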
    6. Extended Events instead of Profiler
    • For ongoing troubleshooting of long‑running procedures, Extended Events is preferred over Profiler.
    • Typical approach (a sample session follows this item):
      • Create an Extended Events session to capture:
        • Long‑running statements (by duration or CPU).
        • High logical reads.
        • Wait information for sessions running the stored procedure.
      • Filter by database and/or object name where possible to keep the session lightweight.
    • Use the captured data to:
      • Identify which nested procedure or statement is slow.
      • Correlate waits (for example, THREADPOOL, I/O waits) with specific queries.
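    A sketch of such a session; the 5‑second duration filter and the database name are placeholders to adjust:

    CREATE EVENT SESSION LongRunningStatements ON SERVER
    ADD EVENT sqlserver.sp_statement_completed (
        ACTION (sqlserver.database_name, sqlserver.session_id, sqlserver.sql_text)
        WHERE duration > 5000000                   -- microseconds, i.e. > 5 seconds
          AND sqlserver.database_name = N'YourDatabase')
    ADD TARGET package0.event_file (SET filename = N'LongRunningStatements.xel')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

    ALTER EVENT SESSION LongRunningStatements ON SERVER STATE = START;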
    7. Maintenance cadence in this scenario
    • With Ola’s solution:
      • Keep weekly index maintenance, but avoid adding more frequent rebuilds unless a specific index is proven to be the cause.
      • Ensure statistics are updated often enough. For busy OLTP workloads, daily or even more frequent targeted stats updates on critical tables may be needed.
    • For Configuration Manager and similar workloads, guidance is to:
      • Perform index maintenance as often as once a week.
      • Update statistics as often as once a day. This pattern is a good reference point: update statistics more frequently than you rebuild indexes (an example IndexOptimize call follows this item).
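    With Ola's solution this cadence can be a separate daily statistics-only job alongside the weekly index job; a hedged example using the stock IndexOptimize parameters:

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow    = NULL,    -- NULL = take no index action
        @FragmentationMedium = NULL,
        @FragmentationHigh   = NULL,
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';  -- skip statistics with no changes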

    In summary: for a stored procedure running for hours, prioritize identifying the specific high‑CPU/long‑running queries and improving their plans via statistics, targeted indexes, and query rewrites. Use fragmentation and index rebuilds as a secondary, evidence‑based optimization, not based solely on a fixed “>30% fragmentation” rule.



  2. Erland Sommarskog 133.7K Reputation points MVP Volunteer Moderator
    2026-03-21T18:44:27.13+00:00

    For a 20 GB database, I would not bother very much about fragmentation. Overall, fragmentation is a bit of a thing of the past. Particularly the type of fragmentation where pages that are logically adjacent in an index are not physically adjacent on disk: this mattered a lot with spinning disks, far less so with SSDs. Fragmentation in the sense that pages may only be half-full still matters. But again, for this small database, no, I don't think that is where you should put your focus. It may actually not matter at all.

    You say that there are stored procedures that call other stored procedures. I get an unpleasant feeling that this is code running one-row-at-a-time processing in loops. The best way is to rewrite the code into set-based operations, but that can be a major undertaking. When you run loops, indexes become even more crucial, even on smaller tables. A thing I've seen more than once is people who run a poor man's cursor, by looping over a temp table like this:

    WHILE EXISTS (SELECT * FROM #tmp WHERE done = 0)
    BEGIN
       SELECT TOP(1) @id = id, ...
       FROM  #tmp
       WHERE  done = 0
       ORDER BY id 
    
       --  Do stuff
    
       UPDATE #tmp
       SET done = 1
       WHERE id = @id
    END
    

    And the temp table is entirely un-indexed, so all these operations require scans. It takes just a couple of thousand rows to make this very costly.
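    If a set-based rewrite is not feasible right away, simply indexing the temp table turns those scans into seeks, something like:

    -- Supports the UPDATE ... WHERE id = @id
    CREATE CLUSTERED INDEX tmp_id_ix ON #tmp (id);
    -- Supports WHERE done = 0 ORDER BY id in the EXISTS check and TOP(1) lookup
    CREATE INDEX tmp_done_ix ON #tmp (done, id);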

    Assuming that your code is running loops, a very good tool to troubleshoot this is sp_sqltrace, originally written by Lee Tudor, which I am happy to host on my web site. You can simply say:

    EXEC sp_sqltrace 'EXEC slow_sp', @order = 'Duration'
    

    It will start a trace filtered for your spid, capturing sp_statement_completed. When slow_sp has completed, it will analyse the trace, and if the same statement is executed multiple times, the data for those executions is aggregated into a single row. The statement(s) that get to the top are the ones you should look into. Beware that if the procedure runs for more than five minutes, you need to adjust the @trace_timeout parameter, because by default the trace stops after five minutes.

    Then again, maybe you don't need the data from the full execution. You can also say:

    EXEC sp_sqltrace 77, @order = 'Duration'
    

    sp_sqltrace will now set up a trace filtered for spid 77 (or whichever spid the slow procedure is running on), run that trace for 10 seconds (adjustable with the @snoop_time parameter), and then perform the aggregation.

    You can also use sp_sqltrace to capture execution plans, should that be needed.

    Even if I have given you an introduction and some examples, I still recommend that you read the manual page before you start playing.

