Index creation plan estimates

ACDBA 416 Reputation points

Hi All,

Hope everyone is safe.

I have a SQL Server 2017 database with the compatibility level set to 130 (one version below the server's native level).

I have a partitioned table with around 100 million records that already has a clustered index. While I create a nonclustered index (NCI), CPU stays at 100% for almost 2 hours and it brings the server to its knees.

There are no included columns in the NCI. It's a 128 GB, 16-CPU VM.

Any idea why it's so slow and consumes so much CPU? The waits seem to be memory/CPU related. Will there be any difference if we switch the compatibility level to 140?

The execution plan and the details are attached.




3 answers

  1. Patrick C 81 Reputation points

    You could use the query below to track the progress and get an estimate of the completion time.
    From your plan it looks like the Sort consumes most of the CPU.
    Also, it might be obvious, but try to do this when there is either no or minimal load.

    Basically, you run SET STATISTICS PROFILE ON; in the session where you will run the CREATE INDEX script, note that session's SPID, and enter it in the query below in another tab.

    This will help you understand whether, after 2 hours, you were at 99% or still at 10%.

    Remember to turn STATISTICS PROFILE on in the session running the index creation script, or the query below will return nothing.


    -- Run SET STATISTICS PROFILE ON; in the session executing CREATE INDEX,
    -- note its SPID, then run this query from another session.
    DECLARE @SPID int = 53; -- replace with the SPID of the CREATE INDEX session

    ;WITH agg AS
    (
        SELECT SUM(qp.[row_count]) AS [RowsProcessed],
               SUM(qp.[estimate_row_count]) AS [TotalRows],
               MAX(qp.last_active_time) - MIN(qp.first_active_time) AS [ElapsedMS],
               MAX(IIF(qp.[close_time] = 0 AND qp.[first_row_time] > 0,
                       qp.[physical_operator_name],
                       N'<Transition>')) AS [CurrentStep]
        FROM sys.dm_exec_query_profiles qp
        WHERE qp.[physical_operator_name] IN (N'Table Scan', N'Clustered Index Scan', N'Sort')
          AND qp.[session_id] = @SPID
    ), comp AS
    (
        SELECT *,
               ([TotalRows] - [RowsProcessed]) AS [RowsLeft],
               ([ElapsedMS] / 1000.0) AS [ElapsedSeconds]
        FROM agg
    )
    SELECT [CurrentStep],
           (([RowsProcessed] * 1.0) / [TotalRows]) * 100 AS [PercentComplete],
           (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]) AS [EstimatedSecondsLeft],
           DATEADD(SECOND,
                   (([ElapsedSeconds] / [RowsProcessed]) * [RowsLeft]),
                   GETUTCDATE()) AS [EstimatedCompletionTime]
    FROM comp;

    See the documentation for sys.dm_exec_query_profiles for reference.


  2. Erland Sommarskog 88,856 Reputation points

    Will there be any difference if we switch compatibility level to 140?

    I would not really expect so, but the only way to find out is to test.

    Two hours is a wee bit of a long time, but there is a lot of data to be processed, read, and written. The bottleneck could very well be your I/O subsystem getting overtaxed.

    You could sample sys.dm_io_virtual_file_stats while the operation is running and compute the average response time for the I/O requests to get an idea of how your I/O subsystem is performing.

    Also, you said VM, but what sort of? Is it on-prem or in the cloud?
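The sampling Erland suggests could be sketched like this (a hypothetical example, not from the thread; the 10-second delay is an arbitrary interval, and the column names are from the documented DMV):

```sql
-- Snapshot cumulative file I/O stats, wait, snapshot again, and compute
-- the average latency per read and per write during the interval.
SELECT database_id, file_id,
       num_of_reads, io_stall_read_ms,
       num_of_writes, io_stall_write_ms
INTO #s1
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:00:10';  -- sampling interval; adjust as needed

SELECT s2.database_id, s2.file_id,
       (s2.io_stall_read_ms  - s1.io_stall_read_ms)  * 1.0
           / NULLIF(s2.num_of_reads  - s1.num_of_reads, 0)  AS avg_read_ms,
       (s2.io_stall_write_ms - s1.io_stall_write_ms) * 1.0
           / NULLIF(s2.num_of_writes - s1.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS s2
JOIN #s1 AS s1
  ON s1.database_id = s2.database_id
 AND s1.file_id     = s2.file_id
ORDER BY avg_read_ms DESC;

DROP TABLE #s1;
```

Sustained averages well above a few milliseconds on the data and tempdb files during the index build would point at the I/O subsystem rather than the query plan.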

  3. Dan Guzman 7,626 Reputation points

    Like Erland, I would not expect the compatibility level to make a difference in the index create performance. I ran a test on my workstation to verify and observed no performance difference with compatibility level 130 vs. 140. It took a little over one minute to create a partitioned single column nonclustered rowstore index on a 550 million row clustered columnstore partitioned table. This was using a Hyper-V VM (SQL 2017 CU28, 64GB RAM, 16 core).

    There may be particulars of your index/partitioning that contribute to the high elapsed time, but I suggest you first rule out VM infrastructure that isn't providing sufficient CPU cycles. 2 hours seems quite high. Also, consider applying the latest CU if you have not already done so.
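To check the current patch level before comparing it against the latest available CU, the documented server properties can be queried directly:

```sql
-- Shows the build number and update level (e.g. 'CU28') of the instance.
SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel,
       SERVERPROPERTY('Edition')            AS Edition;
```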