Index Build Strategy in SQL Server - Part 2: Offline, Parallel, No Partitioning (Non-Stats Plan / No Histogram)

Build (serial)                      (write data to the in-build index)
        |
 X (Merge Exchange)
      / | \
Sort … Sort … Sort …                (order by index key)
  |      |      |
Scan … Scan … Scan …                (read data from source)

When a histogram is not available (for example, when we are building an index on a view), we can't use the same methods described in the previous post. Statistics gathering is possible only for 'real' objects; because we are building an index on a view, the index does not exist at this point and we cannot gather sample statistics on the view, which is why we get a different plan here. Instead, we use a regular parallel scan, which is not aware of the data distribution at all.
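As a concrete illustration of the scenario, the statements below sketch an indexed view whose first index has to be built without a histogram. The table, view, and column names are hypothetical; only the overall pattern (schema-bound view plus its first unique clustered index, built with a parallel MAXDOP setting) matters here.

-- Hypothetical schema-bound view; until its clustered index exists,
-- there is no histogram on the view for the optimizer to sample.
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(LineTotal) AS TotalSales,
       COUNT_BIG(*)   AS RowCnt          -- required for indexed views with aggregates
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
GO

-- Building the first (unique clustered) index on the view materializes it.
-- At compile time no statistics can be sampled from the not-yet-existing
-- index, so the build uses the non-stats parallel plan shown above.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
ON dbo.vSalesByProduct (ProductID)
WITH (MAXDOP = 4);
GO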


How this works:

We scan the source data in parallel using a regular parallel scan, but the b-tree build itself is serial.

Each worker scans pages from the heap the same way as in the previous parallel scan method. When the scan is done, a sort table is created and populated for each worker; each worker maintains its own sort table. Because these sort tables are not disjoint, we cannot simply build separate b-trees and 'stitch' them together; the sort tables have to be merged and produced as a single final sorted output. After that, the index build operation is serial, since there is only one final output: the index builder consumes rows from the Merge Exchange and builds the final index.
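If you want to watch this division of labor while the build is running, a query along the following lines (run from a second session, against the hypothetical index from the earlier example) shows the single CREATE INDEX request together with the tasks backing its scan/sort workers. The dop and parallel_worker_count columns on sys.dm_exec_requests are available in SQL Server 2016 and later.

-- Observe a running CREATE INDEX from another session:
-- one request, several tasks (the scan/sort workers) plus the
-- coordinator that performs the serial b-tree build.
SELECT r.session_id,
       r.command,                  -- shows CREATE INDEX while the build runs
       r.dop,                      -- degree of parallelism actually used
       r.parallel_worker_count,    -- workers feeding the merge exchange
       t.task_address,
       t.scheduler_id
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks      AS t
  ON t.session_id = r.session_id
WHERE r.command LIKE N'CREATE INDEX%';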

Why is this plan relatively slow? The b-tree is built serially, and the merge exchange adds some extra overhead on top of that.

Memory consideration:

In a parallel index build we are building multiple sort tables concurrently, so the basic memory requirement is higher and the calculation is a bit different. The memory grant has two parts: 1) required memory and 2) additional memory. Each sort requires 40 pages of required memory, so with DOP = 2 we have two sort tables and need 80 pages of required memory. The total additional memory, however, stays the same regardless of the DOP setting, because the total number of rows stays the same regardless of DOP. For example, if a serial plan needs 500 pages of additional memory, the parallel plan requests the same 500 pages; each worker then gets 500/DOP pages of additional memory plus 40 pages of required memory.
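To make the arithmetic concrete (using only the example numbers above, DOP = 2 and 500 pages of additional memory, with 8 KB pages), the required portion is 40 pages per sort table and the additional portion is split among the workers. The grant can be compared against this estimate by querying sys.dm_exec_query_memory_grants from another session while the build is running.

-- Worked example with the figures from the paragraph above:
--   required   = 40 pages * DOP = 80 pages  (~640 KB for the two sort tables)
--   additional = 500 pages total            (250 pages, ~2,000 KB per worker)
--   total grant ~= 580 pages                (~4,640 KB)

-- Observe the actual grant while the index build is running:
SELECT session_id,
       dop,
       required_memory_kb,    -- roughly 40 pages * 8 KB * DOP
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb
FROM sys.dm_exec_query_memory_grants;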