Decorrelating Subqueries
In my previous post, we saw some examples where the optimizer is able to take a query with a correlated subquery and rewrite it as a join. For instance, we saw how this simple “in” subquery:
create table T1 (a int, b int)
create table T2 (a int, b int, c int)
select *
from T1
where T1.a in (select T2.a from T2 where T2.b = T1.b)
can be rewritten as a join (specifically, a left semi-join):
|--Nested Loops(Left Semi Join, WHERE:([T2].[b]=[T1].[b] AND [T1].[a]=[T2].[a]))
|--Table Scan(OBJECT:([T1]))
|--Table Scan(OBJECT:([T2]))
Notice how there is no correlation between the two table scans. We can scan either table independently and use any join algorithm that we want. We call this optimization strategy decorrelation. We cannot decorrelate (or remove the correlation from) all subqueries. For example, as we saw last week, simply replacing the “in” subquery with a scalar “=” subquery (in the absence of a unique index) forces the optimizer to add an assert operator to the plan and prevents decorrelation. Even when we can decorrelate a subquery, the result is not automatically a better plan; rather, decorrelation lets the optimizer consider more alternatives, and one of those may turn out to be cheaper.
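For reference, the scalar variant that blocks decorrelation looks like this. Without a unique index on T2, the optimizer must verify at runtime that the subquery returns at most one row, so it adds an assert operator and cannot rewrite the subquery as a join:

```sql
-- Scalar "=" subquery: absent a unique index on T2, the optimizer must
-- check at runtime that at most one row comes back, so it adds an assert
-- operator to the plan and cannot decorrelate.
select *
from T1
where T1.a = (select T2.a from T2 where T2.b = T1.b)
```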
Decorrelating Aggregation Subqueries
Recall the following query from my last post:
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b < T1.b)
|--Filter(WHERE:([T1].[a]>[Expr1008]))
|--Nested Loops(Inner Join, OUTER REFERENCES:([T1].[b]))
|--Table Scan(OBJECT:([T1]))
|--Stream Aggregate(DEFINE:([Expr1008]=MAX([T2].[a])))
|--Table Scan(OBJECT:([T2]), WHERE:([T2].[b]<[T1].[b]))
We are unable to decorrelate this subquery. Now consider an almost identical query, but replace the “<” in the subquery where clause with an “=”:
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
This seemingly minor change enables us to remove the correlation and write this query as a regular join:
select T1.*
from T1, (select T2.b, max(T2.a) max_a from T2 group by T2.b) S
where T1.b = S.b and T1.a > S.max_a
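One point worth verifying in this rewrite: if a row of T1 has no matching T2.b, the original subquery returns NULL, the predicate “T1.a > NULL” does not evaluate to true, and the row is filtered out; the inner join in the rewritten query drops the same row, so the two forms return identical results. A quick illustration (the inserted values are purely hypothetical):

```sql
-- Assume no existing T2 row has b = 999.  For this T1 row the original
-- query's subquery yields NULL, so "T1.a > NULL" is not true and the row
-- is rejected; the rewrite's inner join finds no matching group and
-- likewise drops the row.  Both forms agree.
insert T1 values (1, 999)
```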
Both queries choose this query plan:
|--Nested Loops(Inner Join, OUTER REFERENCES:([T2].[b], [Expr1008]))
|--Stream Aggregate(GROUP BY:([T2].[b]) DEFINE:([Expr1008]=MAX([T2].[a])))
| |--Sort(ORDER BY:([T2].[b] ASC))
| |--Table Scan(OBJECT:([T2]))
|--Index Spool(SEEK:([T1].[b]=[T2].[b] AND [T1].[a] > [Expr1008]))
|--Table Scan(OBJECT:([T1]))
While the original plan executes the subquery once for each row of T1, this plan works by effectively pre-computing all possible results for the subquery and joining that result with T1. To compute all possible subquery results, we add a group by clause to the subquery on the join key. (The index spool operator creates an index on the fly to improve the performance of the join. Instead of a nested loops join with a table scan on the inner side, we have an index nested loops join with an index seek on the inner side.)
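The index built by the spool is transient; it exists only for the duration of the execution. As a sketch (the index name is hypothetical, and whether the optimizer actually chooses it depends on statistics and costing), a permanent index on T1 covering the spool's seek columns could play the same role:

```sql
-- Hypothetical permanent index matching the spool's seek on T1.b, with
-- T1.a available for the residual comparison.  Given such an index, the
-- optimizer can use an ordinary index seek on the inner side of the join
-- instead of building an index spool at execution time.
create index T1_b_a on T1(b, a)
```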
So, when is the new plan better? If we have just a few unique values of T2.b and, thus, just a few groups to compute and if we have many rows of T1, the new plan is very effective. On the other hand, if we have many unique values of T2.b and just a few rows in T1, we might be better off computing only those aggregations that we need:
set nocount on
declare @i int
set @i = 0
while @i < 10000
begin
insert T2 values(@i, @i, @i)
set @i = @i + 1
end
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
|--Filter(WHERE:([T1].[a]>[Expr1008]))
|--Stream Aggregate(GROUP BY:([Bmk1000]) DEFINE:([Expr1008]=MAX([T2].[a]), [T1].[a]=ANY([T1].[a]), [T1].[b]=ANY([T1].[b])))
|--Nested Loops(Inner Join, WHERE:([T1].[b]=[T2].[b]))
|--Table Scan(OBJECT:([T1]))
|--Table Scan(OBJECT:([T2]))
This plan is basically the same as the original plan (with the correlation) except that we have pulled the aggregation above the join.
Note that, with the aggregation above the join, we group by [Bmk1000]. This is the unique relational key for T1. You can tell that [Bmk1000] is the key for T1 by checking the defined values column of showplan_all output. We must group on the key for T1 since each row of T1 may join with multiple rows of T2 yet we want to compute MAX(T2.a) for each individual row of T1.
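To inspect these defined values yourself, request the textual showplan before running the query. This is standard T-SQL, though the exact expression names ([Bmk1000], [Expr1008], and so on) can vary from run to run:

```sql
-- Request the estimated plan text instead of executing the query; the
-- DefinedValues column of the output shows where [Bmk1000] is produced.
set showplan_all on
go
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
go
set showplan_all off
go
```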
We are able to use a stream aggregate without sorting since the nested loops join outputs all rows of T2 that join with a given row of T1 before moving to the next row of T1. In other words, the results of the join are already “grouped” by T1 even though they are not sorted on T1.
If T1 has a clustered index or any unique index, we will use that key in place of the bookmark [Bmk1000] column. In fact, if we create a unique key on T1, we can rewrite the query as:
create unique clustered index T1a on T1(a)
select T1.a, min(T1.b) as T1b
from T1 join T2 on T1.b = T2.b
group by T1.a
having T1.a > max(T2.a)
The plan is basically the same:
|--Filter(WHERE:([T1].[a]>[Expr1007]))
|--Stream Aggregate(GROUP BY:([T1].[a]) DEFINE:([Expr1007]=MAX([T2].[a]), [Expr1008]=MIN([T1].[b])))
|--Nested Loops(Inner Join, WHERE:([T2].[b]=[T1].[b]))
|--Clustered Index Scan(OBJECT:([T1].[T1a]))
|--Table Scan(OBJECT:([T2]))
Finally, if we have many rows in both tables, the size of the join grows and the optimizer may opt for yet another alternative:
set nocount on
declare @i int
set @i = 0
while @i < 10000
begin
insert T1 values(@i, @i)
set @i = @i + 1
end
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
|--Hash Match(Inner Join, HASH:([T1].[b])=([T2].[b]), RESIDUAL:([T1].[b]=[T2].[b] AND [T1].[a]>[Expr1007]))
|--Clustered Index Scan(OBJECT:([T1].[T1a]))
|--Hash Match(Aggregate, HASH:([T2].[b]), RESIDUAL:([T2].[b] = [T2].[b]) DEFINE:([Expr1007]=MAX([T2].[a])))
|--Table Scan(OBJECT:([T2]))
This plan is essentially the same as the original decorrelated plan except that we are using hash aggregate and hash join instead of stream aggregate and nested loops join.
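If you want to compare these alternatives yourself, you can nudge the optimizer with a query-level join hint. This is only a sketch for experimentation; hints constrain every join in the query and can easily produce a worse plan than the optimizer's own choice:

```sql
-- Force a hash join to get the hash-based decorrelated plan ...
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
option (hash join)

-- ... or force nested loops to get back the loop-join shape.
select *
from T1
where T1.a > (select max(T2.a) from T2 where T2.b = T1.b)
option (loop join)
```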
As you can see, decorrelating a subquery can open up a wealth of plan alternatives.