Hi hackers,
I have been looking into how to improve the performance of queries with a GROUP BY clause when the grouping columns contain many duplicate values. For example:
create table t1(i1) as select 1 from generate_series(1,10000);
create table t2(i2) as select 2 from generate_series(1,10000);
select i1,i2 from t1, t2 group by i1,i2;
i1 | i2
----+----
1 | 2
QUERY PLAN
-----------------------------------
 HashAggregate
   Group Key: t1.i1, t2.i2
   Batches: 1  Memory Usage: 24kB
   ->  Nested Loop
         ->  Seq Scan on t1
         ->  Materialize
               ->  Seq Scan on t2
Planning Time: 0.067 ms
Execution Time: 15864.585 ms
This plan is clearly inefficient, since the hash aggregate runs on top of the Cartesian product, i.e. over 100 million rows. We could expect the query's performance to improve a lot if the HashAggregate node could be pushed down below the join, next to the scan nodes. For example, the plan might look like:
expected QUERY PLAN
----------------------------------------
 Group
   Group Key: t1.i1, t2.i2
   ->  Sort
         Sort Key: t1.i1, t2.i2
         ->  Nested Loop
               ->  HashAggregate
                     Group Key: t1.i1
                     ->  Seq Scan on t1
               ->  HashAggregate
                     Group Key: t2.i2
                     ->  Seq Scan on t2
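As a point of comparison, the pushed-down plan can be simulated today by rewriting the query by hand to pre-aggregate each side of the join in a subquery (the aliases s1/s2 below are mine, just for illustration):

-- Manual rewrite that mimics the desired pushdown:
-- deduplicate each input before the join, so the Nested Loop
-- sees 1 x 1 rows instead of 10000 x 10000.
select i1, i2
from (select i1 from t1 group by i1) s1,
     (select i2 from t2 group by i2) s2
group by i1, i2;

The goal of the proposed optimization would be for the planner to produce the equivalent plan automatically.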
Moreover, queries with expressions as GROUP BY columns could also benefit from this optimization, e.g.:
select i1+i2 from t1, t2 group by i1+i2;
?column?
----------
3
expected QUERY PLAN
----------------------------------------
 Group
   Group Key: ((t1.i1 + t2.i2))
   ->  Sort
         Sort Key: ((t1.i1 + t2.i2))
         ->  Nested Loop
               ->  HashAggregate
                     Group Key: t1.i1
               ->  Seq Scan on t1
               ->  HashAggregate
                     Group Key: t2.i2
                     ->  Seq Scan on t2
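The expression case can likewise be simulated with a manual rewrite (again, s1/s2 are illustrative aliases). Deduplicating each side first is safe here because it removes no distinct value of i1 + i2; it only shrinks the input to the final grouping step:

-- Deduplicate each side, then group the (now tiny)
-- cross product on the expression.
select i1 + i2
from (select i1 from t1 group by i1) s1,
     (select i2 from t2 group by i2) s2
group by i1 + i2;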
Does anyone have suggestions on this?