Thread: Order of columns in GROUP BY is significant to the planner.
I was recently surprised to discover that the planner considers the order of columns in the GROUP BY to be significant. I found this in 9.6.6 and verified it is still the case in v11.
To set up the example:
create table foo as
  select floor(random()*100)::int as col1,
         floor(random()*100)::int as col2,
         floor(random()*10)::int as col3,
         random() as col4,
         random() as col5
  from generate_series(1,100000000);
vacuum analyze foo;
create index on foo (col3, col1, col2, col5, col4);
set enable_hashagg TO off;
set max_parallel_workers_per_gather TO 0;
EXPLAIN (BUFFERS, ANALYZE, timing off) SELECT SUM(col5), col2, col1 FROM foo
WHERE col3 = '4' AND col4 <= '0.9' GROUP BY col2,col1;
EXPLAIN (BUFFERS, ANALYZE, timing off) SELECT SUM(col5), col2, col1 FROM foo
WHERE col3 = '4' AND col4 <= '0.9' GROUP BY col1,col2;
The first one inserts a sort node on col1,col2 before doing the Group Aggregate. The second one uses the ordering of the tuples derived from the index scan to do the Group Aggregate directly. Isn't it surprising that the order of the columns in the GROUP BY has to be the same as the order in the index definition in order to make maximal use of the index? Is that a bug?
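[As an aside, the planner does rearrange a GROUP BY list to match the ORDER BY when the columns allow it (see preprocess_groupclause), so one possible workaround, sketched here against the table above and not verified, is to add an ORDER BY in index order:]

```sql
-- Sketch: with an ORDER BY in index-key order, the GROUP BY list
-- may be reordered to match it, letting the index scan's ordering
-- feed the Group Aggregate without an extra Sort.
EXPLAIN (BUFFERS, ANALYZE, timing off)
SELECT SUM(col5), col2, col1 FROM foo
WHERE col3 = '4' AND col4 <= '0.9'
GROUP BY col2, col1
ORDER BY col1, col2;
```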
Cheers,
Jeff
On 21 December 2017 at 19:16, Jeff Janes <jeff.janes@gmail.com> wrote:
> EXPLAIN (BUFFERS, ANALYZE, timing off) SELECT SUM(col5), col2, col1 FROM foo
> WHERE col3 = '4' AND col4 <= '0.9' GROUP BY col2,col1;
>
> EXPLAIN (BUFFERS, ANALYZE, timing off) SELECT SUM(col5), col2, col1 FROM foo
> WHERE col3 = '4' AND col4 <= '0.9' GROUP BY col1,col2;
>
> The first one inserts a sort node on col1,col2 before doing the Group
> Aggregate. The second one uses the ordering of the tuples derived from the
> index scan to do the Group Aggregate directly. Isn't it surprising that the
> order of the columns in the GROUP BY has to be the same as the order in the
> index definition in order to make maximal use of the index? Is that a bug?

Not a bug, just that the number of combinations to try could end up growing very large, and then you'd likely want or need to re-perform the join search with each order and keep the cheapest one. That would likely just be too slow, especially when there are many tables in the join search and many columns in the GROUP BY.

There's a comment at the top of preprocess_groupclause() that explains that we don't do this, but it does not explain why. It really just mentions that hash agg is probably better in most cases, which seems like a bit of a cop-out. It likely should just explain that we err on the side of caution, as the planning effort may often outweigh the benefits we get during execution.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
David Rowley <david.rowley@2ndquadrant.com> writes:
> On 21 December 2017 at 19:16, Jeff Janes <jeff.janes@gmail.com> wrote:
>> The first one inserts a sort node on col1,col2 before doing the Group
>> Aggregate. The second one uses the ordering of the tuples derived from the
>> index scan to do the Group Aggregate directly. Isn't it surprising that the
>> order of the columns in the GROUP BY has to be the same as the order in the
>> index definition in order to make maximal use of the index? Is that a bug?

> Not a bug,

No, although you could argue that it's a hangover from back before we had hash-based grouping.

If you look into the parser's processing of GROUP BY clauses you'll find out that the semantics of GROUP BY are more tightly tied to ORDER BY than you might have expected -- specifically, the operator class involved, and therefore potentially the definition of equality, can be absorbed from ORDER BY if the column lists match. So historically the planner treated GROUP BY + ORDER BY as largely one thing, and there was no value in considering grouping with a different column ordering.

> just the number of combinations to try could end up growing
> very large

Yeah, I'm pretty doubtful that the potential improvement would be worth the extra planner cycles in most cases. Maybe if there are just two or three GROUP BY columns, it'd be OK to consider all the combinations, but it could get out of hand very quickly.

			regards, tom lane
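[To illustrate the parser behavior Tom describes, a sketch; the `<<<` operator here is hypothetical, standing in for any btree sort operator from a nondefault operator class:]

```sql
-- Sketch: when the GROUP BY and ORDER BY column lists match, GROUP BY
-- can absorb the operator class -- and thus the definition of
-- equality -- from the ORDER BY ... USING clause.
-- '<<<' is a hypothetical sort operator from a custom opclass.
SELECT col1, count(*)
FROM foo
GROUP BY col1
ORDER BY col1 USING <<<;
```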
On 22 December 2017 at 03:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> David Rowley <david.rowley@2ndquadrant.com> writes:
>> just the number of combinations to try could end up growing
>> very large
>
> Yeah, I'm pretty doubtful that the potential improvement would be
> worth the extra planner cycles in most cases. Maybe if there are
> just two or three GROUP BY columns, it'd be OK to consider all the
> combinations, but it could get out of hand very quickly.

Thinking a bit more about this, it would be pretty silly to go and try random combinations of columns or all combinations up to a certain level. It would be much smarter to look for a btree index that has all of the GROUP BY columns as leading keys and use that column order instead. Perhaps it could just be changed to that regardless, unless there's also an ORDER BY in the query. Nothing would need to be touched if there was only 1 GROUP BY expr, and we probably couldn't do anything if the GROUP BY contains Vars/Exprs from multiple relations.

Of course, it needs more thought than just the above, but it seems like an idea that might be workable.
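[A sketch of the proposed heuristic; the table and index here are hypothetical:]

```sql
-- Sketch: given a btree index whose leading keys cover the GROUP BY,
CREATE INDEX ON t (a, b, c);
-- a query written as
--   SELECT sum(x) FROM t GROUP BY b, a;
-- could, absent an ORDER BY, be planned as though written
--   SELECT sum(x) FROM t GROUP BY a, b;
-- since (a, b) are leading index keys and grouped output without an
-- ORDER BY is order-insensitive.
```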
On Wed, Dec 27, 2017 at 8:46 PM, David Rowley <david.rowley@2ndquadrant.com> wrote:
> On 22 December 2017 at 03:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> David Rowley <david.rowley@2ndquadrant.com> writes:
>>> just the number of combinations to try could end up growing
>>> very large
>>
>> Yeah, I'm pretty doubtful that the potential improvement would be
>> worth the extra planner cycles in most cases. Maybe if there are
>> just two or three GROUP BY columns, it'd be OK to consider all the
>> combinations, but it could get out of hand very quickly.
>
> Thinking a bit more about this, it would be pretty silly to go and try
> random combinations of columns or all combinations up to a certain
> level. It would be much smarter to look for a btree index that has all
> of the GROUP BY columns as leading keys and use that column order
> instead.
In my example, that wouldn't work, because the leading column of the index was not part of the GROUP BY but rather constrained by an equality-to-a-literal restriction. The group-by columns came immediately after that leading column. But it does seem like there must be a more efficient way than permuting the columns.
I didn't realize how much I could have simplified the example and still see the issue.
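[For the record, a minimal version along those lines; a sketch, not verified, with an arbitrary table name:]

```sql
-- Smaller repro sketch: no WHERE restriction needed at all.
create table bar as
  select floor(random()*100)::int as a,
         floor(random()*100)::int as b
  from generate_series(1, 1000000);
create index on bar (a, b);
vacuum analyze bar;
set enable_hashagg to off;
-- likely needs an explicit Sort before the Group Aggregate:
explain select count(*), a, b from bar group by b, a;
-- likely groups directly off the index scan's ordering:
explain select count(*), a, b from bar group by a, b;
```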
Jeff