On 22 December 2017 at 03:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> David Rowley <david.rowley@2ndquadrant.com> writes:
>> just the number of combinations to try could end up growing
>> very large
>
> Yeah, I'm pretty doubtful that the potential improvement would be
> worth the extra planner cycles in most cases.  Maybe if there are
> just two or three GROUP BY columns, it'd be OK to consider all the
> combinations, but it could get out of hand very quickly.
Thinking a bit more about this, it would be pretty silly to go and try random combinations of columns, or all combinations up to a certain level. It would be much smarter to look for a btree index that has all of the GROUP BY columns as its leading keys, and use that index's column order instead.
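To make the idea concrete, here is a rough sketch (names and data structures are mine, purely illustrative, nothing to do with the actual planner code): given the set of GROUP BY columns and a btree index's key list, accept the index's ordering when its leading keys are exactly the GROUP BY columns.

```python
# Illustrative sketch only, not PostgreSQL internals: pick a GROUP BY
# column order that matches the leading keys of some btree index, so
# grouping could consume already-sorted input.

def match_groupby_to_index(groupby_cols, index_keys):
    """If the first len(groupby_cols) keys of the index are exactly the
    GROUP BY columns (in any order), return them in index key order;
    otherwise return None."""
    n = len(groupby_cols)
    prefix = index_keys[:n]
    if set(prefix) == set(groupby_cols):
        return prefix
    return None

# Example: GROUP BY b, a with an index on (a, b, c) -> use (a, b)
print(match_groupby_to_index(["b", "a"], ["a", "b", "c"]))
```

The point being that this is a single linear check per index, rather than anything factorial in the number of GROUP BY columns.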
In my example, that wouldn't work: the leading column of the index was not part of the GROUP BY at all, but was instead constrained by an equality-to-a-literal restriction, with the GROUP BY columns immediately after it. But it does seem like there must be a more efficient way than permuting the columns.
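That case could perhaps be folded into the same sketch by skipping leading index keys that are pinned to a constant by an equality qual, since those keys don't affect the sort order of the scanned rows. Again, this is only a hypothetical illustration, not a proposal for actual planner code:

```python
# Illustrative extension of the sketch above: leading index keys that are
# restricted to a single value (e.g. WHERE x = 5) can be skipped before
# matching the GROUP BY columns against the remaining keys.

def match_with_equality_prefix(groupby_cols, index_keys, equality_cols):
    keys = list(index_keys)
    # Skip leading keys pinned by equality quals; every row in the scan
    # has the same value there, so they don't change the ordering.
    while keys and keys[0] in equality_cols:
        keys.pop(0)
    n = len(groupby_cols)
    prefix = keys[:n]
    if set(prefix) == set(groupby_cols):
        return prefix
    return None

# Example: index on (x, a, b), WHERE x = <literal>, GROUP BY b, a
print(match_with_equality_prefix(["b", "a"], ["x", "a", "b"], {"x"}))
```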
I didn't realize how much I could have simplified the example and still see the issue.