Re: *_collapse_limit, geqo_threshold - Mailing list pgsql-hackers

From marcin mank
Subject Re: *_collapse_limit, geqo_threshold
Msg-id b1b9fac60907140318o26cd4ed9wd0b112fcb8b0657d@mail.gmail.com
In response to Re: *_collapse_limit, geqo_threshold  (Noah Misch <noah@leadboat.com>)
List pgsql-hackers
On Thu, Jul 9, 2009 at 5:38 AM, Noah Misch <noah@leadboat.com> wrote:
> Describing in those terms illuminates much.  While the concepts do suggest 2^N
> worst-case planning cost, my artificial test case showed a rigid 4^N pattern;
> what could explain that?
>

Isn't that just because the planner has to examine O(2^N) subsets of
relations, and do O(2^N) work for each of them? To create a level-N join,
the planner chooses pairs of level-k and level-(N-k) joins. The count of
level-k joins is O(2^k), and the count of level-(N-k) ones is O(2^(N-k)).
Together that is O(N) * O(2^N) * O(2^k) * O(2^(N-k)), which is O(N * 4^N).
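The counting argument above can be made concrete with a toy enumerator (my own sketch, not PostgreSQL's planner code): for every non-empty subset of N relations it counts each (left, right) split a dynamic-programming join search would consider. The totals grow exponentially in N, consistent with the worst-case bound above.

```python
def count_join_pairs(n):
    """Count the (left, right) sub-join pairs a DP join enumerator
    examines for a clique of n relations (worst case: every subset
    is a valid join level)."""
    total = 0
    for s in range(1, 1 << n):        # every non-empty subset of relations
        # enumerate every non-empty proper sub-subset of s as the left side;
        # the complement within s is then the (non-empty) right side
        left = (s - 1) & s
        while left:
            total += 1
            left = (left - 1) & s
    return total

for n in range(2, 11):
    print(n, count_join_pairs(n))
```

For n relations this comes out to 3^n - 2^(n+1) + 1 splits, since each relation independently lands in the left side, the right side, or outside the subset.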


This is for the worst case. If we could make a better estimate of the
required planning time (I believe the input data for a good heuristic is
a matrix saying which relation is constrained to which relation), we
could make better decisions about when to flatten subqueries, collapse
joins, or launch GEQO.
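One way to picture why such a constraint matrix helps (this is my own illustration, not a proposal from the thread): the number of connected relation subsets — the states a DP join enumerator must keep — depends heavily on the shape of the join graph. A chain of joins yields only quadratically many connected subsets, while a fully constrained clique yields exponentially many.

```python
from itertools import combinations

def connected_subsets(n, edges):
    """Count non-empty connected subsets of n relations, given the
    join-constraint edges as (i, j) pairs. Connectivity is checked
    by a flood fill restricted to the subset."""
    adj = [[False] * n for _ in range(n)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = True
    count = 0
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            inset = set(subset)
            seen = {subset[0]}
            frontier = [subset[0]]
            while frontier:
                v = frontier.pop()
                for w in inset:
                    if w not in seen and adj[v][w]:
                        seen.add(w)
                        frontier.append(w)
            if len(seen) == size:
                count += 1
    return count

n = 8
chain = [(i, i + 1) for i in range(n - 1)]
clique = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(connected_subsets(n, chain))   # 36: only contiguous runs connect
print(connected_subsets(n, clique))  # 255: every non-empty subset connects
```

A heuristic fed with this matrix could therefore distinguish cheap-to-plan chain queries from expensive clique-like ones before committing to exhaustive search.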

Greetings
Marcin

