I wrote:
> Oleg Bartunov <oleg@sai.msu.su> writes:
>> While testing 6.5 cvs to see what's the progress with capability
>> of Postgres to work with big joins I got the following error messages:
> I think there are still some nasty bugs in the GEQO planner.
I have just committed some changes that fix bugs in the GEQO planner
and limit its memory usage. It should now be possible to use GEQO even
for queries that join a very large number of tables --- at least from
the standpoint of not running out of memory during planning. (It can
still take a while :-(. I suspect the default GEQO settings use too
many generations, but I haven't poked at this yet.)
I have observed that the regular optimizer requires about 50MB to plan
some ten-way joins, and can exceed my system's 128MB process data limit
on some eleven-way joins. We currently have the GEQO threshold set at
11, which prevents the latter case by default --- but 50MB is a lot.
I wonder whether we shouldn't back the GEQO threshold off to 10.
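For context on why the jump from ten to eleven tables is so sharp: the
join-order search space grows factorially, so each added table multiplies
it again. A minimal sketch of the combinatorics (just the raw counts, not
the planner's actual memory accounting, which prunes heavily):

```python
from math import factorial

def left_deep_orders(n: int) -> int:
    # n! permutations of n base relations, i.e. the left-deep
    # join orders an exhaustive search draws from.
    return factorial(n)

for n in (10, 11):
    print(n, left_deep_orders(n))
# 10 -> 3628800, 11 -> 39916800

# Adding the eleventh relation multiplies the space by 11, which is
# consistent with ~50MB at ten tables blowing past a 128MB limit.
print(left_deep_orders(11) // left_deep_orders(10))  # 11
```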
(When I suggested setting it to 11, I was only looking at speed relative
to GEQO, not memory usage. There is now a *big* difference in memory
usage...) Comments?
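(For anyone who wants a different threshold without recompiling: if I
recall the 6.5-era SET syntax correctly, it can be overridden per
session roughly like this --- check the SET docs for the exact form:

```sql
-- Use the genetic optimizer for queries joining 10 or more tables:
SET GEQO TO 'ON=10';

-- Or turn GEQO off entirely for this session:
SET GEQO TO 'OFF';
```
)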
regards, tom lane