GEQO optimizer (was Re: Backend message type 0x44 arrived while idle) - Mailing list pgsql-hackers

From Tom Lane
Subject GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)
Date
Msg-id 6087.926902660@sss.pgh.pa.us
In response to Re: [HACKERS] Backend message type 0x44 arrived while idle  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: [HACKERS] GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)  (Bruce Momjian <maillist@candle.pha.pa.us>)
List pgsql-hackers
I wrote:
> Oleg Bartunov <oleg@sai.msu.su> writes:
>> While testing 6.5 CVS to see what progress Postgres has made in its
>> ability to handle big joins, I get the following error messages:

> I think there are still some nasty bugs in the GEQO planner.

I have just committed some changes that fix bugs in the GEQO planner
and limit its memory usage.  It should now be possible to use GEQO even
for queries that join a very large number of tables --- at least from
the standpoint of not running out of memory during planning.  (It can
still take a while :-(.  I think that the default GEQO parameter
settings may be configured to use too many generations, but haven't
poked at this yet.)
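
For anyone who wants to poke at this, the GEQO knobs can be set per
session.  A minimal sketch, using the run-time variable names from
current releases (they may not match the 6.5 spellings exactly):

    -- Enable the genetic optimizer and rein in its search effort.
    SET geqo = on;
    -- 0 (the default) lets the server pick a value; an explicit small
    -- value caps how many generations the genetic search runs.
    SET geqo_generations = 100;
    -- Likewise for the number of candidate plans kept per generation.
    SET geqo_pool_size = 100;

Lowering these trades plan quality for planning time, so treat it only
as a starting point for experimentation.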

I have observed that the regular optimizer requires about 50MB to plan
some ten-way joins, and can exceed my system's 128MB process data limit
on some eleven-way joins.  We currently have the GEQO threshold set at
11, which prevents the latter case by default --- but 50MB is a lot.
I wonder whether we shouldn't back the GEQO threshold off to 10.
(When I suggested setting it to 11, I was only looking at speed relative
to GEQO, not memory usage.  There is now a *big* difference in memory
usage...)  Comments?
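
If we do back it off, only the default changes; individual sessions can
still override it.  A sketch, again using the variable names from
current releases rather than necessarily the 6.5 ones:

    -- Plan with GEQO whenever a query joins 10 or more tables.
    SET geqo_threshold = 10;
    -- Or bypass GEQO entirely and always use the standard planner:
    SET geqo = off;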
        regards, tom lane

