Re: Questions on query planner, join types, and work_mem - Mailing list pgsql-performance

From Tom Lane
Subject Re: Questions on query planner, join types, and work_mem
Date
Msg-id 2239.1280291967@sss.pgh.pa.us
In response to Re: Questions on query planner, join types, and work_mem  (Alvaro Herrera <alvherre@commandprompt.com>)
List pgsql-performance
Alvaro Herrera <alvherre@commandprompt.com> writes:
> Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
>> Well, the issue you're hitting is that the executor is dividing the
>> query into batches to keep the size of the in-memory hash table below
>> work_mem.  The planner should expect that and estimate the cost of
>> the hash technique appropriately, but seemingly it's failing to do so.

> Hmm, I wasn't aware that hash joins worked this way wrt work_mem.  Is
> this visible in the explain output?

As of 9.0, any significant difference between "Hash Batches" and
"Original Hash Batches" would be a cue that the planner blew the
estimate.  For Peter's problem, we're just going to have to look
to see if the estimated cost changes in a sane way between the
small-work_mem and large-work_mem cases.
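
As a quick illustration (a made-up table and query, not Peter's actual
case), you can watch the batch counts directly in EXPLAIN ANALYZE while
varying work_mem:

    -- hypothetical example: build a hash input big enough to spill
    CREATE TABLE big (id int, padding text);
    INSERT INTO big SELECT g, repeat('x', 100)
      FROM generate_series(1, 1000000) g;
    ANALYZE big;

    SET work_mem = '1MB';
    EXPLAIN ANALYZE SELECT * FROM big a JOIN big b USING (id);

    SET work_mem = '256MB';
    EXPLAIN ANALYZE SELECT * FROM big a JOIN big b USING (id);

With the small setting the Hash node should report many batches; if the
reported batch count differs much from the "originally" figure, that's
the planner having misjudged the size of the inner relation.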

            regards, tom lane
