Re: cost_hashjoin - Mailing list pgsql-hackers

From Greg Stark
Subject Re: cost_hashjoin
Msg-id AANLkTi=_DJZViKQ5C4n3oUDdBfPiGe=odA6BHpHPt1UN@mail.gmail.com
In response to cost_hashjoin  (Simon Riggs <simon@2ndQuadrant.com>)
Responses Re: cost_hashjoin  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
On Mon, Aug 30, 2010 at 10:18 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> cost_hashjoin() has some treatment of what occurs when numbatches > 1
> but that additional cost is not proportional to numbatches.

Because that's not how our hash batching works. We generate two temp
files for each batch, one for the outer side and one for the inner. So if
we're batching, every tuple of both the inner and outer tables
(except for the ones in the first batch) needs to be written once and read
once, regardless of the number of batches.
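
For reference, the batching term in cost_hashjoin() looks roughly like
this (a simplified sketch; variable names approximate, not the exact
costsize.c code):

/*
 * Sketch of the numbatches > 1 charge in cost_hashjoin().  The I/O
 * cost scales with the number of pages spilled to the batch temp
 * files, not with numbatches itself: each spilled tuple is written
 * once and read back once, however many batches there are.
 */
if (numbatches > 1)
{
    double  outerpages = page_size(outer_path_rows, outer_width);
    double  innerpages = page_size(inner_path_rows, inner_width);

    /* inner side is spilled while building, reread while probing */
    startup_cost += seq_page_cost * innerpages;
    run_cost += seq_page_cost * (innerpages + 2 * outerpages);
}

So doubling numbatches doesn't double that term; it only changes how
the same spill volume gets divided up.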

I do think the hash join implementation is a good demonstration of why
C programming is faster at a micro-optimization level but slower at a
macro level. Users of higher-level languages would be much more likely
to use any of the many fancier hashing data structures developed in
the last few decades. In particular, I think Cuckoo hashing would be
interesting for us.
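
To sketch what cuckoo hashing buys us: every key has exactly two
candidate buckets, so a lookup probes at most two slots, and an insert
evicts and relocates occupants until everything fits. A toy
illustration only, nothing like actual PostgreSQL code:

#include <stdint.h>
#include <stdbool.h>

#define NSLOTS    1024
#define MAX_KICKS 32

typedef struct { uint32_t key; bool used; } Slot;

static Slot t1[NSLOTS], t2[NSLOTS];

static uint32_t h1(uint32_t k) { return (k * 2654435761u) % NSLOTS; }
static uint32_t h2(uint32_t k) { return ((k ^ (k >> 16)) * 40503u) % NSLOTS; }

/* a key can only ever be in t1[h1] or t2[h2]: at most two probes */
static bool
cuckoo_lookup(uint32_t key)
{
    return (t1[h1(key)].used && t1[h1(key)].key == key) ||
           (t2[h2(key)].used && t2[h2(key)].key == key);
}

/* insert by evicting and relocating occupants, alternating tables */
static bool
cuckoo_insert(uint32_t key)
{
    if (cuckoo_lookup(key))
        return true;
    for (int i = 0; i < MAX_KICKS; i++)
    {
        Slot *s = (i % 2 == 0) ? &t1[h1(key)] : &t2[h2(key)];
        if (!s->used)
        {
            s->key = key;
            s->used = true;
            return true;
        }
        uint32_t evicted = s->key;      /* kick out the occupant ... */
        s->key = key;
        key = evicted;                  /* ... and try to re-place it */
    }
    return false;   /* too many kicks: a real table would rehash/grow */
}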

-- 
greg

