
From: Robert Haas
Subject: Re: why postgresql define NTUP_PER_BUCKET as 10, not other numbers smaller
Msg-id: CA+TgmoaUZx0pofP3HiMZ+cM=W5CGNif0csFrY2xjtLcvcd3uaw@mail.gmail.com
In response to: Re: why postgresql define NTUP_PER_BUCKET as 10, not other numbers smaller (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: why postgresql define NTUP_PER_BUCKET as 10, not other numbers smaller (Tom Lane <tgl@sss.pgh.pa.us>); Re: why postgresql define NTUP_PER_BUCKET as 10, not other numbers smaller (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers
On Tue, Jun 10, 2014 at 10:27 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Well, it's all in what scenario you test, right?  If you test the case
>> where something overflows work_mem as a result of the increased size
>> of the bucket array, it's always going to suck.  And if you test the
>> case where that doesn't happen, it's likely to win.  I think Stephen
>> Frost has already done quite a bit of testing in this area, on
>> previous threads.  But there's no one-size-fits-all solution.
>
> I don't really recall any hard numbers being provided.  I think if we
> looked at some results that said "here's the average gain, and here's
> the worst-case loss, and here's an estimate of how often you'd hit
> the worst case", then we could make a decision.

The worst-case loss is that you have to rescan the entire inner
relation, so it's pretty darned bad.  I'm not sure how to construct a
worst case where that is monumentally expensive, but making the inner
relation gigantic is probably a good start.

> However, I notice that it's already the case that we make a
> to-batch-or-not-to-batch decision on the strength of some very crude
> numbers during ExecChooseHashTableSize, and we explicitly don't consider
> palloc overhead there.  It would certainly be easy enough to use two
> different NTUP_PER_BUCKET target load factors depending on which path
> is being taken in ExecChooseHashTableSize.  So maybe part of the answer is
> to not require those numbers to be the same.

If we could allow NTUP_PER_BUCKET to drop when the hashtable is
expected to fit in memory either way, perhaps with some safety margin
(e.g. we expect to use less than 75% of work_mem), I bet that would
make the people who have been complaining about this issue happy.  And
probably a few people who haven't been complaining, too: I don't
recall the precise performance numbers that were posted before, but
ISTR the difference between 10 and 1 was pretty dramatic.
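
For illustration, here's a minimal sketch in C of the decision I have in
mind (the function name, the bucket-overhead estimate, and the 75% figure
are all illustrative assumptions on my part, not actual PostgreSQL code):

    #include <stddef.h>

    /*
     * Illustrative sketch only, not the real ExecChooseHashTableSize()
     * logic: pick the target load factor based on whether the whole
     * hashtable, including a one-pointer-per-tuple bucket array, is
     * expected to fit comfortably inside work_mem.
     */
    static int
    choose_ntup_per_bucket(double ntuples, size_t tuple_width,
                           size_t work_mem_bytes)
    {
        /* Tuple space plus a bucket array sized for one tuple per bucket. */
        double  space = ntuples * ((double) tuple_width + sizeof(void *));

        /* Hypothetical 75% safety margin, per the paragraph above. */
        if (space <= 0.75 * (double) work_mem_bytes)
            return 1;       /* plenty of headroom: favor probe speed */

        return 10;          /* close to the limit: keep the bucket array small */
    }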

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


