Re: DBT-3 with SF=20 got failed - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: DBT-3 with SF=20 got failed
Msg-id: CA+TgmoZFz1wqVjWPR9FcvWwbLdKtVZg+w8sZX61c=CJYFeBJsw@mail.gmail.com
In response to: Re: DBT-3 with SF=20 got failed (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Thu, Sep 24, 2015 at 9:49 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
> So while it does not introduce behavior change in this particular case
> (because it fails, as you point out), it introduces a behavior change in
> general - it simply triggers behavior that does not happen below the limit.
> Would we accept the change if the proposed limit was 256MB, for example?

So, I'm a huge fan of arbitrary limits.

That's probably the single thing I'll say this year that sounds most
like a troll, but it isn't.  I really, honestly believe that.
Doubling things is very sensible when they are small, but at some
point it ceases to be sensible.  The fact that we can't set a
black-and-white threshold as to when we've crossed over that line
doesn't mean that there is no line.  The occasional 32GB allocation
when 4GB would have been optimal is far more problematic than the
occasional 32MB allocation when 4MB would have been optimal.  Where
exactly to put the divider is subjective, but
"what palloc will take" is not an obviously unreasonable barometer.

Of course, if we can postpone sizing the hash table until after the
input size is known, as you suggest, then that would be better still
(but not back-patch material).
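
For comparison, a self-contained sketch of what deferred sizing buys
you (the planner estimate and row count below are made-up numbers,
purely for illustration):

/*
 * Sketch of the deferred-sizing idea: pick the bucket count from the
 * actual number of rows seen, not from the planner's estimate.
 */
#include <stdio.h>
#include <stddef.h>

static size_t
next_power_of_two(size_t n)
{
    size_t  p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

int
main(void)
{
    double  estimated_rows = 10e6;      /* what the planner guessed */
    size_t  actual_rows = 37500;        /* what the spooled input contained */

    printf("estimate-driven buckets: %zu\n",
           next_power_of_two((size_t) estimated_rows));
    printf("input-driven buckets:    %zu\n",
           next_power_of_two(actual_rows));
    return 0;
}

The second number is what the hash table actually needed; the first is
what an estimate-driven allocation would have grabbed up front.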

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


