On Thu, Sep 24, 2015 at 9:49 AM, Tomas Vondra
<tomas.vondra@2ndquadrant.com> wrote:
> So while it does not introduce behavior change in this particular case
> (because it fails, as you point out), it introduces a behavior change in
> general - it simply triggers behavior that does not happen below the limit.
> Would we accept the change if the proposed limit was 256MB, for example?

So, I'm a huge fan of arbitrary limits.
That's probably the single thing I'll say this year that sounds most
like a troll, but it isn't. I really, honestly believe that.

Doubling things is very sensible when they are small, but at some
point it ceases to be sensible. The fact that we can't set a
black-and-white threshold as to when we've crossed over that line
doesn't mean that there is no line. It can't really be true that the
occasional 32GB allocation when 4GB would have been optimal is no more
problematic than the occasional 32MB allocation when 4MB would have
been optimal. Where exactly to put the divider is subjective, but
"what palloc will take" is not an obviously unreasonable barometer.

Of course, if we can postpone sizing the hash table until after the
input size is known, as you suggest, then that would be better still
(but not back-patch material).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company