Re: DBT-3 with SF=20 got failed

From: Tom Lane
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-08-19
Msg-id: 30970.1439992416@sss.pgh.pa.us
In response to: Re: DBT-3 with SF=20 got failed (Simon Riggs <simon@2ndQuadrant.com>)
Responses: Re: DBT-3 with SF=20 got failed (Simon Riggs <simon@2ndQuadrant.com>)
           Re: DBT-3 with SF=20 got failed (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers

Simon Riggs <simon@2ndQuadrant.com> writes:
> On 19 August 2015 at 12:55, Kohei KaiGai <kaigai@kaigai.gr.jp> wrote:
>> Please don't rush. :-)

> Please explain what rush you see?

Yours.  You appear to be in a hurry to apply patches that there's no
consensus on.

>> It is not difficult to replace palloc() by palloc_huge(); however, it
>> may lead to another problem once the planner gives us a crazy estimation.
>> Below is my comment on the other thread.

>  Yes, I can read both threads and would apply patches for each problem.

I don't see anything very wrong with constraining the initial allocation
to 1GB, or even less.  That will prevent consuming insane amounts of
work_mem when the planner's rows estimate is too high rather than too low.
And we do have the ability to increase the hash table size on the fly.
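To make the "cap it and grow on the fly" idea concrete, here is a minimal
self-contained sketch in plain C -- emphatically not the nodeHash.c code;
the type names, the 1024-bucket floor, and the doubling policy are all
illustrative assumptions:

/*
 * Minimal sketch: clamp the initial bucket array to the allocator's
 * 1GB ceiling, then grow it on the fly once the actual tuple count
 * shows the estimate was too low.
 */
#include <stdlib.h>

#define MAX_ALLOC_SIZE  ((size_t) 0x3fffffff)   /* 1GB - 1, like palloc()'s limit */

typedef struct Entry
{
    struct Entry *next;         /* bucket chain link */
    unsigned int  hash;
} Entry;

typedef struct
{
    Entry **buckets;
    size_t  nbuckets;
} HashTable;

/* Size the initial array from the planner's estimate, clamped to the cap,
 * so an overestimate can't eat an insane amount of memory up front. */
static void
ht_init(HashTable *ht, double estimated_rows)
{
    double  max_buckets = (double) (MAX_ALLOC_SIZE / sizeof(Entry *));
    double  want = estimated_rows;

    if (want > max_buckets)
        want = max_buckets;     /* the 1GB ceiling discussed above */
    ht->nbuckets = (want < 1024.0) ? 1024 : (size_t) want;
    ht->buckets = calloc(ht->nbuckets, sizeof(Entry *));
}

/* Grow on the fly when the estimate was too low: double and rehash,
 * falling back to longer chains once the cap is reached. */
static void
ht_grow(HashTable *ht)
{
    size_t  new_n = ht->nbuckets * 2;
    Entry **new_b;
    size_t  i;

    if (new_n > MAX_ALLOC_SIZE / sizeof(Entry *) ||
        (new_b = calloc(new_n, sizeof(Entry *))) == NULL)
        return;                 /* stay at the current size */

    for (i = 0; i < ht->nbuckets; i++)
    {
        Entry  *e = ht->buckets[i];

        while (e != NULL)
        {
            Entry  *next = e->next;

            e->next = new_b[e->hash % new_n];
            new_b[e->hash % new_n] = e;
            e = next;
        }
    }
    free(ht->buckets);
    ht->buckets = new_b;
    ht->nbuckets = new_n;
}

Once the cap is hit, behavior degrades gracefully (longer chains here; more
batches in the real executor) instead of the allocator failing outright on
an insane request.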

The real problem here is the potential integer overflow in
ExecChooseHashTableSize.  Now that we know there is one, that should be
fixed (and not only in HEAD/9.5).  But I believe Kaigai-san withdrew his
initial proposed patch, and we don't have a replacement as far as I saw.
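
To show the arithmetic hazard concretely, here is a hypothetical sketch,
not the actual ExecChooseHashTableSize code (choose_nbuckets and the 6e9
estimate are made up; NTUP_PER_BUCKET is 1 as in 9.5): the unsafe pattern
does the bucket math in plain int, where a huge row estimate pushes the
intermediate product past INT_MAX; the version below clamps in double
before any integer conversion, so the result can never wrap.

#include <math.h>
#include <stdio.h>

#define MAX_ALLOC_SIZE  0x3fffffffL     /* 1GB - 1, palloc()'s ceiling */
#define NTUP_PER_BUCKET 1

static long
choose_nbuckets(double ntuples)
{
    double  dbuckets = ceil(ntuples / NTUP_PER_BUCKET);
    double  max_pointers = (double) (MAX_ALLOC_SIZE / sizeof(void *));

    /* Clamp while still in floating point; casting an out-of-range
     * double straight to an integer type is exactly where the
     * overflow (strictly, undefined behavior) would occur. */
    if (dbuckets > max_pointers)
        dbuckets = max_pointers;
    return (long) dbuckets;
}

int
main(void)
{
    double  crazy_estimate = 6e9;   /* an oversized planner estimate */
    long    nbuckets = choose_nbuckets(crazy_estimate);

    printf("nbuckets = %ld (%ld bytes of pointers)\n",
           nbuckets, nbuckets * (long) sizeof(void *));
    return 0;
}
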
        regards, tom lane


