Re: DBT-3 with SF=20 got failed - Mailing list pgsql-hackers

From: Simon Riggs
Subject: Re: DBT-3 with SF=20 got failed
Date:
Msg-id: CANP8+jK7FefF9fdqUxv8sgJH-iKynPHGGA-bTSAPeLZi9VCVrg@mail.gmail.com
In response to: Re: DBT-3 with SF=20 got failed (Kohei KaiGai <kaigai@kaigai.gr.jp>)
Responses: Re: DBT-3 with SF=20 got failed (Kohei KaiGai <kaigai@kaigai.gr.jp>)
           Re: DBT-3 with SF=20 got failed (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On 19 August 2015 at 12:55, Kohei KaiGai <kaigai@kaigai.gr.jp> wrote:
> 2015-08-19 20:12 GMT+09:00 Simon Riggs <simon@2ndquadrant.com>:
>> On 12 June 2015 at 00:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>>>
>>> I see two ways to fix this:
>>>
>>> (1) enforce the 1GB limit (probably better for back-patching, if that's
>>>     necessary)
>>>
>>> (2) make it work with hash tables over 1GB
>>>
>>> I'm in favor of (2) if there's a good way to do that. It seems a bit
>>> stupid not to be able to use a fast hash table because there's some
>>> artificial limit. Are there any fundamental reasons not to use the
>>> MemoryContextAllocHuge fix proposed by KaiGai-san?
>>
>> If there are no objections, I will apply the patch for (2) to HEAD and
>> backpatch to 9.5.
>>
> Please don't rush. :-)

Please explain what rush you see?
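
The "MemoryContextAllocHuge fix" in question amounts to allocating the hash
bucket array through the huge-allocation API (present since 9.4), which is
capped at MaxAllocHugeSize rather than palloc()'s usual MaxAllocSize of
1GB - 1. A minimal sketch, with names mimicking nodeHash.c; this illustrates
the approach and is not the committed patch:

/*
 * Sketch of option (2): allocate the bucket array with the "huge"
 * variant, which is limited by MaxAllocHugeSize (about SIZE_MAX/2)
 * instead of MaxAllocSize (1GB - 1).  Illustrative only; the struct
 * and function names mimic src/backend/executor/nodeHash.c.
 */
#include "postgres.h"

typedef struct HashJoinTupleData *HashJoinTuple;

static HashJoinTuple *
alloc_bucket_array(MemoryContext hashcxt, int nbuckets)
{
	/* cast before multiplying so the byte count can't overflow int */
	Size		nbytes = (Size) nbuckets * sizeof(HashJoinTuple);

	/*
	 * palloc0(nbytes) would raise "invalid memory alloc request size"
	 * as soon as nbytes reaches 1GB; MemoryContextAllocHuge() accepts
	 * it.  Zero the array ourselves, since this sketch uses no zeroing
	 * huge variant.
	 */
	HashJoinTuple *buckets = (HashJoinTuple *)
		MemoryContextAllocHuge(hashcxt, nbytes);

	memset(buckets, 0, nbytes);
	return buckets;
}

KaiGai's caveat below is the other half of the problem: with the 1GB fence
gone, nothing but the planner's estimate bounds the size of the request.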
 
> It is not difficult to replace palloc() with palloc_huge(); however, it may
> lead to another problem once the planner gives us a crazy estimation.
> Below is my comment on the other thread.

 Yes, I can read both threads and would apply patches for each problem.
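
To make that caveat concrete: the usual guard is to clamp the bucket count
implied by the planner's estimate against work_mem before allocating
anything. A sketch loosely in the spirit of ExecChooseHashTableSize(); the
function name and the floor value are illustrative, not committed code:

/*
 * Illustrative guard: even with huge allocations permitted, derive a
 * ceiling from work_mem so a wild planner estimate cannot become a
 * wild allocation request.  Not the committed logic.
 */
#include "postgres.h"
#include "miscadmin.h"			/* work_mem, in kilobytes */

typedef struct HashJoinTupleData *HashJoinTuple;

static long
clamp_nbuckets(double planner_rows)
{
	Size		max_pointers = ((Size) work_mem * 1024L) / sizeof(HashJoinTuple);
	long		pow2 = 1024;	/* illustrative minimum bucket count */
	long		nbuckets;

	/* never ask for more bucket headers than work_mem could hold */
	if (planner_rows > (double) max_pointers)
		planner_rows = (double) max_pointers;
	nbuckets = (long) planner_rows;

	/*
	 * Round up to a power of 2 so lookups can mask instead of modulo;
	 * this may overshoot the clamp by up to 2x, which the sketch
	 * tolerates for simplicity.
	 */
	while (pow2 < nbuckets)
		pow2 *= 2;
	return pow2;
}

Pairing the huge allocation with a clamp of this kind keeps option (2) from
trading a hard 1GB error for an attempt to honour an arbitrarily bad estimate.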

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
