Re: DBT-3 with SF=20 got failed - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: DBT-3 with SF=20 got failed
Date
Msg-id CANP8+jJYDFQU3-A1YG8oTRcX6zmN9cn7wJUSDRShz+pdXeUdVw@mail.gmail.com
In response to Re: DBT-3 with SF=20 got failed  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses Re: DBT-3 with SF=20 got failed  (Kohei KaiGai <kaigai@kaigai.gr.jp>)
List pgsql-hackers
On 12 June 2015 at 00:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
 
> I see two ways to fix this:
>
> (1) enforce the 1GB limit (probably better for back-patching, if that's
>     necessary)
>
> (2) make it work with hash tables over 1GB

I'm in favor of (2) if there's a good way to do that. It seems a bit stupid not to be able to use a fast hash table because of an artificial limit. Are there any fundamental reasons not to use the MemoryContextAllocHuge fix proposed by KaiGai-san?

If there are no objections, I will apply the patch for (2) to HEAD and backpatch to 9.5.

--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
