
From: David Rowley
Subject: Re: DBT-3 with SF=20 got failed
Msg-id: CAKJS1f9V2QNLkoT5qEXbm+Ch+SeL=JaDynTcmzPsb+fxCf+=dw@mail.gmail.com
In response to: Re: DBT-3 with SF=20 got failed (Kohei KaiGai <kaigai@kaigai.gr.jp>)
List: pgsql-hackers
On 12 June 2015 at 02:40, Kohei KaiGai <kaigai@kaigai.gr.jp> wrote:
2015-06-11 23:28 GMT+09:00 Robert Haas <robertmhaas@gmail.com>:
> On Wed, Jun 10, 2015 at 10:57 PM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
>> The attached patch replaces this palloc0() with MemoryContextAllocHuge() + memset().
>> Indeed, this hash table is built for a relation with nrows=119994544,
>> so it is not surprising if the hash-slot array itself is larger than 1GB.
>
> You forgot to attach the patch, I think.
>
Oops, I did indeed forget to attach it.

>  It looks to me like the size
> of a HashJoinTuple is going to be 16 bytes, so 1GB/16 = ~64 million.
> That's a lot of buckets, but maybe not unreasonably many if you've got
> enough memory.
>
EXPLAIN says this Hash node reads 119994544 (~120 million) rows from the
underlying SeqScan, yet the hash table is still much smaller than my
work_mem setting.


I've just run into this problem while running a 100GB TPC-H benchmark on a machine with 64GB of RAM.
When attempting to run Q21 with work_mem set to 10GB, I get:
 ERROR:  invalid memory alloc request size 1073741824

Setting work_mem to 1GB or less gets the query running.
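
(For what it's worth, that failing size lines up with the bucket-array math
discussed above, assuming a 64-bit build where each bucket slot is an 8-byte
HashJoinTuple pointer: the bucket count gets rounded up to a power of two,
and 2^27 = 134217728 buckets * 8 bytes is exactly 2^30 = 1073741824 bytes,
just over palloc's MaxAllocSize cap of 0x3fffffff.)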

I've patched the code with your patch, Kohei, and it's now working.
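
For anyone following along, here's a rough sketch of what the change amounts
to. This is not the patch itself; the variable and field names are assumed
from the hash join code in src/backend/executor/nodeHash.c, and the exact
call site may differ:

    /*
     * The bucket array can legitimately exceed MaxAllocSize for large
     * inputs, so replace palloc0() with the "huge" allocator plus an
     * explicit memset(), as there is no zeroing variant of
     * MemoryContextAllocHuge().
     */

    /* Before: fails once nbuckets * sizeof(HashJoinTuple) reaches 1GB */
    hashtable->buckets = (HashJoinTuple *)
        palloc0(nbuckets * sizeof(HashJoinTuple));

    /* After: the Huge variant allows requests beyond MaxAllocSize */
    hashtable->buckets = (HashJoinTuple *)
        MemoryContextAllocHuge(hashtable->batchCxt,
                               nbuckets * sizeof(HashJoinTuple));
    memset(hashtable->buckets, 0, nbuckets * sizeof(HashJoinTuple));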

Thought I'd better post this in case the issue gets forgotten about.

Thanks

David

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 
