
From Kohei KaiGai
Subject Re: DBT-3 with SF=20 got failed
Date
Msg-id CADyhKSV6bVMkGZEi_rkgE0ugYJRPKOax169KPmx9H4SsCZQhkw@mail.gmail.com
In response to Re: DBT-3 with SF=20 got failed  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: DBT-3 with SF=20 got failed  (David Rowley <david.rowley@2ndquadrant.com>)
Re: DBT-3 with SF=20 got failed  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List pgsql-hackers
2015-06-11 23:28 GMT+09:00 Robert Haas <robertmhaas@gmail.com>:
> On Wed, Jun 10, 2015 at 10:57 PM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
>> The attached patch replaces this palloc0() with MemoryContextAllocHuge() + memset().
>> Indeed, this hash table is built for a relation with nrows=119994544,
>> so it is not strange for the hash slots themselves to be larger than 1GB.
>
> You forgot to attach the patch, I think.
>
Oops, I did indeed forget to attach it.
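
For readers without the attachment, here is a minimal sketch of the
kind of change being described, modelled on the bucket-array
allocation in ExecHashTableCreate() (nodeHash.c); the variable names
follow the 9.5-era source, but this is an illustration, not the
attached patch itself:

    /*
     * Sketch only: the bucket array of a large hash join can exceed
     * MaxAllocSize (1GB - 1), so the plain palloc0() fails with
     * "invalid memory alloc request size".  MemoryContextAllocHuge()
     * lifts that limit but does not zero the memory, hence the
     * explicit memset().
     */

    /* before */
    hashtable->buckets = (HashJoinTuple *)
        palloc0(nbuckets * sizeof(HashJoinTuple));

    /* after */
    hashtable->buckets = (HashJoinTuple *)
        MemoryContextAllocHuge(hashtable->batchCxt,
                               nbuckets * sizeof(HashJoinTuple));
    memset(hashtable->buckets, 0, nbuckets * sizeof(HashJoinTuple));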

>  It looks to me like the size
> of a HashJoinTuple is going to be 16 bytes, so 1GB/16 = ~64 million.
> That's a lot of buckets, but maybe not unreasonably many if you've got
> enough memory.
>
EXPLAIN shows that this Hash node is fed by an underlying SeqScan
with 119994544 (~120 million) rows, but the resulting hash table is
still much smaller than my work_mem setting.
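
To make the numbers concrete, here is a back-of-the-envelope check
(assumptions: one bucket per tuple as with 9.5's NTUP_PER_BUCKET = 1,
nbuckets rounded up to the next power of two, 8-byte bucket pointers):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t ntuples  = 119994544;  /* rows fed into the Hash node */
        uint64_t nbuckets = 1;

        while (nbuckets < ntuples)      /* round up to a power of two */
            nbuckets <<= 1;             /* -> 2^27 = 134217728 */

        /* 134217728 buckets * 8 bytes = 1073741824 bytes: exactly 1GiB,
         * just over palloc's MaxAllocSize of 1GiB - 1. */
        printf("nbuckets = %" PRIu64 ", bytes = %" PRIu64 "\n",
               nbuckets, nbuckets * sizeof(void *));
        return 0;
    }

So even with a generous work_mem, the request trips the 1GB
allocation-size check rather than the work_mem limit.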

--
KaiGai Kohei <kaigai@kaigai.gr.jp>

