
From: Tomas Vondra
Subject: Re: DBT-3 with SF=20 got failed
Date:
Msg-id: 55799C2D.3040803@2ndquadrant.com
In response to: Re: DBT-3 with SF=20 got failed  (Jan Wieck <jan@wi3ck.info>)
Responses: Re: DBT-3 with SF=20 got failed  (Kohei KaiGai <kaigai@kaigai.gr.jp>)
           Re: DBT-3 with SF=20 got failed  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hi,

On 06/11/15 16:20, Jan Wieck wrote:
> On 06/11/2015 09:53 AM, Kouhei Kaigai wrote:
>>> curious: what was work_mem set to?
>>>
>> work_mem=48GB
>>
>> My machine mounts 256GB physical RAM.
>
> work_mem can be allocated several times per backend. Nodes like sort
> and hash_aggregate may each allocate that much. You should set
> work_mem to a fraction of physical-RAM / concurrent-connections
> depending on the complexity of your queries. 48GB does not sound
> reasonable.

That's true, but there are cases where values like this may be useful 
(e.g. for a particular query). We do allow such work_mem values, so I 
consider this failure to be a bug.

It probably existed in the past, but it was amplified by the hash join 
improvements I did for 9.5, because those use NTUP_PER_BUCKET=1 instead 
of NTUP_PER_BUCKET=10. So the arrays of buckets are much larger, and we 
also allocate much more memory than we did in the past.
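
To illustrate the scale of that change, here's a small standalone sketch 
(not the actual nodeHash.c code, and the row count is just an illustrative 
assumption) of how the bucket pointer array grows when NTUP_PER_BUCKET 
goes from 10 to 1:

/*
 * Standalone sketch (not PostgreSQL code): size of the hash bucket
 * pointer array for a given tuple estimate.  nbuckets is rounded up
 * to the next power of two, one pointer per bucket.
 */
#include <stdio.h>
#include <stdlib.h>

static size_t
bucket_array_bytes(double ntuples, int ntup_per_bucket)
{
    size_t nbuckets = 1;

    while ((double) nbuckets < ntuples / ntup_per_bucket)
        nbuckets <<= 1;

    return nbuckets * sizeof(void *);
}

int
main(void)
{
    double ntuples = 120e6;   /* illustrative row count, just an assumption */

    printf("NTUP_PER_BUCKET=10: %zu MB\n",
           bucket_array_bytes(ntuples, 10) >> 20);
    printf("NTUP_PER_BUCKET=1:  %zu MB\n",
           bucket_array_bytes(ntuples, 1) >> 20);

    return 0;
}

With ~120M tuples the pointer array grows from 128MB to 1GB, which is 
already more than a single palloc can hand out, even though work_mem 
itself is nowhere near exhausted.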

Interestingly, the hash code checks for INT_MAX overflows in a number of 
places, but does not check for this ...
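
Just to illustrate what I mean, here is a hypothetical sketch of the kind 
of guard that seems to be missing (not a proposed patch; palloc, 
MaxAllocSize and ereport are the existing primitives, the function itself 
is made up):

/*
 * Hypothetical guard (not a patch): refuse to build a bucket array
 * that a single palloc cannot serve, instead of failing deep inside
 * the allocator.  MaxAllocSize is just under 1GB.
 */
#include "postgres.h"
#include "utils/memutils.h"

void *
alloc_bucket_array(size_t nbuckets)
{
    if (nbuckets > MaxAllocSize / sizeof(void *))
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("hash join bucket array exceeds maximum allocation size")));

    return palloc0(nbuckets * sizeof(void *));
}

Whether the right answer is an explicit error like this, clamping 
nbuckets, or allowing a huge allocation for the bucket array is a 
separate discussion.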

regards
--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


