Re: DBT-3 with SF=20 got failed - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: DBT-3 with SF=20 got failed
Date
Msg-id 55D53853.9050106@2ndquadrant.com
In response to Re: DBT-3 with SF=20 got failed  (Kohei KaiGai <kaigai@kaigai.gr.jp>)
Responses Re: DBT-3 with SF=20 got failed  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List pgsql-hackers
Hello KaiGai-san,

On 08/19/2015 03:19 PM, Kohei KaiGai wrote:
> Unless we have a fail-safe mechanism for cases where the planner
> estimates far more tuples than are actually needed, a strange
> estimate will consume a massive amount of RAM. That's a bad side
> effect. My previous patch didn't pay attention to this scenario,
> so it needs to be revised.

I agree we need to put a few more safeguards there (e.g. make sure we 
don't overflow INT when counting the buckets, which may happen with the 
amounts of work_mem we'll see in the wild soon).
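To illustrate the kind of safeguard meant here, the sketch below clamps an estimated tuple count before using it as a hash bucket count, so that rounding up to a power of two can never overflow int. This is only an illustrative example with made-up names (clamp_bucket_count is not a PostgreSQL function), not the actual patch under discussion.

```c
#include <limits.h>

/* Hypothetical sketch: derive a bucket count from a (possibly wild)
 * planner estimate without risking int overflow. The 1024 floor and
 * the power-of-two rounding mirror common hashtable conventions. */
static int
clamp_bucket_count(double ntuples)
{
    double nbuckets = ntuples;          /* e.g. one bucket per expected tuple */

    /* Leave headroom below INT_MAX so rounding up to the next
     * power of two cannot overflow. */
    double limit = (double) (INT_MAX / 2);
    if (nbuckets > limit)
        nbuckets = limit;
    if (nbuckets < 1024)
        nbuckets = 1024;

    /* Round up to a power of two so bucket masking works. */
    int result = 1024;
    while ((double) result < nbuckets)
        result *= 2;
    return result;
}
```

With the clamp in place, even an estimate in the quintillions yields at most 2^30 buckets instead of an overflowed (negative) count.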

But I don't think we should make any extensive changes to how we size
the hashtable - that's not something to do in a bugfix.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


