Re: BUG #2225: Backend crash -- BIG table - Mailing list pgsql-bugs

From Tom Lane
Subject Re: BUG #2225: Backend crash -- BIG table
Date
Msg-id 27525.1138988576@sss.pgh.pa.us
In response to Re: BUG #2225: Backend crash -- BIG table  (Stephan Szabo <sszabo@megazone.bigpanda.com>)
List pgsql-bugs
Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
>> pointspp=# explain select trid, count(*) from pptran group by trid
>> having count(*) > 1;
>>                                 QUERY PLAN
>> --------------------------------------------------------------------------
>>  HashAggregate  (cost=1311899.28..1311902.78 rows=200 width=18)
>>    Filter: (count(*) > 1)
>>    ->  Seq Scan on pptran  (cost=0.00..1039731.02 rows=36289102 width=18)
>> (3 rows)

>>> Failing that, how many rows should the above return?

>> That is exactly what I am trying to find out. I can only guess that, but
>> it should not be more than a couple of 10k rows.

The problem is that the HashAgg will have to maintain a counter for
every distinct value of trid, not just those that occur more than
once.  So if there are a huge number of one-time-only values you could
still blow out memory (and HashAgg doesn't currently know how to spill
to disk).
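A minimal sketch (not PostgreSQL source code; the table values are made up) of why the HAVING filter cannot reduce memory use: the aggregate must keep a counter for every distinct key for the whole scan, and only afterwards can the filter discard the one-time-only groups.

```python
# Hypothetical illustration of hash aggregation with a post-aggregation
# HAVING filter. Memory grows with the number of DISTINCT keys seen,
# not with the number of keys that ultimately pass the filter.
from collections import Counter

rows = ["a", "b", "a", "c", "d", "b", "e"]  # stand-in for trid values

counts = Counter()           # one hash-table entry per distinct trid;
for trid in rows:            # this is what can blow out memory
    counts[trid] += 1

# HAVING count(*) > 1 is applied only after the full pass:
result = {trid: n for trid, n in counts.items() if n > 1}
print(len(counts), result)   # 5 distinct keys tracked, only 2 survive
```

So even if only a few tens of thousands of groups pass the filter, the hash table itself is sized by the total number of distinct trid values in the table.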

That "rows=200" estimate looks suspiciously like a default.  Has this
table been ANALYZEd recently?  I'd expect the planner not to choose
HashAgg if it has a more realistic estimate of the number of groups.
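For instance, something along these lines (a sketch, assuming the statistics really are stale) should give the planner a realistic n_distinct for trid:

```sql
-- Refresh statistics so the planner can estimate the number of
-- distinct trid values instead of falling back on the default 200.
ANALYZE pptran;

EXPLAIN SELECT trid, count(*) FROM pptran
GROUP BY trid HAVING count(*) > 1;
-- With a realistic group-count estimate the planner should prefer a
-- sort-based GroupAggregate, whose memory use is bounded.
```

As a stopgap you could also force the issue with SET enable_hashagg = off for this session, but fixing the statistics is the real answer.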

            regards, tom lane
