Re: bad plan: 8.4.8, hashagg, work_mem=1MB. - Mailing list pgsql-performance

From: Robert Haas
Subject: Re: bad plan: 8.4.8, hashagg, work_mem=1MB.
Date:
Msg-id: CA+TgmoZpQy+rTHjBFLKpJH_h4sGRxsJXdreFNLk_WmDHZ0gXjw@mail.gmail.com
In response to: Re: bad plan: 8.4.8, hashagg, work_mem=1MB.  (Jon Nelson <jnelson+pgsql@jamponi.net>)
List: pgsql-performance
On Mon, Jun 20, 2011 at 3:31 PM, Jon Nelson <jnelson+pgsql@jamponi.net> wrote:
> On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Jon Nelson <jnelson+pgsql@jamponi.net> writes:
>>> I ran a query recently where the result was very large. The outermost
>>> part of the query looked like this:
>>
>>>  HashAggregate  (cost=56886512.96..56886514.96 rows=200 width=30)
>>>    ->  Result  (cost=0.00..50842760.97 rows=2417500797 width=30)
>>
>>> The row count for 'Result' is in the right ballpark, but why does
>>> HashAggregate think that it can turn 2 *billion* rows of strings (an
>>> average of 30 bytes long) into only 200?
>>
>> 200 is the default assumption about number of groups when it's unable to
>> make any statistics-based estimate.  You haven't shown us any details so
>> it's hard to say more than that.
>
> What sorts of details would you like? The row count for the Result
> line is approximately correct -- the stats for all tables are up to
> date (the tables never change after import). The statistics target
> is currently set to 100.

The query and the full EXPLAIN output (attached as text files) would
be a good place to start....

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
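
The 200-row fallback Tom describes is easy to reproduce: when the
grouped expression carries no statistics (for example, a column coming
out of a set-returning function, which ANALYZE never sees), the planner
falls back to its default group-count estimate of 200, clamped to the
estimated input row count. A minimal sketch; the query is illustrative,
not the one from this thread, and the exact costs in the plan will vary:

    -- generate_series output has no column statistics, so the planner
    -- cannot estimate the number of distinct values of g and falls back
    -- to the default of 200 groups (DEFAULT_NUM_DISTINCT in the planner).
    EXPLAIN SELECT g, count(*)
      FROM generate_series(1, 1000000) AS g
     GROUP BY g;
    --  HashAggregate  (cost=... rows=200 width=4)
    --    ->  Function Scan on generate_series g  (cost=... rows=1000 width=4)

On an analyzed table column the planner would instead use the
n_distinct value stored in pg_statistic, which is why raising the
statistics target helps for real columns but not for function outputs.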
