Re: Hash Aggregate plan picked for very large table == out of memory - Mailing list pgsql-general

From: Gregory Stark
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Date:
Msg-id: 873b0ut9j8.fsf@oxford.xeocode.com
In response to: Hash Aggregate plan picked for very large table == out of memory ("Mason Hale" <masonhale@gmail.com>)
List: pgsql-general
"Mason Hale" <masonhale@gmail.com> writes:

> The default_statistics_target was originally 200.
> I upped it to 1000 and still get the same results.

You did analyze the table after upping the target, right? Actually, I would
expect you'd be better off not raising it so high globally and instead raising
it only for the relevant column(s) of this one table with

    ALTER TABLE tablename ALTER [ COLUMN ] column SET STATISTICS integer
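
For example -- a minimal sketch, with hypothetical table and column names
standing in for whatever you are grouping on -- followed by a fresh ANALYZE
so the new target actually takes effect:

    -- hypothetical names; substitute your own table and grouping column
    ALTER TABLE mytable ALTER COLUMN grouping_col SET STATISTICS 1000;
    ANALYZE mytable;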

> I am working around this by setting enable_hashagg = off -- but it just
> seems like a case where the planner is not picking the right strategy?

Sadly, guessing the number of distinct values from a sample is actually a
pretty hard problem. How many distinct values do you get when you run with
enable_hashagg off?
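
One way to check -- the table and column names below are placeholders, not
anything from your schema -- is to compare the planner's stored estimate in
pg_stats against the real count (a negative n_distinct means a fraction of
the row count rather than an absolute number):

    -- planner's estimate of the number of distinct values
    SELECT n_distinct FROM pg_stats
     WHERE tablename = 'mytable' AND attname = 'grouping_col';

    -- the actual number of groups, with hash aggregation disabled
    SET enable_hashagg = off;
    SELECT count(*)
      FROM (SELECT grouping_col FROM mytable GROUP BY grouping_col) AS g;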

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

