Re: Hash Aggregate plan picked for very large table == out of memory - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Msg-id: 20937.1181857220@sss.pgh.pa.us
In response to: Re: Hash Aggregate plan picked for very large table == out of memory ("Mason Hale" <masonhale@gmail.com>)
List: pgsql-general
"Mason Hale" <masonhale@gmail.com> writes:
>> What's the actual number of groups
>> (target_page_id values)?

> Approximately 40 million (I'll have a more precise number when the query
> finishes running).

Ouch.  The discrepancy between that and the 550K estimate is what's
killing you --- the hash table will be roughly 70 times bigger than the
planner thinks.  You sure that increasing the stats target doesn't
improve matters?  (I suppose the distribution is one of these "long
tail" affairs...)
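For the record, the estimate the planner is working from is visible in
pg_stats, and the per-column statistics target can be raised before
re-analyzing.  A sketch, using a hypothetical table name "page_links"
since the real one isn't shown in this part of the thread:

    -- what the planner currently believes about the column
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'page_links' AND attname = 'target_page_id';

    -- ground truth (slow on a table this size)
    SELECT count(DISTINCT target_page_id) FROM page_links;

    -- raise the per-column statistics target and re-sample
    ALTER TABLE page_links ALTER COLUMN target_page_id SET STATISTICS 1000;
    ANALYZE page_links;

Even at a high target, sample-based n_distinct estimates tend to stay
low on long-tailed data, which is consistent with the suspicion above.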

The best solution might be to reduce work_mem --- 40MB is fairly high
for a general-purpose setting anyway IMO, since you need to worry about
complex queries eating multiples of work_mem.
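If lowering it globally is unpalatable, work_mem can also be overridden
per session or per transaction, e.g. to discourage the hash aggregate
for just the problem query.  A sketch:

    BEGIN;
    SET LOCAL work_mem = '4MB';  -- applies to this transaction only
    -- ... run the big GROUP BY query here ...
    COMMIT;

With work_mem that small, the planner should conclude the hash table
won't fit and fall back to a sort-based GroupAggregate.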

            regards, tom lane
