Re: Postgres12 looking for possible HashAggregate issue workarounds? - Mailing list pgsql-performance

From Justin Pryzby
Subject Re: Postgres12 looking for possible HashAggregate issue workarounds?
Date December 16, 2022
Msg-id 20221216160655.GQ1153@telsasoft.com
In response to Postgres12 looking for possible HashAggregate issue workarounds?  (João Paulo Luís <joao.luis@pdmfc.com>)
Responses RE: Postgres12 looking for possible HashAggregate issue workarounds?
List pgsql-performance
On Fri, Dec 16, 2022 at 03:24:17PM +0000, João Paulo Luís wrote:
> Hi! Sorry to post to this mailing list, but I could not find many tips working around HashAggregate issues.
> 
> In a research project involving text repetition analysis (on top of public documents)
> I have a VirtualMachine (CPU AMD Epyc 7502P, 128GB RAM, 12TB HDD, 2TB SSD),
> running postgres 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)
> and some tables with many rows:

> 1 - the query is making a postgresql process have 76.7 GB resident RAM usage.
> Having a WORK_MEM setting of 2GB (and "simple" COUNT() results),
> that was not expected.
> (I risk oom-killer killing my postgres as soon as I run another concurrent
> query.)

> The rows=261275 on HashAggregate  (cost=26397219.92..26399832.67 rows=261275 width=8) seems VERY WRONG!
> I was expecting something like rows=1.0E+09 instead.

> I would guess that HashAggregate is behaving very badly (using too much RAM beyond WORK_MEM, and also badly
> estimating the #rows and taking forever...)

Huge memory use sounds like what was fixed in postgres 13.

https://www.postgresql.org/docs/13/release-13.html

Allow hash aggregation to use disk storage for large aggregation result
sets (Jeff Davis)

Previously, hash aggregation was avoided if it was expected to use more
than work_mem memory. Now, a hash aggregation plan can be chosen despite
that. The hash table will be spilled to disk if it exceeds work_mem
times hash_mem_multiplier.

This behavior is normally preferable to the old behavior, in which once
hash aggregation had been chosen, the hash table would be kept in memory
no matter how large it got — which could be very large if the planner
had misestimated. If necessary, behavior similar to that can be obtained
by increasing hash_mem_multiplier.
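
If upgrading to 13 isn't an immediate option, the usual session-level workaround on 12 is to disable hash aggregation for the problem query so the planner falls back to a sort-based GroupAggregate, which does honor work_mem. A minimal sketch (the table and column names are only placeholders, not from your schema):

    -- Postgres 12 workaround sketch: avoid the unbounded HashAggregate
    BEGIN;
    SET LOCAL enable_hashagg = off;   -- planner falls back to sort + GroupAggregate
    SET LOCAL work_mem = '2GB';       -- sorts spill to disk once this is exceeded
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT doc_id, count(*)           -- placeholder for the real COUNT()/GROUP BY query
    FROM sentence_occurrences
    GROUP BY doc_id;
    COMMIT;

On 13 and later you'd instead leave hash aggregation enabled and, if needed, raise hash_mem_multiplier (e.g. SET hash_mem_multiplier = 2.0;) to control how large the hash table may grow before spilling.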

-- 
Justin


