Query on partitioned table needs memory n_partitions * work_mem - Mailing list pgsql-general

From: Dimitrios Apostolou
Subject: Query on partitioned table needs memory n_partitions * work_mem
Msg-id: 3603c380-d094-136e-e333-610914fb3e80@gmx.net
List: pgsql-general
Hello list,

I have a table with 1000 partitions on PostgreSQL 16.
I notice that a fairly complicated query of the form:

SELECT ... GROUP BY ... LIMIT ...

causes the postgres backend process's memory usage to grow extremely
fast, until the kernel OOM killer kills the process soon after.
It looks as if the backend tries to allocate at least 1000 * work_mem.
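For illustration, here is a minimal sketch of the shape I mean (table and
column names are made up, not my real schema):

CREATE TABLE events (id bigint, part_key int, payload text)
    PARTITION BY LIST (part_key);
-- ... plus 1000 partitions along the lines of:
-- CREATE TABLE events_0 PARTITION OF events FOR VALUES IN (0);

SELECT part_key, count(*)
  FROM events
 GROUP BY part_key
 LIMIT 10;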

Reducing work_mem avoids the crash, but it is a suboptimal workaround.
I have parallel plans disabled (max_parallel_workers_per_gather=0).
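
Both settings are session-level, so for my tests I have been toggling
them with something like the following (the 4MB value is just an
example):

SET max_parallel_workers_per_gather = 0;   -- disable parallel plans
SET work_mem = '4MB';                      -- low enough to avoid the OOM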

To add a bit more info on the execution plan, I believe the relevant part
is the 1000 HashAggregate nodes under Append:

->  Append
      ->  HashAggregate
            ->  Seq Scan
      ->  ... 999 more HashAggregate -> Seq Scan pairs, one per partition
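
For completeness, the fragment above is hand-abridged from plan output
obtained roughly like this (same made-up names as before):

EXPLAIN (COSTS OFF)
SELECT part_key, count(*)
  FROM events
 GROUP BY part_key
 LIMIT 10;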


Is this allocation pattern (work_mem * n_partitions) expected under any
scenario? I can't find it documented.  AFAIU the backend should allocate
at most around (depth_of_execution_plan * work_mem), putting aside
hash_mem_multiplier and parallel workers.
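
To put rough numbers on it: with work_mem = 32MB (again just an example
value), 1000 HashAggregate nodes could reserve up to 1000 * 32MB, i.e.
around 32GB, whereas a plan only a few nodes deep would stay in the low
hundreds of MB.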

NOTE: after writing the above, it occurred to me that I have
enable_partitionwise_aggregate=on. Turning it off fixes the issue and
even makes the query faster! Is this expected behaviour, or a bug?
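
That is, the whole difference comes down to one session-level switch:

SET enable_partitionwise_aggregate = off;  -- the default, as it happens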

Thank you in advance,
Dimitris

P.S. In the meantime I'm trying to reduce the query and the table schema,
in order to submit a precise bug report with repro instructions.



