Memory overhead of a large number of partitions in the same table - Mailing list pgsql-general

From César Muñoz
Subject Memory overhead of a large number of partitions in the same table
Msg-id CAMXZnGjLO2V+4nht7xvv04K4hE6w3DKbRZZKky1t6eYSEa9Sug@mail.gmail.com
List pgsql-general
I would like to understand how much partitioning can contribute to memory usage in Postgres backend processes.

My application mainly runs SELECT statements against a partitioned table. The table is partitioned by size (100K rows per partition, 730 partitions in total). However, each Postgres backend process running these SELECTs takes ~300 MB of memory, which seems like a lot compared to what I've seen reported elsewhere.
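For context, the layout is roughly equivalent to the sketch below. The table and column names are made up and I'm assuming a plain id-range scheme; the real partitions are created and managed by Partman.

CREATE TABLE events (
    id      bigint NOT NULL,
    payload text
) PARTITION BY RANGE (id);

-- One partition per 100K ids, 730 partitions in total.
CREATE TABLE events_p0 PARTITION OF events FOR VALUES FROM (0) TO (100000);
CREATE TABLE events_p1 PARTITION OF events FOR VALUES FROM (100000) TO (200000);
-- ... and so on up to events_p729.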

work_mem is set to 16MB, so nothing too crazy. Also, I am using Pss to measure the memory used by each backend process, so that shared_buffers doesn't skew the numbers. Here's the output for a couple of them:

Pss:             1260383 kB
Pss_Anon:         305501 kB
Pss_File:            199 kB
Pss_Shmem:        954682 kB

Pss:             1247796 kB
Pss_Anon:         293041 kB
Pss_File:            200 kB
Pss_Shmem:        954554 kB
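For reference, these values come from /proc/<backend_pid>/smaps_rollup on Linux (grep -E '^Pss' /proc/<backend_pid>/smaps_rollup); I grab the backend PID from the session itself, e.g.:

-- PID of the current backend; then, on the host:
--   grep -E '^Pss' /proc/<that pid>/smaps_rollup
SELECT pg_backend_pid();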

Is this expected? Or should the overhead of partitions be smaller in terms of memory consumption? I'm using Partman to partition the table.

Also, shared_buffers for this instance is 25GB, out of 60GB of total memory. I can provide output from EXPLAIN (ANALYZE, BUFFERS) if needed, but the main index used by the queries is fully in memory (it takes around 18GB of space).
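In case it helps narrow this down, I can also dump the backend's memory contexts from the same session that runs the SELECTs (the view is available since PostgreSQL 14, as far as I know), to see whether the relation/plan caches account for the ~300 MB of anonymous memory:

-- Largest memory contexts in the current backend.
SELECT name, parent, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;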
