Re: Large number of partitions of a table - Mailing list pgsql-admin

From Tom Lane
Subject Re: Large number of partitions of a table
Date
Msg-id 268771.1642489687@sss.pgh.pa.us
In response to Re: Large number of partitions of a table  (Victor Sudakov <vas@sibptus.ru>)
Responses Re: Large number of partitions of a table  (Victor Sudakov <vas@sibptus.ru>)
List pgsql-admin
Victor Sudakov <vas@sibptus.ru> writes:
> Tom Lane wrote:
>> Well, yeah, actually.  An ill-advised query will blow out your backend's
>> memory consumption, potentially leading to a SIGKILL from the dreaded OOM
>> killer[1] (if you're on Linux), resulting in a backend crash and cluster
>> restart.  

> Why should "SELECT COUNT(*) FROM t" ever consume more than work_mem
> even if t has 10000 partitions?

Sure, COUNT(*)'s runtime memory consumption is negligible.
But you're not thinking about overhead --- specifically,

1. 10000 relcache entries for the base tables.

2. If you have N indexes per table, N*10000 relcache entries for
the indexes.  (The planner will probably have looked at all those
indexes, even if it didn't find any use for them.)

3. 10000 SeqScan plan nodes and associated rangetable entries.

4. Likewise, 10000 instances of executor per-node state.

5. 10000 lock table entries (both shared and local lock tables).

6. Probably a few per-relation things I didn't think of.

Admittedly, each of those individual data structures is going
to be at most in the few-kilobytes range, so this is going to total
somewhere in the hundreds of megabytes, which you can probably
tolerate on modern hardware --- unless some dozens of sessions
are all doing the same kind of thing.
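
If you want to watch this happen, something along these lines should
show the per-backend footprint growing with the partition count.  This
is only a rough sketch (run it in psql; pg_backend_memory_contexts
needs v14 or later, and the table/partition names are invented):

    -- Build a hash-partitioned table with 10000 partitions, scan it,
    -- then look at this backend's lock and memory usage.
    -- (Names here are just placeholders; pg_backend_memory_contexts
    -- requires PostgreSQL 14 or later.)
    CREATE TABLE t (id bigint, payload text) PARTITION BY HASH (id);

    SELECT format('CREATE TABLE t_p%1$s PARTITION OF t
                   FOR VALUES WITH (MODULUS 10000, REMAINDER %1$s)', i)
    FROM generate_series(0, 9999) AS i \gexec

    BEGIN;
    SELECT count(*) FROM t;   -- plans and scans every partition

    -- one lock table entry per relation touched, held till commit
    SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();

    -- memory currently allocated in this backend (by now mostly
    -- relcache and catalog cache entries for all those partitions)
    SELECT pg_size_pretty(sum(total_bytes))
      FROM pg_backend_memory_contexts;
    COMMIT;

The exact numbers will depend on your build and settings, but the way
they scale with the number of partitions is the point.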

Basically, this is likely to work until it doesn't ... and
when it doesn't, you're going to be in the office at 3AM
trying to wrangle a recalcitrant system.  Just sayin'.

            regards, tom lane


