Re: huge memory of Postgresql backend process - Mailing list pgsql-bugs

From: David Rowley
Subject: Re: huge memory of Postgresql backend process
Date:
Msg-id: CAApHDvrPnA44tUrMXtdtvjDzCBTWZ-3gfvJi9QUST39m=nbOiQ@mail.gmail.com
In response to: RE: huge memory of Postgresql backend process ("James Pang (chaolpan)" <chaolpan@cisco.com>)
List: pgsql-bugs
On Sat, 10 Sept 2022 at 22:50, James Pang (chaolpan) <chaolpan@cisco.com> wrote:
> We plan to convert one large partitioned table to multiple non-partitioned
> tables, for example, one 256-way HASH partitioned table to 256 small tables.
> The application will then query the small tables directly to get quick SQL
> response times. Does that help reduce backend process memory for "metadata"?
> I expect it to reduce the partition-related cache of the backend process,
> but the table/index count is the same.

Not really. The metadata (aka relcache) we're talking about is per
relation, and it is loaded into a backend's memory when the relation
is first accessed in that backend.  Both tables and partitions are relations.  A
partition may use slightly more memory in the relcache for storing the
partition constraint.  However, that's probably quite insignificant
compared to the other data stored in relcache.  The key here is likely
how many relations are being accessed from a given backend.  HASH
partitioning does tend to lend itself to many partitions being
accessed in a short space of time.  That's quite different from, say,
a RANGE partitioned table on time-series data with one partition per
month; in that case, your workload might only access one partition
per month.
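
As a rough sketch of how you might see that effect for yourself (this
assumes PostgreSQL 14 or later, which exposes the
pg_backend_memory_contexts view, and the "orders" table below is just a
made-up example): relcache entries live under CacheMemoryContext, so you
can watch that context grow as a backend plans queries that touch more
partitions:

-- Hypothetical 256-way HASH partitioned table, for illustration only.
CREATE TABLE orders (order_id bigint, payload text)
    PARTITION BY HASH (order_id);

DO $$
BEGIN
  FOR i IN 0..255 LOOP
    EXECUTE format('CREATE TABLE orders_p%s PARTITION OF orders
                    FOR VALUES WITH (MODULUS 256, REMAINDER %s)', i, i);
  END LOOP;
END $$;

-- Planning an equality lookup on the hash key prunes to one partition,
-- so only that partition's relcache entry is loaded in this backend...
EXPLAIN SELECT * FROM orders WHERE order_id = 42;

-- ...whereas a query without the partition key opens all 256.
EXPLAIN SELECT count(*) FROM orders;

-- Relcache memory sits under CacheMemoryContext; compare this before
-- and after the two EXPLAINs above.
SELECT name, used_bytes, total_bytes
FROM pg_backend_memory_contexts
WHERE name = 'CacheMemoryContext';

By contrast, against a RANGE partitioned time-series table where every
query carries a predicate on the partition key, the same check would
typically show only one or two partitions' worth of metadata being
loaded.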

You've not really explained your use case, so it's hard to know if
HASH partitioning is best suited for your schema or not.

David


