Re: huge memory of Postgresql backend process - Mailing list pgsql-bugs

From David Rowley
Subject Re: huge memory of Postgresql backend process
Date
Msg-id CAApHDvq+f8VwftnPQByZF==e+wjBiaQcEuPO2dye9q1ux4tpnA@mail.gmail.com
In response to RE: huge memory of Postgresql backend process  ("James Pang (chaolpan)" <chaolpan@cisco.com>)
List pgsql-bugs
On Tue, 13 Sept 2022 at 20:50, James Pang (chaolpan) <chaolpan@cisco.com> wrote:
>    Server prepared SQL statement memory allocation depends on partition count. For a 256-partition count, it
> initially asks for about 60MB of memory at one time, then several MB of memory for the following SQL statements,
> depending on the statement. Is this kind of memory allocation expected? Or could memory allocation be fine-tuned
> for the "large connections use case" to avoid out-of-memory issues?
>   Another interesting point is, when using psql .. -h localhost, the memory allocation is much less (only several
> MB) with the above test. A local connection runs OK with the same SQL and same table, but a remote connection
> needs much more memory instead.

It's important to understand that JDBC is probably using PREPAREd
statements. If you're just using psql to execute the queries directly,
then no query plan is stored in the backend memory after the execution
of the query has completed. With PREPAREd statements, the plan will be
stored after the first EXECUTE and only released when you DEALLOCATE
the cached plan or close the connection.  In PostgreSQL 14 and later,
you can use "select * from pg_get_backend_memory_contexts();" to get a
better understanding of the memory allocations within the backend
you're querying from. Look out for rows with name set to
CachedPlanSource and CachedPlanQuery.
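
For example, a minimal psql session sketching this (the table name "orders" is hypothetical; any partitioned table will do, and the memory numbers will vary):

```sql
-- Prepare once; the first EXECUTE builds and caches the plan
PREPARE q (int) AS SELECT * FROM orders WHERE id = $1;
EXECUTE q(42);

-- PostgreSQL 14+: inspect this backend's memory contexts
SELECT name, sum(used_bytes) AS used, sum(total_bytes) AS total
FROM pg_get_backend_memory_contexts()
WHERE name IN ('CachedPlanSource', 'CachedPlanQuery')
GROUP BY name;

-- Releasing the statement frees the cached plan
DEALLOCATE q;
```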

David


