On Thu, 8 Sept 2022 at 08:05, James Pang (chaolpan) <chaolpan@cisco.com> wrote:
Hi, when I convert the partitioned table to a non-partitioned one, copy all the data over, and then restart the load test, a backend process consumes only 25 MB. With partitioned tables, PG v13 uses 160-170 MB per backend and PG v14 uses 130-138 MB per backend, so it is the partitioned tables that change the memory consumption. The dumped stats are backend (session) level cached plans, right? The test servers use shared connection pooling to run the same insert/update/delete transactions over multiple connections (we simulate 300 connections), so each session sees similar cached SQL plans. Part of the tables have a BEFORE UPDATE trigger, so an UPDATE fires the trigger, which calls a PL/pgSQL function. Another thing: even after a backend sits idle for a long time, it still keeps the same memory without releasing it back to the OS.
If you are using a connection pool, then the connections aren't closed, so I don't see this as an issue.
Dave
I only used psql to prepare the same SQL and run it in a loop, and I saw stable memory usage; maybe my psql test is not the same as the Java test code. I will check the test code details and see whether it is possible to dump more context details.
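For reference, a minimal psql reproduction along these lines might look as follows (the statement name, table, and columns are hypothetical stand-ins; `pg_backend_memory_contexts` is available from PG v14):

```sql
-- Prepare once, execute repeatedly from the client, then inspect
-- the current backend's cached-plan memory contexts.
PREPARE upd_test (int, text) AS
    UPDATE test_table SET val = $2 WHERE id = $1;

EXECUTE upd_test(1, 'x');
-- ... repeat EXECUTE in a loop from the client ...

-- Inspect cached-plan memory of the current backend (PG v14+):
SELECT name, ident, total_bytes
FROM pg_backend_memory_contexts
WHERE name LIKE 'Cached%'
ORDER BY total_bytes DESC;
```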
> interesting thing is we only see this issue by JDBC driver client
First of all, it turns out that a single UPDATE statement consumes 4 MB of server-side memory.
Then, it looks like you have **multiple** UPDATE statements in the server-side cache. It does sound strange that a single backend contains multiple entries for the same SQL text.
1) Would you please double-check that the SQL text is identical? Do you use bind variables?
2) Would you please double-check that you close statements after use (e.g. with try-with-resources)?
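The second point can be illustrated with a short sketch. `FakeStatement` below is a hypothetical stand-in for `java.sql.PreparedStatement` (so the example runs without a database); the point is that try-with-resources guarantees `close()` is called, which lets the JDBC driver release or reuse the server-side prepared statement instead of accumulating entries in the backend's plan cache:

```java
// Sketch: try-with-resources closes the statement automatically.
// FakeStatement is a hypothetical stand-in for PreparedStatement.
class FakeStatement implements AutoCloseable {
    boolean closed = false;
    void executeUpdate() { /* would send the UPDATE here */ }
    @Override public void close() { closed = true; }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        FakeStatement leaked;
        try (FakeStatement ps = new FakeStatement()) {
            ps.executeUpdate();
            leaked = ps;
        }
        // After the try block the statement has been closed.
        System.out.println("closed=" + leaked.closed);
    }
}
```

With real JDBC the same shape is `try (PreparedStatement ps = conn.prepareStatement(sql)) { ... }`; forgetting the close (or caching statements per call site without bounds) is one way a backend ends up holding multiple cached plans for the same SQL text.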
CachedPlan: 4204544 total in 13 blocks; 489400 free (4 chunks); 3715144 used: UPDATE WBXMEETINGINS
Frankly speaking, I am not sure the JDBC driver is in a position to predict that a single-line statement would consume that much server-side memory.
It would be nice if backend devs could optimize the memory consumption of the cached plan. If optimization is not possible, then it would be nice if the backend could provide clients with memory consumption of the cached plan. In other words, it would be nice if there was a status message or something that says "ok, by the way, the prepared statement S_01 consumes 2M".
James, the captured dump includes only the first 100 entries. Would you please try capturing more details via the following command?