Re: Memory exhaustion due to temporary tables? - Mailing list pgsql-general

From Thomas Carroll
Subject Re: Memory exhaustion due to temporary tables?
Date
Msg-id 1053026855.14476215.1546894481622@mail.yahoo.com
In response to Memory exhaustion due to temporary tables?  (Thomas Carroll <tomfecarroll@yahoo.com>)
List pgsql-general
Tom Lane wrote:

> In the meantime, you might think about switching over to some process
> that doesn't create and drop the same table constantly.  Perhaps
> along the lines of

> create temp table if not exists tt_preTally (...) on commit delete rows;

> if (tt_preTally contains no rows) then
>  insert into tt_preTally select ...
> end if;

> This'd have the advantage of reducing catalog churn in other catalogs
> besides pg_statistic, too.

Just to close the loop: switching to the CREATE TEMP TABLE IF NOT EXISTS ... ON COMMIT DELETE ROWS approach helped a lot.  There is still a tiny increase in memory over time, but this is much, much better.
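
For the archives, here is a minimal sketch of what that pattern can look like wrapped in a PL/pgSQL function. The function name, the tt_preTally columns, and the generate_series source query are placeholders invented for illustration; only the CREATE TEMP TABLE IF NOT EXISTS ... ON COMMIT DELETE ROWS shape comes from the thread:

  CREATE OR REPLACE FUNCTION run_tally() RETURNS bigint
  LANGUAGE plpgsql AS
  $$
  DECLARE
      result bigint;
  BEGIN
      -- Created at most once per session; ON COMMIT DELETE ROWS empties
      -- the table at each commit instead of dropping it, so repeated
      -- calls do not churn pg_class, pg_attribute, or pg_statistic.
      CREATE TEMP TABLE IF NOT EXISTS tt_preTally (
          id  bigint,
          val text
      ) ON COMMIT DELETE ROWS;

      -- Repopulate only when empty, i.e. on the first call in a
      -- given transaction.
      IF NOT EXISTS (SELECT 1 FROM tt_preTally) THEN
          INSERT INTO tt_preTally
          SELECT g, 'row ' || g
          FROM generate_series(1, 100) AS g;  -- placeholder source query
      END IF;

      SELECT count(*) INTO result FROM tt_preTally;
      RETURN result;
  END;
  $$;

Because the table itself survives for the life of the session, its catalog entries are written once rather than on every call.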

Thanks for the help; I'm amazed at the responsiveness of Mr. Lane and this community!

Tom Carroll

On Monday, December 10, 2018, 4:59:47 PM EST, Laurenz Albe <laurenz.albe@cybertec.at> wrote:


Thomas Carroll wrote:

> We have a situation where a long-persistent Postgres connection consumes more
> and more memory.  If ignored, we eventually get “Cannot allocate memory”
> errors in the Postgres log.  If still ignored, the box will eventually crash.
> This takes about 3 weeks to happen.  It issues a call to a single function
> about once every 15 seconds.  What can I do to prevent this outcome?
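
(Schematically, and purely as a guess at what such a function might do, since the real one is not shown in the thread: the usage that produces this kind of growth is a function that creates and drops a temp table on every call, for example:

  CREATE OR REPLACE FUNCTION leaky_tally() RETURNS bigint
  LANGUAGE plpgsql AS
  $$
  DECLARE
      result bigint;
  BEGIN
      -- Each call creates a brand-new temp table and drops it again,
      -- so the long-lived session keeps accumulating catalog and
      -- cache baggage (see the pg_statistic churn mentioned above).
      CREATE TEMP TABLE tt_preTally AS
          SELECT g AS id FROM generate_series(1, 100) AS g;  -- placeholder query
      SELECT count(*) INTO result FROM tt_preTally;
      DROP TABLE tt_preTally;
      RETURN result;
  END;
  $$;
)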


There should be a memory context dump in the log file.
What does it look like?

Yours,
Laurenz Albe

Thanks for your reply.  There are many memory context dumps.