Re: TRUNCATE memory leak with temporary tables? - Mailing list pgsql-general

From Tom Lane
Subject Re: TRUNCATE memory leak with temporary tables?
Msg-id 1989763.1622218701@sss.pgh.pa.us
In response to Re: TRUNCATE memory leak with temporary tables?  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
I wrote:
> However ... what seems odd is that we only get an inval message when
> there's an index involved.  In principle such messages need to be
> issued against the table as well.  I suspect that we're optimizing
> away the message for the table, either on the grounds that it's temp
> or that it's new in the current transaction; but the index code path
> didn't get the same TLC.

Yeah, after a bit of digging, ExecuteTruncateGuts knows this:

         * Normally, we need a transaction-safe truncation here.  However, if
         * the table was either created in the current (sub)transaction or has
         * a new relfilenode in the current (sub)transaction, then we can just
         * truncate it in-place, because a rollback would cause the whole
         * table or the current physical file to be thrown away anyway.
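In code terms, that test boils down to checking two relcache fields. The following is a minimal sketch, not PostgreSQL's actual code: the field names `rd_createSubid` and `rd_newRelfilenodeSubid` match the real relcache fields, but the struct and helper are simplified stand-ins.

```c
#include <stdbool.h>

/* Simplified stand-ins for PostgreSQL's types; in the real code,
 * InvalidSubTransactionId is likewise zero. */
typedef unsigned int SubTransactionId;
#define InvalidSubTransactionId ((SubTransactionId) 0)

typedef struct RelSketch
{
    SubTransactionId rd_createSubid;         /* rel created in current (sub)xact */
    SubTransactionId rd_newRelfilenodeSubid; /* new relfilenode in current (sub)xact */
} RelSketch;

/* If either field is set, a rollback would discard the file anyway,
 * so TRUNCATE may clear the existing file in place instead of doing
 * a transaction-safe truncation. */
static bool
can_truncate_in_place(const RelSketch *rel)
{
    return rel->rd_createSubid != InvalidSubTransactionId ||
           rel->rd_newRelfilenodeSubid != InvalidSubTransactionId;
}
```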

The in-place-truncate code path isn't issuing any invalidation message,
which seems okay as far as I can think at the moment.  But that only
applies to truncating the table proper.  Indexes are "truncated" by
rebuilding them as empty, so that any required infrastructure such
as metapages will be recreated properly.  So that's largely the same
code path as a REINDEX, and index.c is careful to send out an inval
for that:

 * NOTE: an important side-effect of this operation is that an SI invalidation
 * message is sent out to all backends --- including me --- causing relcache
 * entries to be flushed or updated with the new data.  This must happen even
 * if we find that no change is needed in the pg_class row.  When updating
 * a heap entry, this ensures that other backends find out about the new
 * index.  When updating an index, it's important because some index AMs
 * expect a relcache flush to occur after REINDEX.

Maybe that could be optimized in this situation, but it's not totally
clear how, especially given the angle about index AM expectations.

Anyway, I don't see a lot of low-hanging fruit here as far as avoiding
storing such messages is concerned.

We could reduce the rate of the leak if inval.c were more stingy about
memory allocation for the invalidation message list.  In this particular
example, it's allocating space for 32 (FIRSTCHUNKSIZE) messages per
statement, but then only using 2.  So perhaps FIRSTCHUNKSIZE could be
reduced?  I'm not sure that anyone has ever made a survey of how many
messages different sorts of DDL tend to produce, so I suspect that number
was mostly plucked from the air.
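To illustrate the allocation pattern in question, here is a toy model of a chunked message list along the lines of inval.c's, with a 32-slot first chunk and doubling thereafter. This is an invented sketch for illustration, not the real data structure; the point is only that queuing 2 messages still pays for 32 slots.

```c
#include <stdlib.h>

/* Toy model of a chunked invalidation-message list.  The real code's
 * first chunk holds FIRSTCHUNKSIZE (32) messages; subsequent chunks
 * double in size. */
#define FIRSTCHUNKSIZE 32

typedef struct MessageChunk
{
    int    nitems;               /* messages actually stored */
    int    maxitems;             /* slots allocated */
    struct MessageChunk *next;
    int    msgs[];               /* stand-in for real message structs */
} MessageChunk;

static MessageChunk *
chunk_alloc(int maxitems)
{
    MessageChunk *c = malloc(sizeof(MessageChunk) + maxitems * sizeof(int));
    c->nitems = 0;
    c->maxitems = maxitems;
    c->next = NULL;
    return c;
}

/* Append one message, growing the chain with doubling chunks. */
static void
add_message(MessageChunk **head, int msg)
{
    if (*head == NULL)
        *head = chunk_alloc(FIRSTCHUNKSIZE);
    MessageChunk *c = *head;
    while (c->next)
        c = c->next;
    if (c->nitems >= c->maxitems)
    {
        c->next = chunk_alloc(c->maxitems * 2);
        c = c->next;
    }
    c->msgs[c->nitems++] = msg;
}

/* Total slots allocated across the chain, to show allocated vs. used. */
static int
allocated_slots(const MessageChunk *head)
{
    int n = 0;
    for (; head; head = head->next)
        n += head->maxitems;
    return n;
}
```

With this model, a statement that queues just 2 messages still allocates all 32 first-chunk slots, which is the per-statement overhead a smaller FIRSTCHUNKSIZE would shave down.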

A more aggressive idea that I've thought of but never done anything about
is that once we've accumulated more than a certain number of messages,
maybe we should just figure on issuing a full cache reset rather than
continuing to track individual inval events.  This'd not only bound
the storage required for pending invals, but very possibly be more
efficient when the time comes to actually process them.
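A sketch of that degrade-to-full-reset idea, assuming a purely hypothetical cutoff (the struct, function, and threshold are all invented for illustration, not anything in the tree):

```c
#include <stdbool.h>

/* Hypothetical cutoff; the real number would need tuning. */
#define MAX_TRACKED_INVALS 1000

typedef struct PendingInvals
{
    int  nmessages;      /* individual messages tracked so far */
    bool full_reset;     /* true: drop the list, issue a cache reset instead */
} PendingInvals;

/* Record one more invalidation.  Past the threshold, stop tracking
 * individual messages and just remember that a full cache reset must
 * be issued, bounding the storage for pending invals. */
static void
note_invalidation(PendingInvals *p)
{
    if (p->full_reset)
        return;                  /* already degraded; nothing to store */
    if (++p->nmessages > MAX_TRACKED_INVALS)
    {
        p->full_reset = true;
        p->nmessages = 0;        /* list can be freed at this point */
    }
}
```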

            regards, tom lane


