PG Bug reporting form <noreply@postgresql.org> writes:
> We noticed the same issue described below occurring multiple times in some
> of our many Postgres 9.5 deployments.
> The application makes extensive use of temporary tables. Although there are
> scheduled jobs that vacuum the database regularly, after some months of
> uptime the database hits the 1-million-transaction wraparound safety limit.
Couple of possibilities here ...
Autovacuum, and manual vacuum too, cannot do anything with temp tables
belonging to other sessions. (It's physically impossible because the
live data for a temp table is likely to exist only inside the memory
of the owning session.) So, if you have temp tables that survive long
enough to need vacuuming because of wraparound considerations, then
the owning session has to do that itself.
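For illustration only: if you do have such a long-lived session, it could
look after its own temp tables with something along these lines (the table
name is just a made-up example):

    -- inside the session that owns the temp table
    CREATE TEMP TABLE scratch (id int, payload text);

    -- ... later, once age(relfrozenxid) has grown large ...
    VACUUM FREEZE scratch;   -- guarantees relfrozenxid is advanced
    -- or, if the contents are disposable anyway:
    TRUNCATE scratch;        -- new relfilenode, relfrozenxid reset as well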
Alternatively, maybe you don't have any temp-table-owning sessions
that live that long, but for some reason some temp tables failed to
get dropped when their owning session exited. (AFAIK this could only
happen as part of a database crash --- is that a common occurrence
for you?) They're still there and still holding back the global XID
horizon, so the only way to fix things is to forcibly drop them.
PG versions later than 9.5 have added increasingly aggressive rules for
automatically dropping such orphan temp tables. From memory, 9.5 does
have some behavior of that sort; but it's evidently not firing for you.
What you'll need to do is find the old tables (look for pg_class entries
with large values of age(relfrozenxid)) and manually DROP those tables;
a sample query is sketched below.
I would not recommend the wholesale removal of temp schemas shown in
your script, because that carries too much risk of removing tables
that are in active use.
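To find the candidates, something like this should work (the 100-million
age cutoff is only an illustrative threshold; pick whatever margin suits
your situation):

    -- temp tables, ordered by how far back they hold the XID horizon
    SELECT n.nspname, c.relname, age(c.relfrozenxid) AS xid_age
    FROM pg_class c
         JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relpersistence = 't'            -- temp tables only
      AND c.relkind = 'r'
      AND age(c.relfrozenxid) > 100000000   -- example threshold
    ORDER BY xid_age DESC;

and then, once you've checked that no live session is still using a given
table, drop it by its schema-qualified name, e.g.

    DROP TABLE pg_temp_NN.some_table;   -- placeholder names, from the query above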
regards, tom lane