Re: Suggestions to overcome 'multixact "members" limit exceeded' in temporary tables - Mailing list pgsql-hackers

From Jim Vanns
Subject Re: Suggestions to overcome 'multixact "members" limit exceeded' in temporary tables
Date
Msg-id CAH7vdhMB1Tej0XQC8oObDF_iraiknmjOmBkEEf4=m9abbtBLNg@mail.gmail.com
In response to Suggestions to overcome 'multixact "members" limit exceeded' in temporary tables  (Jim Vanns <jvanns@ilm.com>)
List pgsql-hackers
I've observed that the performance degradation with TRUNCATE appears to happen when other ancillary processes that are also heavy users of temporary tables are running. Would using a dedicated tablespace improve things?

Cheers

Jim


On Wed, 31 Jul 2024 at 15:16, Jim Vanns <jvanns@ilm.com> wrote:
I've reached the limit of my understanding and of my attempts at correcting my code's use of temporary tables in the face of multixact members, and have come to ask for your help! Here's a brief description of my software:

- A pool of N connection sessions, persistent for the duration of the program's lifetime.
- Upon each session's initialisation, a set of CREATE TEMPORARY TABLE ... ON COMMIT DELETE ROWS statements are made for bulk ingest.
- Each session is acquired by a thread when ingesting data, so each temporary table remains until the session is terminated.
- The thread performs a COPY <temp table> FROM STDIN in binary format,
- then an INSERT INTO <main table> SELECT FROM <temp table> WHERE...
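The per-session setup and per-batch ingest described above can be sketched roughly as follows (the table and column names, and the dedup predicate, are illustrative assumptions, not from the original):

```sql
-- Once per session: a staging table that survives across transactions
-- but is emptied automatically at each COMMIT.
CREATE TEMPORARY TABLE staging (LIKE main_table INCLUDING DEFAULTS)
    ON COMMIT DELETE ROWS;

-- Per ingest batch, inside one transaction:
BEGIN;
COPY staging FROM STDIN (FORMAT binary);
INSERT INTO main_table
SELECT s.*
FROM   staging s
WHERE  NOT EXISTS (SELECT 1 FROM main_table m WHERE m.id = s.id);
COMMIT;
```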

This has been working great for a while, with excellent throughput. However, upon scaling up I eventually hit this error:

ERROR:  multixact "members" limit exceeded
DETAIL:  This command would create a multixact with 2 members, but the remaining space is only enough for 0 members.
HINT:  Execute a database-wide VACUUM in database with OID 16467 with reduced vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age settings.
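Acting on that HINT manually looks roughly like this (a sketch; these two settings are user-settable per session, and 16467 is the database OID from the error). Note that a manual VACUUM cannot process another session's temporary tables, which is consistent with the error recurring here:

```sql
-- Connect to the database whose OID is 16467, then:
SET vacuum_multixact_freeze_min_age = 0;
SET vacuum_multixact_freeze_table_age = 0;
VACUUM;  -- database-wide, freezes multixacts aggressively
```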

It took me quite a while to identify that the error appears to be coming from the temporary tables (the other 'main' tables were being autovacuumed fine), which makes sense: the temporary tables have a long lifetime, aren't autovacuumed, and are shared by successive transactions.

I first attempted to overcome this by introducing an initial step of always creating the temporary table before the COPY (using ON COMMIT DROP instead), but this led to a terrible performance degradation.
Next, I reverted the above and instead introduced a VACUUM step every 1,000,000 (configurable) ingest operations.
Finally, I introduced a TRUNCATE step in addition to the occasional VACUUM, since the TRUNCATE allowed the COPY option of FREEZE.
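The mitigation steps above amount to something like the following per-session maintenance (a sketch; the every-N-operations counter lives in the application, and the table names are illustrative):

```sql
-- Occasionally (e.g. every N ingest operations), vacuum this
-- session's own temporary table:
VACUUM staging;

-- Per batch: COPY ... FREEZE is only permitted when the table was
-- created or truncated in the current (sub)transaction, hence the
-- TRUNCATE inside the same transaction as the COPY.
BEGIN;
TRUNCATE staging;
COPY staging FROM STDIN (FORMAT binary, FREEZE true);
INSERT INTO main_table SELECT * FROM staging;
COMMIT;
```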

The new overhead appeared minimal, but after several hours I again hit a performance degradation, this time seemingly dominated by the TRUNCATE.

My questions are:

1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the temporary table)?
2) Is there really any benefit to using FREEZE here or is it best to just VACUUM the temporary tables occasionally?
3) Is there a better way of managing all this!? Perhaps re-creating the temporary tables every day, or something?
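On question 3, one way to judge how urgently maintenance is needed is to monitor the multixact age per database directly (mxid_age() is available in the PG 15 used here; the threshold to compare against is an assumption based on the default setting):

```sql
-- Per-database multixact age; compare against
-- autovacuum_multixact_freeze_max_age (default 400 million).
SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM   pg_database
ORDER  BY multixact_age DESC;
```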

I understand that I could create a Linux tmpfs partition as a tablespace for the temporary tables, which may speed up the TRUNCATE, but that seems like a hack and I'd rather avoid it if possible.
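For completeness, the tmpfs workaround mentioned above would look roughly like this (a sketch; the mount path and tablespace name are illustrative, and since tmpfs contents vanish on reboot this is only safe for temporary objects):

```sql
-- As superuser, with /mnt/pgtmp mounted as tmpfs and owned by postgres:
CREATE TABLESPACE fast_temp LOCATION '/mnt/pgtmp';

-- Then per session (or via postgresql.conf) direct temporary
-- objects to it:
SET temp_tablespaces = 'fast_temp';
```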

Thanks for your help,

Jim

PS: The PG version in use is 15.4, if that matters here.

--
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


