On 2026-Apr-03, Antonin Houska wrote:
> This is an alternative implementation of 0006, allowing one backend running
> REPACK (CONCURRENTLY) in a database, instead of one backend in a cluster.
Thanks! So I'm removing the previous one and taking this one. Here's
v50:
- In testing, I noticed that we could sometimes request a flush for a
  WAL position that hasn't been written yet. This is because I replaced
  the original code, which wrote a dummy xlog record that we could
  flush, with a call to XLogGetInsertRecPtr(). So we'd get an error
  like
LOG: request to flush past end of generated WAL; request 0/15CF0018, current position 0/15CF000
  Antonin promptly noticed that this is because XLogGetInsertRecPtr()
  returns the LSN just past the segment header, which is 18 bytes off.
  So the fix here is to use XLogGetInsertEndRecPtr() instead.
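  As a rough SQL-level analogue of the distinction at play (these are the
  stock monitoring functions, not the internal calls the patch touches):
  the insert position can run ahead of what has actually been written
  out, and flushing up to a position that hasn't been written is exactly
  the failure mode above.

  ```sql
  -- insert_pos (where the next record will go) may be ahead of
  -- write_pos (what has actually been written out so far).
  SELECT pg_current_wal_insert_lsn() AS insert_pos,
         pg_current_wal_lsn()        AS write_pos;
  ```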
- My testing also uncovered a problem with exclusion constraints: tables
  that have them would fail to repack with an error like
  ERROR:  exclusion constraint record missing for rel temporal_fk_mltrng2mltrng_pk_repacknew
  Antonin sent a patch to create copies of the constraints on the
  transient index, which seems to fix the problem. Those constraints
  are of course discarded together with the transient index.
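  For context, a minimal example of the kind of table affected (the
  table definition is illustrative, not taken from the regression
  tests; the REPACK invocation follows the syntax discussed in this
  thread):

  ```sql
  -- Illustrative: a table with an exclusion constraint. Before the fix,
  -- repacking such a table concurrently failed with
  -- "exclusion constraint record missing" for the transient index.
  CREATE TABLE booking (
      during tsrange,
      EXCLUDE USING gist (during WITH &&)
  );
  -- REPACK (CONCURRENTLY) booking;
  ```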
- I polished the patch to reserve replication slots for REPACK. Since
  the newly submitted implementation of 0006 implies that we can now
  run multiple repacks concurrently, I changed the default from 1 to 5.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
"Here's a general engineering tip: if the non-fun part is too complex for you
to figure out, that might indicate the fun part is too ambitious." (John Naylor)
https://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com