Re: Adding REPACK [concurrently] - Mailing list pgsql-hackers
| From | Antonin Houska |
|---|---|
| Subject | Re: Adding REPACK [concurrently] |
| Date | |
| Msg-id | 27869.1777985266@localhost |
| In response to | Re: Adding REPACK [concurrently] (Antonin Houska <ah@cybertec.at>) |
| Responses | Re: Adding REPACK [concurrently] |
| List | pgsql-hackers |
Antonin Houska <ah@cybertec.at> wrote:
> Mihail Nikalayeu <mihailnikalayeu@gmail.com> wrote:
>
> > On Mon, Apr 27, 2026 at 6:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > Alvaro, others, what is your take on this?
> >
> > I agree with you here - we should AT LEAST make that an ERROR instead
> > of an assert, and also check it during cache access (not only during
> > the scan, because of cache misses).
> > But I think it will still be fragile if certain extensions are installed.
> >
> > Anyway... We also have an issue with correctness right now.
> >
> > I took the old stress tests from [0] (the first two) and they fail now,
> > even with the fix from [1] ("Possible premature SNAPBUILD_CONSISTENT
> > with DB-specific running_xacts").
> >
> > It looks like [1] fixes 008_repack_concurrently.pl, but
> > 007_repack_concurrently.pl fails anyway, including
> >
> > pgbench: error: client 1 script 0 aborted in command 10 query 0:
> > ERROR: could not create unique index "tbl_pkey_repacknew"
> > # DETAIL: Key (i)=(383) is duplicated.
> > and
> > 'pgbench: error: pgbench:client 23 script 0 aborted in command 31
> > query 0: ERROR: division by zero
> >
> > The last one is not MVCC-related; you can see from the logs that it
> > performs something like SELECT (509063) / 0 when the table sum
> > changes.
> >
> > Setting need_shared_catalogs = true makes them pass, so something is
> > wrong with its correctness.
>
> Thanks for testing again. Whether we keep the "database specific slots" or
> not, it'd be good to know what exactly the reason for these errors is. I
> wonder if the feature just exposes a problem that otherwise stays hidden due
> to contention on the replication slot. I'm going to investigate.
I think the problem is that, with a database-specific snapshot,
SnapBuildProcessRunningXacts() returns early, without adjusting builder->xmin:
    /*
     * Database specific transaction info may exist to reach CONSISTENT state
     * faster, however the code below makes no use of it. Moreover, such
     * record might cause problems because the following normal (cluster-wide)
     * record can have lower value of oldestRunningXid. In that case, let's
     * wait with the cleanup for the next regular cluster-wide record.
     */
    if (OidIsValid(running->dbid))
        return;
and thus some transactions whose XID is below running->oldestRunningXid may
continue to be incorrectly considered running.
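For context, here is a rough sketch of the cluster-wide cleanup that this early
return skips. It is written from memory rather than quoted from snapbuild.c, so
the exact statements may differ, but the function and field names are the
upstream ones:

    /*
     * Approximate sketch of what SnapBuildProcessRunningXacts() does for a
     * regular (cluster-wide) record after the point of the early return.
     */

    /* Advance the lower bound of XIDs the builder still has to track. */
    builder->xmin = running->oldestRunningXid;

    /* Forget tracked transactions whose XID is now below that bound. */
    SnapBuildPurgeOlderTxn(builder);

With the early return, none of this happens for the database-specific record,
so builder->xmin stays at its previous value.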
I originally thought that this should not happen, because such transactions
will be added to the builder's array of committed transactions by
SnapBuildCommitTxn() anyway. However, I failed to notice that the COMMIT record
of a transaction listed in the xl_running_xacts WAL record is not guaranteed to
follow the xl_running_xacts record in WAL. In other words, even if the
xl_running_xacts data is collected before the COMMIT record of a contained
transaction is written, the xl_running_xacts record may end up at a higher LSN
in the WAL. So the cleanup I relied on might not take place.
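Just to illustrate the ordering with made-up XIDs and LSNs (nothing here is
taken from the failing tests):

1. A backend collects the running-xacts data; XID 100 is still in progress, so
   it is included in the list.
2. Before that data is inserted into WAL, XID 100 commits and its COMMIT record
   is written, say at LSN 0/1000.
3. The xl_running_xacts record, still listing XID 100 as running, is only
   inserted at LSN 0/1010.
4. If decoding effectively starts at the xl_running_xacts record, the COMMIT at
   0/1000 is never processed, so XID 100 ends up neither in the builder's array
   of committed transactions nor below the (un-advanced) builder->xmin, and
   keeps being treated as running.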
I've got no good idea how to fix that, and I'm not sure I'm able to pursue the
"database-specific snapshots" feature now.
--
Antonin Houska
Web: https://www.cybertec-postgresql.com