Re: Adding REPACK [concurrently] - Mailing list pgsql-hackers

From Srinath Reddy Sadipiralla
Subject Re: Adding REPACK [concurrently]
Date
Msg-id CAFC+b6qNx4w-Ydyr=R3sC603CqMWQtpO+mxhksre+=kb3NQp0w@mail.gmail.com
In response to Re: Adding REPACK [concurrently]  (Antonin Houska <ah@cybertec.at>)
List pgsql-hackers
Hi Antonin,

On Wed, Apr 1, 2026 at 10:36 PM Antonin Houska <ah@cybertec.at> wrote:
Srinath Reddy Sadipiralla <srinath2133@gmail.com> wrote:

> I was fuzz testing v48 and found a crash when REPACK (concurrently) itself errors out,
> 1) while running
>
> create table test(id int);
> REPACK (concurrently) test;
>
> TBH I didn't know this; sometimes it's better not to know the "rules" ;)
> [NOTE: maybe we should note in repack.sgml that REPACK (concurrently)
> cannot run on a table without an identity index or primary key]
>
> ERROR:  cannot process relation "test"
> 2026-04-01 19:06:31.211 IST [2495575] HINT:  Relation "test" has no identity index.
> 2026-04-01 19:06:31.211 IST [2495575] STATEMENT:  repack (concurrently) test;
> TRAP: failed Assert("proc->statusFlags == ProcGlobal->statusFlags[proc->pgxactoff]"), File: "procarray.c", Line: 719, PID: 2495575
> Here's the diff to solve this crash.

Thanks. Attached here is v48-0006 fixed.

On Wed, Apr 1, 2026 at 8:25 PM Srinath Reddy Sadipiralla <srinath2133@gmail.com> wrote:
Here's the diff to solve this crash.
diff --git a/src/backend/commands/repack.c b/src/backend/commands/repack.c
index 29ba49744eb..d44092a407a 100644
--- a/src/backend/commands/repack.c
+++ b/src/backend/commands/repack.c
@@ -284,7 +284,23 @@ ExecRepack(ParseState *pstate, RepackStmt *stmt, bool isTopLevel)
  * that others can conflict with.
  */
  if ((params.options & CLUOPT_CONCURRENT) != 0)
+ {
+ /*
+ * Do not let other backends wait for our completion during their
+ * setup of logical replication. Unlike logical replication publisher,
+ * we will have XID assigned, so the other backends - whether
+ * walsenders involved in logical replication or regular backends
+ * executing also REPACK (CONCURRENTLY) - would have to wait for our
+ * completion before they can build their initial snapshot. It is o.k.
+ * for any decoding backend to ignore us because we do not change
+ * tuple descriptor of any table, and the data changes we write should
+ * not be decoded by other backends.
+ */
+ LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
  MyProc->statusFlags |= PROC_IN_CONCURRENT_REPACK;
+ ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
+ LWLockRelease(ProcArrayLock);
+ }
 
  /*
  * If a single relation is specified, process it and we're done ... unless
@@ -988,22 +1004,6 @@ rebuild_relation(Relation OldHeap, Relation index, bool verbose,
 
  if (concurrent)
  {
- /*
- * Do not let other backends wait for our completion during their
- * setup of logical replication. Unlike logical replication publisher,
- * we will have XID assigned, so the other backends - whether
- * walsenders involved in logical replication or regular backends
- * executing also REPACK (CONCURRENTLY) - would have to wait for our
- * completion before they can build their initial snapshot. It is o.k.
- * for any decoding backend to ignore us because we do not change
- * tuple descriptor of any table, and the data changes we write should
- * not be decoded by other backends.
- */
- LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
- MyProc->statusFlags |= PROC_IN_CONCURRENT_REPACK;
- ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;
- LWLockRelease(ProcArrayLock);
-
  /*
  * The worker needs to be member of the locking group we're the leader
  * of. We ought to become the leader before the worker starts. The
 
As in my earlier diff, shouldn't we remove the now-duplicate code from
rebuild_relation? Or am I missing something?


--
Thanks,
Srinath Reddy Sadipiralla
EDB: https://www.enterprisedb.com/
