Hi,
The idea looks good and efficient, though I have some feedback. The first point is about logical replication slots.
The patch checks whether there is an active walsender process, but it is possible to create a logical replication
slot and wait until a subscriber connects to it. During that window the patch would cause PostgreSQL to close the
WAL segments kept in memory, and once the subscriber connects it has to read the WAL files from disk. This is a
trade-off, though, and can be decided by others.
+/*
+ * Return true if there's at least one active walsender process
+ */
+bool
+WalSndRunning(void)
+{
+ int i;
+
+ for (i = 0; i < max_wal_senders; i++)
+ {
+ WalSnd *walsnd = &WalSndCtl->walsnds[i];
+
+ SpinLockAcquire(&walsnd->mutex);
+ if (walsnd->pid > 0)
+ {
+ SpinLockRelease(&walsnd->mutex);
+ return true;
+ }
+ SpinLockRelease(&walsnd->mutex);
+ }
+ return false;
+}
+
Secondly, using a spinlock to check for running walsender processes can make the recovery process inefficient.
Assuming a database with max_wal_senders set to 20+ and producing more than 4-5 TB of WAL per day, this can cause an
additional 100-200 spinlock acquisitions per second on the walreceiver side. Put simply, WalSndRunning() scans every
walsender slot with spinlocks on every segment switch, contending with all active walsenders updating their own
slots. On high-throughput standbys this creates unnecessary cross-process spinlock contention in the recovery hot
path, exactly the path that should be as lean as possible for fast replay. Maybe you could maintain a single
pg_atomic_uint32 counter in WalSndCtlData instead and achieve the same result with zero contention.
Regards.