Re: Don't keep closed WAL segment in page cache after replay - Mailing list pgsql-hackers

From Hüseyin Demir
Subject Re: Don't keep closed WAL segment in page cache after replay
Date
Msg-id 177243929182.626.15849688898446231987.pgcf@coridan.postgresql.org
In response to Re: Don't keep closed WAL segment in page cache after replay  (Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>)
Responses Re: Don't keep closed WAL segment in page cache after replay
List pgsql-hackers
Hi, 

The idea looks good and efficient, although I have some feedback. The first point concerns logical replication slots.

The patch checks whether there is an active walsender process. However, it is possible to create a replication slot and
wait until a subscriber connects to it. During that window, the patch makes PostgreSQL evict the closed WAL segments
from the page cache, so once the subscriber connects, the walsender has to read the WAL files back from disk. But that
is a trade-off, and others may weigh it differently.
 

+/*
+ * Return true if there's at least one active walsender process
+ */
+bool
+WalSndRunning(void)
+{
+   int         i;
+
+   for (i = 0; i < max_wal_senders; i++)
+   {
+       WalSnd     *walsnd = &WalSndCtl->walsnds[i];
+
+       SpinLockAcquire(&walsnd->mutex);
+       if (walsnd->pid > 0)
+       {
+           SpinLockRelease(&walsnd->mutex);
+           return true;
+       }
+       SpinLockRelease(&walsnd->mutex);
+   }
+   return false;
+}
+

Secondly, using spinlocks to check for running walsender processes can make the recovery process inefficient. Assuming
a database with max_wal_senders set to 20+ and producing more than 4-5 TB of WAL per day, this can add an extra 100-200
spinlock acquisitions per second in the walreceiver. Put simply, WalSndRunning() scans every walsender slot under its
spinlock on every segment switch, contending with all active walsenders updating their own slots. On high-throughput
standbys this creates unnecessary cross-process spinlock contention in the recovery hot path, the exact path that
should be as lean as possible for fast replay. Maybe you could instead maintain a single pg_atomic_uint32 counter in
WalSndCtlData and achieve the same result with no contention on the read side.
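To illustrate the shape of what I mean, here is a minimal standalone sketch of the counter approach using C11 atomics
in place of PostgreSQL's pg_atomic_uint32 API. The names (active_walsenders, walsnd_attach/walsnd_detach) are
hypothetical, not from the patch: the counter would live in WalSndCtlData, be bumped when a walsender claims a slot,
dropped in its on-exit callback, and WalSndRunning() becomes a single lock-free load.

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Illustrative sketch only: a single atomic count of active walsenders,
 * standing in for a pg_atomic_uint32 field in WalSndCtlData.
 */
static atomic_uint active_walsenders;

/* Would be called when a walsender starts up and claims its slot. */
void
walsnd_attach(void)
{
    atomic_fetch_add(&active_walsenders, 1);
}

/* Would be called from the walsender's on-exit cleanup. */
void
walsnd_detach(void)
{
    atomic_fetch_sub(&active_walsenders, 1);
}

/*
 * Replacement for WalSndRunning(): one atomic read instead of scanning
 * max_wal_senders slots under their spinlocks on every segment switch.
 */
bool
walsnd_running(void)
{
    return atomic_load(&active_walsenders) > 0;
}
```

In the actual tree this would use pg_atomic_init_u32/pg_atomic_fetch_add_u32/pg_atomic_read_u32 on a field in
WalSndCtlData, but the contention argument is the same: writers touch the counter only at walsender start and exit,
so the recovery-side check is contention-free.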
 

Regards.
