Thread: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
Hi,

It looks like logical replication subscribers receive quorum-uncommitted
transactions even before the synchronous (sync) standbys do. Most of the
time this is okay, but it can be a problem if the primary goes down/crashes
(while the primary is in SyncRepWaitForLSN) before the quorum commit is
achieved (i.e. before the sync standbys receive the committed txns from the
primary) and the failover has to happen to a sync standby. The subscriber
would have received the quorum-uncommitted txns whereas the sync standbys
haven't. After the failover, the new primary (the old sync standby) would be
behind the subscriber, i.e. the subscriber will be seeing data that the new
primary can't. Is there a way the subscriber can get back in sync with the
new primary? In other words, can we reverse the effects of the
quorum-uncommitted txns on the subscriber? The naive way is to do it
manually, but that doesn't seem elegant.

We performed a small experiment to observe the above behaviour with 1
primary, 1 sync standby and 1 subscriber:

1) Have a wait loop in SyncRepWaitForLSN (a temporary hack to illustrate
the standby receiving the txn a bit late, or failing to receive it)
2) Insert data into a table on the primary
3) The primary waits, i.e. the insert query hangs (because of the wait loop
hack), before the local txn is sent to the sync standby, whereas the
subscriber receives the inserted data.
4) If the primary crashes/goes down and is unable to come up, and the
failover happens to the sync standby (which didn't receive the data that
got inserted on the primary), the subscriber would see data that the sync
standby can't.

This looks to be a problem. A possible solution is to let the subscribers
receive the txns only after the primary achieves quorum commit (gets out of
SyncRepWaitForLSN, or after all sync standbys have received the txns). The
logical replication walsenders can wait until the quorum commit is obtained
and then send the WAL. A new GUC can be introduced to control this, the
default being the current behaviour.

Thoughts?

Thanks Satya (cc-ed) for the use-case and off-list discussion.

Regards,
Bharath Rupireddy.
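To make the failure mode concrete, here is a standalone toy model of the
timeline above (a sketch for illustration only - the names and LSN values
are invented, and this is not PostgreSQL code):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;    /* stand-in for PostgreSQL's WAL position type */

int
main(void)
{
    /* primary commits a txn locally, then stalls in SyncRepWaitForLSN */
    XLogRecPtr primary_flush = 0x200;   /* txn is durable on the primary */
    XLogRecPtr sync_standby  = 0x100;   /* sync standby never receives it */
    XLogRecPtr subscriber    = 0x200;   /* logical walsender already sent it */

    /* primary crashes and cannot come back; the sync standby is promoted */
    XLogRecPtr new_primary = sync_standby;

    (void) primary_flush;
    if (subscriber > new_primary)
        printf("subscriber (%#llx) is ahead of the new primary (%#llx): "
               "it sees data the new primary never had\n",
               (unsigned long long) subscriber,
               (unsigned long long) new_primary);
    return 0;
}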
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
Consider a cluster formation where we have a Primary (P), a Sync Replica
(S1), and multiple async replicas for disaster recovery and read scaling
(within the region and outside the region). In this setup, S1 is the
preferred failover target in the event of a primary failure. When a
transaction is committed on the primary, it is not acknowledged to the
client until the primary gets an acknowledgment from the sync standby that
the WAL is flushed to disk (assume the synchronous_commit configuration is
remote_flush). However, the walsenders corresponding to the async replicas
on the primary don't wait for that flush acknowledgment and send the WAL to
the async standbys (and to any logical replication/decoding clients). So it
is possible for the async replicas and logical clients to be ahead of the
sync replica. If a failover is initiated in such a scenario, to bring the
formation into a healthy state we have to either

- run pg_rewind on the async replicas for them to reconnect with the new
primary, or
- collect the latest WAL across the replicas and feed the standby.

Both these operations are involved, error prone, and can cause multiple
minutes of downtime if done manually. In addition, there is a window where
the async replicas can show data that was neither acknowledged to the
client nor committed on the standby. Logical clients, if they are ahead,
may need to reseed the data as there is no easy rewind option for them.

I would like to propose a GUC send_Wal_after_quorum_committed which, when
set to ON, makes walsenders corresponding to async standbys and logical
replication workers wait until the LSN is quorum committed on the primary
before sending it to the standby. This not only simplifies the
post-failover steps but avoids unnecessary downtime for the async replicas.
Thoughts?
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:
> I would like to propose a GUC send_Wal_after_quorum_committed which
> when set to ON, walsenders corresponds to async standbys and logical
> replication workers wait until the LSN is quorum committed on the
> primary before sending it to the standby. This not only simplifies
> the post failover steps but avoids unnecessary downtime for the async
> replicas. Thoughts?
Do we need a GUC? Or should we just always require that sync rep is
satisfied before sending to async replicas?
It feels like the sync quorum should always be ahead of the async
replicas. Unless I'm missing a use case, or there is some kind of
performance gotcha.
Regards,
Jeff Davis
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
At Thu, 6 Jan 2022 23:55:01 -0800, SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote in
> On Thu, Jan 6, 2022 at 11:24 PM Jeff Davis <pgsql@j-davis.com> wrote:
>
> > On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:
> > > I would like to propose a GUC send_Wal_after_quorum_committed which
> > > when set to ON, walsenders corresponds to async standbys and logical
> > > replication workers wait until the LSN is quorum committed on the
> > > primary before sending it to the standby. This not only simplifies
> > > the post failover steps but avoids unnecessary downtime for the async
> > > replicas. Thoughts?
> >
> > Do we need a GUC? Or should we just always require that sync rep is
> > satisfied before sending to async replicas?
> >
>
> I proposed a GUC to not introduce a behavior change by default. I have no
> strong opinion on having a GUC or making the proposed behavior default,
> would love to get others' perspectives as well.
>
>
> >
> > It feels like the sync quorum should always be ahead of the async
> > replicas. Unless I'm missing a use case, or there is some kind of
> > performance gotcha.
> >
>
> I couldn't think of a case that can cause serious performance issues but
> will run some experiments on this and post the numbers.
I think Jeff is saying that "quorum commit" already by definition
means that all out-of-quorum standbys are behind the quorum
standbys. I agree with that in a dictionary sense. But I can
think of the case where the response from the top-runner standby
vanishes or gets caught somewhere on the network for some reason. In that
case the primary happily checks quorum ignoring the top-runner.

To avoid that misdecision, I can guess two possible "solutions".
One is to serialize WAL sending (which is of course unacceptable),
and the other is to send WAL to all standbys at once, then make the
decision only after making sure we have received replies from all
standbys (this is no longer quorum commit, in another sense..).

So I'm afraid that there's no sensible solution to avoid the
hiding-forerunner problem on quorum commit.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
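For readers following along, here is a standalone sketch (illustration
only, not PostgreSQL's actual syncrep code) of why quorum commit says
nothing about unreported standbys: with ANY n (...) semantics, the
quorum-committed LSN is effectively the n-th highest flush LSN *reported*
back by the candidate standbys.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint64_t XLogRecPtr;

static int
cmp_desc(const void *a, const void *b)
{
    XLogRecPtr x = *(const XLogRecPtr *) a;
    XLogRecPtr y = *(const XLogRecPtr *) b;

    return (x < y) - (x > y);   /* sort descending */
}

/* quorum-committed LSN for ANY n (...): the n-th highest reported reply */
static XLogRecPtr
quorum_commit_lsn(XLogRecPtr *reported, int nstandbys, int n)
{
    qsort(reported, nstandbys, sizeof(XLogRecPtr), cmp_desc);
    return reported[n - 1];
}

int
main(void)
{
    /* replies the primary has received; a faster standby may simply not
     * have reported yet, and nothing here accounts for it */
    XLogRecPtr reported[] = {0x400, 0x100, 0x300};

    printf("ANY 2 of 3 -> quorum-committed LSN: %#llx\n",
           (unsigned long long) quorum_commit_lsn(reported, 3, 2));
    return 0;
}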
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On 1/6/22, 11:25 PM, "Jeff Davis" <pgsql@j-davis.com> wrote:
> On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:
>> I would like to propose a GUC send_Wal_after_quorum_committed which
>> when set to ON, walsenders corresponds to async standbys and logical
>> replication workers wait until the LSN is quorum committed on the
>> primary before sending it to the standby. This not only simplifies
>> the post failover steps but avoids unnecessary downtime for the async
>> replicas. Thoughts?
>
> Do we need a GUC? Or should we just always require that sync rep is
> satisfied before sending to async replicas?
>
> It feels like the sync quorum should always be ahead of the async
> replicas. Unless I'm missing a use case, or there is some kind of
> performance gotcha.

I don't have a strong opinion on whether there needs to be a GUC, but
+1 for the ability to enforce sync quorum before sending WAL to async
standbys. I think that would be a reasonable default behavior.

Nathan
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On Fri, Jan 7, 2022 at 12:54 PM Jeff Davis <pgsql@j-davis.com> wrote:
>
> On Wed, 2022-01-05 at 23:59 -0800, SATYANARAYANA NARLAPURAM wrote:
> > I would like to propose a GUC send_Wal_after_quorum_committed which
> > when set to ON, walsenders corresponds to async standbys and logical
> > replication workers wait until the LSN is quorum committed on the
> > primary before sending it to the standby. This not only simplifies
> > the post failover steps but avoids unnecessary downtime for the async
> > replicas. Thoughts?
>
> Do we need a GUC? Or should we just always require that sync rep is
> satisfied before sending to async replicas?
>
> It feels like the sync quorum should always be ahead of the async
> replicas. Unless I'm missing a use case, or there is some kind of
> performance gotcha.

IMO, having a GUC is a reasonable choice because some users might be okay
with their async replicas being ahead of the sync ones, or they may have
dealt with this problem already in their HA solutions, or they don't want
their async replicas to fall behind the primary (most of the time).

If there are long-running txns on the primary and the async standbys were
to wait until quorum commit from the sync standbys, won't they fall behind
the primary by too much? This isn't a problem at all if we think from the
perspective that async replicas are anyway prone to falling behind the
primary. But if the primary has long-running txns continuously, the async
replicas would eventually fall further and further behind.

Is there a way we can send the WAL records to both sync and async replicas
together, but the async replicas won't apply those WAL records until the
primary tells the standbys that quorum commit is obtained? If the quorum
commit isn't obtained by the primary, the async replicas can skip applying
the WAL records and discard them.

Regards,
Bharath Rupireddy.
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On Sat, 2022-01-08 at 00:13 +0530, Bharath Rupireddy wrote:
> If there are long running txns on the primary and the async standbys
> were to wait until quorum commit from sync standbys, won't they fall
> behind the primary by too much?

No, because replication is based on LSNs, not transactions. With the
proposed change: an LSN can be replicated to all sync replicas as soon as
it's durable on the primary; and an LSN can be replicated to all async
replicas as soon as it's durable on the primary *and* the sync rep quorum
is satisfied.

Regards,
Jeff Davis
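Jeff's two rules can be written down directly. A minimal sketch (assumed,
simplified predicates - not from any patch in this thread):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

/* a sync replica may receive an LSN as soon as it is durable locally */
static bool
can_send_to_sync(XLogRecPtr lsn, XLogRecPtr local_flush)
{
    return lsn <= local_flush;
}

/* an async replica additionally waits for the sync quorum to confirm it */
static bool
can_send_to_async(XLogRecPtr lsn, XLogRecPtr local_flush, XLogRecPtr quorum)
{
    return lsn <= local_flush && lsn <= quorum;
}

int
main(void)
{
    XLogRecPtr local_flush = 0x500, quorum = 0x300, lsn = 0x400;

    /* durable locally but not yet quorum-confirmed: sync yes, async no */
    printf("sync: %d, async: %d\n",
           can_send_to_sync(lsn, local_flush),
           can_send_to_async(lsn, local_flush, quorum));
    return 0;
}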
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
Hi,

On 2022-01-06 23:24:40 -0800, Jeff Davis wrote:
> It feels like the sync quorum should always be ahead of the async
> replicas. Unless I'm missing a use case, or there is some kind of
> performance gotcha.

I don't see how it can *not* cause a major performance / latency gotcha.
You're deliberately delaying replication after all?

Synchronous replication doesn't guarantee *anything* about the ability of
other replicas to fail over. Nor would it after what's proposed here -
another sync replica would still not be guaranteed to be able to follow
the newly promoted primary.

To me this just sounds like trying to shoehorn something into syncrep
that it's not made for.

Greetings,

Andres Freund
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On Fri, 2022-01-07 at 12:22 -0800, Andres Freund wrote:
> I don't see how it can *not* cause a major performance / latency
> gotcha. You're deliberately delaying replication after all?

Are there use cases where someone wants sync rep, and also wants their
read replicas to get ahead of the sync rep quorum? If the use case
doesn't exist, it doesn't make sense to talk about how well it performs.

> another sync replica would still not be guaranteed to be able to
> follow the newly promoted primary.

If you only promote the furthest-ahead sync replica (which is what you
should be doing if you have quorum commit), why wouldn't that work?

> To me this just sounds like trying to shoehorn something into syncrep
> that it's not made for.

What *is* sync rep made for?

The only justification in the docs is around durability:

"[sync rep] extends that standard level of durability offered by a
transaction commit... [sync rep] can provide a much higher level of
durability..."

If we take that at face value, then it seems logical to say that async
read replicas should not get ahead of sync replicas.

Regards,
Jeff Davis
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
Hi,

On 2022-01-07 14:36:46 -0800, Jeff Davis wrote:
> On Fri, 2022-01-07 at 12:22 -0800, Andres Freund wrote:
> > I don't see how it can *not* cause a major performance / latency
> > gotcha. You're deliberately delaying replication after all?
>
> Are there use cases where someone wants sync rep, and also wants their
> read replicas to get ahead of the sync rep quorum?

Yes. Not in the sense of being ahead of the sync replicas, but in the
sense of being as caught up as possible, and to keep the WAL lost in case
of crashes as low as possible.

> > another sync replica would still not be guaranteed to be able to
> > follow the newly promoted primary.
>
> If you only promote the furthest-ahead sync replica (which is what you
> should be doing if you have quorum commit), why wouldn't that work?

Remove "sync" from the above sentence, and the sentence holds true for
combinations of sync/async replicas as well.

> > To me this just sounds like trying to shoehorn something into syncrep
> > that it's not made for.
>
> What *is* sync rep made for?
>
> The only justification in the docs is around durability:
>
> "[sync rep] extends that standard level of durability offered by a
> transaction commit... [sync rep] can provide a much higher level of
> durability..."

What is being proposed here doesn't increase durability. It *reduces* it -
it's less likely that WAL is replicated before a crash.

This is especially relevant in cases where synchronous_commit=on vs local
is used selectively - after this change the durability of local changes is
very substantially *reduced*, because they have to wait for the sync
replicas before also being replicated to async replicas, while the COMMIT
doesn't wait for replication. So this "feature" just reduces the
durability of such commits.

The performance overhead of syncrep is high enough that plenty of
real-world usages cannot afford to use it for all transactions. And that's
normally fine from a business-logic POV - often the majority of changes
aren't that important. It's non-trivial from an application implementation
POV though, but that's imo a separate concern.

> If we take that at face value, then it seems logical to say that async
> read replicas should not get ahead of sync replicas.

I don't see that. This presumes that WAL replicated to async replicas is
somehow bad. But pg_rewind exists, async replicas can be promoted, and WAL
from the async replicas can be transferred to the synchronous replicas if
only those should be promoted.

Greetings,

Andres Freund
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
On Fri, 2022-01-07 at 14:54 -0800, Andres Freund wrote:
> > If you only promote the furthest-ahead sync replica (which is what
> > you
> > should be doing if you have quorum commit), why wouldn't that work?
>
> Remove "sync" from the above sentence, and the sentence holds true
> for
> combinations of sync/async replicas as well.
Technically that's true, but it seems like a bit of a strange use case.
I would think people doing that would just include those async replicas
in the sync quorum instead.
The main case I can think of for a mix of sync and async replicas is
if they are just managed differently. For instance, the sync replica
quorum is managed for a core part of the system, strategically
allocated on good hardware in different locations to minimize the
chance of dependent failures; while the async read replicas are
optional for taking load off the primary, and may appear/disappear in
whatever location and on whatever hardware is most convenient.
But if an async replica can get ahead of the sync rep quorum, then the
most recent transactions can appear in query results, so that means the
WAL shouldn't be lost, and the async read replicas become a part of the
durability model.
If the async read replica can't be promoted because it's not suitable
(due to location, hardware, whatever), then you need to frantically
copy the final WAL records out to an instance in the sync rep quorum.
That requires extra ceremony for every failover, and might be dubious
depending on how safe the WAL on your async read replicas is, and
whether there are dependent failure risks.
Yeah, I guess there could be some use case woven amongst those caveats,
but I'm not sure if anyone is actually doing that combination of things
safely today. If someone is, it would be interesting to know more about
that use case.
The proposal in this thread is quite a bit simpler: manage your sync
quorum and your async read replicas separately, and keep the sync rep
quorum ahead.
> > > To me this just sounds like trying to shoehorn something into
> > > syncrep that it's not made for.
> >
> > What *is* sync rep made for?
This was a sincere question and an answer would be helpful. I think
many of the discussions about sync rep get derailed because people have
different ideas about when and how it should be used, and the
documentation is pretty light.
> This is especially relevant in cases where synchronous_commit=on vs
> local is used selectively
That's an interesting point.
However, it's hard for me to reason about "kinda durable" and "a little
more durable" and I'm not sure how many people would care about that
distinction.
> I don't see that. This presumes that WAL replicated to async replicas
> is somehow bad.
Simple case: primary and async read replica are in the same server
rack. Sync replicas are geographically distributed with quorum commit.
Read replica gets the WAL first (because it's closest), starts
answering queries that include that WAL, and then the entire rack
catches fire. Now you've returned results to the client, but lost the
transactions.
Regards,
Jeff Davis
Re: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers
At Fri, 7 Jan 2022 09:44:15 -0800, SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote in
> On Fri, Jan 7, 2022 at 12:27 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:
> > One is to serialize WAL sending (which is of course unacceptable),
> > and the other is to send WAL to all standbys at once, then make the
> > decision only after making sure we have received replies from all
> > standbys (this is no longer quorum commit, in another sense..)
>
> There is no need to serialize sending the WAL among sync standbys. The only
> serialization required is first to all the sync replicas and then to the
> async replicas, if any. Once an LSN is quorum committed, no failover
> subsystem initiates an automatic failover such that the LSN is lost (data
> loss)

Sync standbys in PostgreSQL are determined ex post facto. When a certain
set of standbys has first reported catching up for a commit, they are
called "sync standbys". We could maintain a fixed set of sync standbys
based on the set of sync standbys at past commits, but that implies
performance degradation even if not a single standby is gone. If we send
WAL only to the fixed set of sync standbys, then when any of those
standbys is gone, the primary is forced to wait until some timeout
expires. The same commit would finish immediately if WAL had been sent
also to out-of-quorum standbys.

> > So I'm afraid that there's no sensible solution to avoid the
> > hiding-forerunner problem on quorum commit.
>
> Could you elaborate on the problem here?

If a primary has received responses for LSN=X from N standbys, that fact
doesn't guarantee that none of the other standbys has reached the same
LSN. If one of the yet-unresponded standbys has already reached LSN=X+10
but its response hasn't arrived at the primary for some reason, the true
fastest standby is hidden from the primary. Even if the primary examines
the responses from all standbys, it is uncertain whether the responses
reflect the truly current state of the standbys. Thus, if we want to
guarantee that no unresponded standby has gone beyond LSN=X, there's no
means other than refraining from sending WAL beyond X. In that case, we
need to serialize the period from WAL-sending to response-reception, which
would lead to critical performance degradation.

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center
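The "hidden forerunner" is easy to demonstrate. A standalone toy
simulation (illustration only; the LSN values are invented):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

#define UNREPORTED 0    /* reply lost or delayed on the network */

int
main(void)
{
    /* actual write positions on three standbys */
    XLogRecPtr actual[]   = {0x400, 0x350, 0x500};
    /* what the primary has heard back; standby 2's reply went missing */
    XLogRecPtr reported[] = {0x400, 0x350, UNREPORTED};
    XLogRecPtr best_known = 0, best_actual = 0;

    for (int i = 0; i < 3; i++)
    {
        if (reported[i] > best_known)
            best_known = reported[i];
        if (actual[i] > best_actual)
            best_actual = actual[i];
    }

    /* the primary believes 0x400 is the furthest standby; 0x500 exists */
    printf("primary's view: %#llx, reality: %#llx\n",
           (unsigned long long) best_known,
           (unsigned long long) best_actual);
    return 0;
}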
Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Thu, Jan 6, 2022 at 1:29 PM SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com> wrote:
>
> Consider a cluster formation where we have a Primary (P), a Sync Replica
> (S1), and multiple async replicas for disaster recovery and read scaling
> (within the region and outside the region). In this setup, S1 is the
> preferred failover target in the event of a primary failure. When a
> transaction is committed on the primary, it is not acknowledged to the
> client until the primary gets an acknowledgment from the sync standby that
> the WAL is flushed to disk (assume the synchronous_commit configuration is
> remote_flush). However, the walsenders corresponding to the async replicas
> on the primary don't wait for that flush acknowledgment and send the WAL
> to the async standbys (and to any logical replication/decoding clients).
> So it is possible for the async replicas and logical clients to be ahead
> of the sync replica. If a failover is initiated in such a scenario, to
> bring the formation into a healthy state we have to either
>
> - run pg_rewind on the async replicas for them to reconnect with the new
>   primary, or
> - collect the latest WAL across the replicas and feed the standby.
>
> Both these operations are involved, error prone, and can cause multiple
> minutes of downtime if done manually. In addition, there is a window where
> the async replicas can show data that was neither acknowledged to the
> client nor committed on the standby. Logical clients, if they are ahead,
> may need to reseed the data as there is no easy rewind option for them.
>
> I would like to propose a GUC send_Wal_after_quorum_committed which, when
> set to ON, makes walsenders corresponding to async standbys and logical
> replication workers wait until the LSN is quorum committed on the primary
> before sending it to the standby. This not only simplifies the
> post-failover steps but avoids unnecessary downtime for the async
> replicas. Thoughts?

Thanks Satya and others for the inputs. Here's the v1 patch that basically
allows async wal senders to wait until the sync standbys report their flush
LSN back to the primary. Please let me know your thoughts.

I've done pgbench testing to see if the patch causes any problems. I ran
the tests two times; there isn't much difference in the transactions per
second (tps), although there's a delay in the async standby receiving the
WAL - after all, that's the feature we are pursuing. [1]

[1]
HEAD or WITHOUT PATCH:
./pgbench -c 10 -t 500 -P 10 testdb
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 500
number of transactions actually processed: 5000/5000
latency average = 247.395 ms
latency stddev = 74.409 ms
initial connection time = 13.622 ms
tps = 39.713114 (without initial connection time)

PATCH:
./pgbench -c 10 -t 500 -P 10 testdb
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 500
number of transactions actually processed: 5000/5000
latency average = 251.757 ms
latency stddev = 72.846 ms
initial connection time = 13.025 ms
tps = 39.315862 (without initial connection time)

TEST SETUP:
primary in region 1
async standby 1 in the same region as the primary, region 1, i.e. close to
the primary
sync standby 1 in region 2
sync standby 2 in region 3
an archive location in a region different from the primary and standby
regions, region 4

Note that I intentionally kept the sync standbys in regions far from the
primary because it lets the sync standbys receive WAL a bit late by
default, which works well for our testing.

PGBENCH SETUP:
./psql -d postgres -c "drop database testdb"
./psql -d postgres -c "create database testdb"
./pgbench -i -s 100 testdb
./psql -d testdb -c "\dt"
./psql -d testdb -c "SELECT pg_size_pretty(pg_database_size('testdb'))"
./pgbench -c 10 -t 500 -P 10 testdb

Regards,
Bharath Rupireddy.
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:
> Thanks Satya and others for the inputs. Here's the v1 patch that
> basically allows async wal senders to wait until the sync standbys
> report their flush LSN back to the primary. Please let me know your
> thoughts.

I haven't had a chance to look too closely yet, but IIUC this adds a new
function that waits for synchronous replication. This new function
essentially spins until the synchronous LSN has advanced.

I don't think it's a good idea to block sending any WAL like this. AFAICT
it is possible that there will be a lot of synchronously replicated WAL
that we can send, and it might just be the last several bytes that cannot
yet be replicated to the asynchronous standbys. I believe this patch will
cause the server to avoid sending _any_ WAL until the synchronous LSN
advances.

Perhaps we should instead just choose the SendRqstPtr based on the current
synchronous LSN. Presumably there are other things we'd need to consider,
but in general, I think we ought to send as much WAL as possible for a
given call to XLogSendPhysical().

> I've done pgbench testing to see if the patch causes any problems. I
> ran the tests two times; there isn't much difference in the transactions
> per second (tps), although there's a delay in the async standby receiving
> the WAL - after all, that's the feature we are pursuing.

I'm curious what a longer pgbench run looks like when the synchronous
replicas are in the same region. That is probably a more realistic
use-case.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
Hello,

On 2/25/22 11:38 AM, Nathan Bossart wrote:
> On Fri, Feb 25, 2022 at 08:31:37PM +0530, Bharath Rupireddy wrote:
>> Thanks Satya and others for the inputs. Here's the v1 patch that
>> basically allows async wal senders to wait until the sync standbys
>> report their flush LSN back to the primary. Please let me know your
>> thoughts.
> I haven't had a chance to look too closely yet, but IIUC this adds a new
> function that waits for synchronous replication. This new function
> essentially spins until the synchronous LSN has advanced.
>
> I don't think it's a good idea to block sending any WAL like this. AFAICT
> it is possible that there will be a lot of synchronously replicated WAL
> that we can send, and it might just be the last several bytes that cannot
> yet be replicated to the asynchronous standbys. I believe this patch will
> cause the server to avoid sending _any_ WAL until the synchronous LSN
> advances.
>
> Perhaps we should instead just choose the SendRqstPtr based on the current
> synchronous LSN. Presumably there are other things we'd need to consider,
> but in general, I think we ought to send as much WAL as possible for a
> given call to XLogSendPhysical().

I think you're right that we'll avoid sending any WAL until sync_lsn
advances. We could set up a contrived situation where the async-walsender
never advances because it terminates before the flush_lsn of the
synchronous node catches up. And when the async-walsender restarts, it'll
start with the latest flushed on the primary and we could go into a
perpetual loop.

I took a look at the patch and tested basic streaming with async replicas
ahead of the synchronous standby and with logical clients as well, and it
works as expected.

> ereport(LOG,
>         (errmsg("async standby WAL sender with request LSN %X/%X is waiting as sync standbys are ahead with flush LSN %X/%X",
>                 LSN_FORMAT_ARGS(flushLSN), LSN_FORMAT_ARGS(sendRqstPtr)),
>          errhidestmt(true)));

I think this log formatting is incorrect.
s/sync standbys are ahead/sync standbys are behind/ and I think you need
to swap flushLSN and sendRqstPtr.

When a walsender is waiting for the LSN on the synchronous replica to
advance and a database stop is issued to the writer, the pg_ctl stop isn't
able to proceed and the database seems to never shut down.

> Assert(priority >= 0);

What's the point of the assert here?

Also, the comments/code refer to AsyncStandbys, however it's also used for
logical clients, which may or may not be standbys. I don't feel too
strongly about the naming here but something to note.

> if (!ShouldWaitForSyncRepl())
>     return;
> ...
> for (;;)
> {
>     // rest of work
> }

If we had a walsender already waiting for an ack, and the conditions of
ShouldWaitForSyncRepl() change, such as disabling
async_standbys_wait_for_sync_replication or synchronous replication, it'll
still wait since we never re-check the condition.

postgres=# select wait_event from pg_stat_activity where wait_event like 'AsyncWal%';
              wait_event
--------------------------------------
 AsyncWalSenderWaitForSyncReplication
 AsyncWalSenderWaitForSyncReplication
 AsyncWalSenderWaitForSyncReplication
(3 rows)

postgres=# show synchronous_standby_names;
 synchronous_standby_names
---------------------------

(1 row)

postgres=# show async_standbys_wait_for_sync_replication;
 async_standbys_wait_for_sync_replication
-------------------------------------------
 off
(1 row)

> LWLockAcquire(SyncRepLock, LW_SHARED);
> flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];
> LWLockRelease(SyncRepLock);

Should we configure this similarly to the user's setting of
synchronous_commit instead of just flush? (SYNC_REP_WAIT_WRITE,
SYNC_REP_WAIT_APPLY)

Thanks,
John H
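The re-check John asks for amounts to re-evaluating the gating condition on
every loop iteration. A standalone sketch of that shape (hypothetical names
throughout; the stubs stand in for server state, and a real walsender would
sleep on a latch rather than busy-loop):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

/* stubs standing in for server state */
static bool wait_for_sync_rep = true;      /* the proposed GUC */
static XLogRecPtr sync_flush_lsn = 0x100;  /* quorum-confirmed flush LSN */

static bool
should_wait_for_sync_repl(void)
{
    /* in the server, this would see fresh values after a SIGHUP reload */
    return wait_for_sync_rep;
}

static void
simulate_progress(void)
{
    sync_flush_lsn += 0x80;     /* pretend sync standbys acknowledged more */
}

int
main(void)
{
    XLogRecPtr send_request_lsn = 0x300;

    for (;;)
    {
        /* re-check every iteration: a SIGHUP that disables the GUC or
         * empties synchronous_standby_names must release the waiter */
        if (!should_wait_for_sync_repl())
            break;

        if (sync_flush_lsn >= send_request_lsn)
            break;

        simulate_progress();    /* in the server: wait on a latch instead */
    }

    printf("released at sync flush LSN %#llx\n",
           (unsigned long long) sync_flush_lsn);
    return 0;
}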
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Sat, Feb 26, 2022 at 1:08 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> I haven't had a chance to look too closely yet, but IIUC this adds a new
> function that waits for synchronous replication. This new function
> essentially spins until the synchronous LSN has advanced.
>
> I don't think it's a good idea to block sending any WAL like this. AFAICT
> it is possible that there will be a lot of synchronously replicated WAL
> that we can send, and it might just be the last several bytes that cannot
> yet be replicated to the asynchronous standbys. I believe this patch will
> cause the server to avoid sending _any_ WAL until the synchronous LSN
> advances.
>
> Perhaps we should instead just choose the SendRqstPtr based on the current
> synchronous LSN. Presumably there are other things we'd need to consider,
> but in general, I think we ought to send as much WAL as possible for a
> given call to XLogSendPhysical().

A global min LSN of SendRqstPtr across all the sync standbys can be
calculated, and the async standbys can send WAL up to that global min LSN.
This is unlike what the v1 patch does, i.e. async standbys waiting until
the sync standbys report their flush LSN back to the primary. The problem
with the global min LSN approach is that there can still be a small window
where async standbys get ahead of sync standbys. Imagine async standbys
being closer to the primary than the sync standbys: if the failover has to
happen while the WAL at SendRqstPtr hasn't been received by the sync
standbys, the async standbys may still have received it, as they are
closer. We hit the same problem that we are trying to solve with this
patch. This is the reason we are waiting for the sync flush LSN, as it
guarantees more transactional protection.

Do you think we should allow async standbys to optionally wait for either
remote write, flush, apply, or the global min LSN of SendRqstPtr, so that
users can choose what they want?

> > I've done pgbench testing to see if the patch causes any problems. I
> > ran the tests two times; there isn't much difference in the transactions
> > per second (tps), although there's a delay in the async standby
> > receiving the WAL - after all, that's the feature we are pursuing.
>
> I'm curious what a longer pgbench run looks like when the synchronous
> replicas are in the same region. That is probably a more realistic
> use-case.

We are performing more tests; I will share the results once done.

Regards,
Bharath Rupireddy.
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Sat, Feb 26, 2022 at 3:22 AM Hsu, John <hsuchen@amazon.com> wrote:
> > I don't think it's a good idea to block sending any WAL like this. AFAICT
> > it is possible that there will be a lot of synchronously replicated WAL
> > that we can send, and it might just be the last several bytes that cannot
> > yet be replicated to the asynchronous standbys. I believe this patch will
> > cause the server to avoid sending _any_ WAL until the synchronous LSN
> > advances.
>
> I think you're right that we'll avoid sending any WAL until sync_lsn
> advances. We could set up a contrived situation where the async-walsender
> never advances because it terminates before the flush_lsn of the
> synchronous node catches up. And when the async-walsender restarts, it'll
> start with the latest flushed on the primary and we could go into a
> perpetual loop.

The async walsender looks at the flush LSN from
walsndctl->lsn[SYNC_REP_WAIT_FLUSH] after it comes up and decides to send
the WAL up to it. If there are no sync replicas after it comes up (users
can make sync standbys async without a postmaster restart because
synchronous_standby_names is effective with SIGHUP), then it doesn't wait
at all and continues to send WAL. I don't see any problem with it. Am I
missing something here?

> I took a look at the patch and tested basic streaming with async replicas
> ahead of the synchronous standby and with logical clients as well, and it
> works as expected.

Thanks for reviewing and testing the patch.

> > ereport(LOG,
> >         (errmsg("async standby WAL sender with request LSN %X/%X is waiting as sync standbys are ahead with flush LSN %X/%X",
> >                 LSN_FORMAT_ARGS(flushLSN), LSN_FORMAT_ARGS(sendRqstPtr)),
> >          errhidestmt(true)));
>
> I think this log formatting is incorrect.
> s/sync standbys are ahead/sync standbys are behind/ and I think you need
> to swap flushLSN and sendRqstPtr.

I will correct it. "async standby WAL sender with request LSN %X/%X is
waiting as sync standbys are ahead with flush LSN %X/%X",
LSN_FORMAT_ARGS(sendRqstPtr), LSN_FORMAT_ARGS(flushLSN). I will think more
about having better wording of these messages; any suggestions here?

> When a walsender is waiting for the LSN on the synchronous replica to
> advance and a database stop is issued to the writer, the pg_ctl stop
> isn't able to proceed and the database seems to never shut down.

I too observed this once or twice. It looks like the walsender isn't
detecting postmaster death in for (;;) with WalSndWait. Not sure if this
is expected or true of other wait-loops in walsender code. Any more
thoughts here?

> > Assert(priority >= 0);
>
> What's the point of the assert here?

Just for safety. I can remove it, as sync_standby_priority can never be
negative.

> Also, the comments/code refer to AsyncStandbys, however it's also used
> for logical clients, which may or may not be standbys. I don't feel too
> strongly about the naming here but something to note.

I will try to be more informative by adding something like "async standbys
and logical replication subscribers".

> > if (!ShouldWaitForSyncRepl())
> >     return;
> > ...
> > for (;;)
> > {
> >     // rest of work
> > }
>
> If we had a walsender already waiting for an ack, and the conditions of
> ShouldWaitForSyncRepl() change, such as disabling
> async_standbys_wait_for_sync_replication or synchronous replication, it'll
> still wait since we never re-check the condition.

Yeah, I will add the checks inside the async walsender wait-loop.

> postgres=# select wait_event from pg_stat_activity where wait_event like 'AsyncWal%';
>               wait_event
> --------------------------------------
>  AsyncWalSenderWaitForSyncReplication
>  AsyncWalSenderWaitForSyncReplication
>  AsyncWalSenderWaitForSyncReplication
> (3 rows)
>
> postgres=# show synchronous_standby_names;
>  synchronous_standby_names
> ---------------------------
>
> (1 row)
>
> postgres=# show async_standbys_wait_for_sync_replication;
>  async_standbys_wait_for_sync_replication
> -------------------------------------------
>  off
> (1 row)
>
> > LWLockAcquire(SyncRepLock, LW_SHARED);
> > flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];
> > LWLockRelease(SyncRepLock);
>
> Should we configure this similarly to the user's setting of
> synchronous_commit instead of just flush? (SYNC_REP_WAIT_WRITE,
> SYNC_REP_WAIT_APPLY)

As I said upthread, we can allow async standbys to optionally wait for
either remote write or flush or apply or the global min LSN of SendRqstPtr,
so that users can choose what they want. I'm open to more thoughts here.

Regards,
Bharath Rupireddy.
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Sat, Feb 26, 2022 at 02:17:50PM +0530, Bharath Rupireddy wrote:
> A global min LSN of SendRqstPtr across all the sync standbys can be
> calculated, and the async standbys can send WAL up to that global min
> LSN. This is unlike what the v1 patch does, i.e. async standbys waiting
> until the sync standbys report their flush LSN back to the primary. The
> problem with the global min LSN approach is that there can still be a
> small window where async standbys get ahead of sync standbys. Imagine
> async standbys being closer to the primary than the sync standbys: if
> the failover has to happen while the WAL at SendRqstPtr hasn't been
> received by the sync standbys, the async standbys may still have
> received it, as they are closer. We hit the same problem that we are
> trying to solve with this patch. This is the reason we are waiting for
> the sync flush LSN, as it guarantees more transactional protection.

Do you mean that the application of WAL gets ahead on your async standbys,
or that the writing/flushing of WAL gets ahead? If synchronous_commit is
set to 'remote_write' or 'on', I think either approach can lead to
situations where the async standbys are ahead of the sync standbys with
WAL application. For example, a conflict between WAL replay and a query on
your sync standby could delay WAL replay, but the primary will not wait
for this conflict to resolve before considering a transaction
synchronously replicated and sending it to the async standbys.

If writing/flushing WAL gets ahead on async standbys, I think something is
wrong with the patch. If you aren't sending WAL to async standbys until it
is synchronously replicated to the sync standbys, it should by definition
be impossible for this to happen.

If you wanted to make sure that WAL was not applied to async standbys
before it was applied to sync standbys, I think you'd need to set
synchronous_commit to 'remote_apply'. This would ensure that the WAL is
replayed on sync standbys before the primary considers the transaction
synchronously replicated and sends it to the async standbys.

> Do you think we should allow async standbys to optionally wait for
> either remote write, flush, apply, or the global min LSN of SendRqstPtr,
> so that users can choose what they want?

I'm not sure I follow the difference between "global min LSN of
SendRqstPtr" and remote write/flush/apply. IIUC you are saying that we
could use the LSN of what is being sent to sync standbys instead of the
LSN of what the primary considers synchronously replicated. I don't think
we should do that because it provides no guarantee that the WAL has even
been sent to the sync standbys before it is sent to the async standbys.
For this feature, I think we always need to consider what the primary
considers synchronously replicated. My suggested approach doesn't change
that. I'm saying that instead of spinning in a loop waiting for the WAL to
be synchronously replicated, we just immediately send WAL up to the LSN
that is presently known to be synchronously replicated.

You do bring up an interesting point, though. Is there a use-case for
specifying synchronous_commit='on' but not sending WAL to async replicas
until it is synchronously applied? Or alternatively, would anyone want to
set synchronous_commit='remote_apply' but send WAL to async standbys as
soon as it is written to the sync standbys? My initial reaction is that we
should depend on the synchronous replication setup. As long as the primary
considers an LSN synchronously replicated, it would be okay to send it to
the async standbys. I personally don't think it is worth taking on the
extra complexity for that level of configuration just yet.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> Do you mean that the application of WAL gets ahead on your async standbys,
> or that the writing/flushing of WAL gets ahead? If synchronous_commit is
> set to 'remote_write' or 'on', I think either approach can lead to
> situations where the async standbys are ahead of the sync standbys with
> WAL application. For example, a conflict between WAL replay and a query on
> your sync standby could delay WAL replay, but the primary will not wait
> for this conflict to resolve before considering a transaction
> synchronously replicated and sending it to the async standbys.
>
> If writing/flushing WAL gets ahead on async standbys, I think something is
> wrong with the patch. If you aren't sending WAL to async standbys until it
> is synchronously replicated to the sync standbys, it should by definition
> be impossible for this to happen.

With the v1 patch [1], the async standbys will never get WAL ahead of the
sync standbys. That is guaranteed because the walsenders serving async
standbys are allowed to send WAL only after the walsenders serving sync
standbys receive the synchronous flush LSN.

> > Do you think we should allow async standbys to optionally wait for
> > either remote write, flush, apply, or the global min LSN of
> > SendRqstPtr, so that users can choose what they want?
>
> I'm not sure I follow the difference between "global min LSN of
> SendRqstPtr" and remote write/flush/apply. IIUC you are saying that we
> could use the LSN of what is being sent to sync standbys instead of the
> LSN of what the primary considers synchronously replicated. I don't think
> we should do that because it provides no guarantee that the WAL has even
> been sent to the sync standbys before it is sent to the async standbys.

Correct.

> For this feature, I think we always need to consider what the primary
> considers synchronously replicated. My suggested approach doesn't change
> that. I'm saying that instead of spinning in a loop waiting for the WAL
> to be synchronously replicated, we just immediately send WAL up to the
> LSN that is presently known to be synchronously replicated.

As I said above, the v1 patch does that, i.e. async standbys wait until
the sync standbys update their flush LSN.

The flush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];
which gets updated in SyncRepReleaseWaiters.

Async standbys with their SendRqstPtr will wait in XLogSendPhysical or
XLogSendLogical until SendRqstPtr <= flushLSN.

I will address the review comments raised by Hsu, John and send the
updated patch for further review. Thanks.

[1] https://www.postgresql.org/message-id/CALj2ACVUa8WddVDS20QmVKNwTbeOQqy4zy59NPzh8NnLipYZGw%40mail.gmail.com

Regards,
Bharath Rupireddy.
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Mon, Feb 28, 2022 at 06:45:51PM +0530, Bharath Rupireddy wrote:
> On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:
>> For this feature, I think we always need to consider what the primary
>> considers synchronously replicated. My suggested approach doesn't change
>> that. I'm saying that instead of spinning in a loop waiting for the WAL
>> to be synchronously replicated, we just immediately send WAL up to the
>> LSN that is presently known to be synchronously replicated.
>
> As I said above, the v1 patch does that, i.e. async standbys wait until
> the sync standbys update their flush LSN.
>
> The flush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH];
> which gets updated in SyncRepReleaseWaiters.
>
> Async standbys with their SendRqstPtr will wait in XLogSendPhysical or
> XLogSendLogical until SendRqstPtr <= flushLSN.

My feedback is specifically about this behavior. I don't think we should
spin in XLogSend*() waiting for an LSN to be synchronously replicated. I
think we should just choose the SendRqstPtr based on what is currently
synchronously replicated.

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
> The async walsender looks at the flush LSN from
> walsndctl->lsn[SYNC_REP_WAIT_FLUSH] after it comes up and decides to
> send the WAL up to it. If there are no sync replicas after it comes
> up (users can make sync standbys async without a postmaster restart
> because synchronous_standby_names is effective with SIGHUP), then it
> doesn't wait at all and continues to send WAL. I don't see any problem
> with it. Am I missing something here?

Assuming I understand the code correctly, we have:

> SendRqstPtr = GetFlushRecPtr(NULL);

In this contrived example, let's say walsndctl->lsn[SYNC_REP_WAIT_FLUSH]
is always 60s behind GetFlushRecPtr(), and for whatever reason, if the
walsender hasn't replicated anything in 30s it'll terminate and
re-connect. If GetFlushRecPtr() keeps advancing and is always 60s ahead of
the sync LSNs, then we would never stream anything, even though it has
advanced past what was safe to stream previously.

> I will correct it. "async standby WAL sender with request LSN %X/%X is
> waiting as sync standbys are ahead with flush LSN %X/%X",
> LSN_FORMAT_ARGS(sendRqstPtr), LSN_FORMAT_ARGS(flushLSN). I will think
> more about having better wording of these messages; any suggestions
> here?

"async standby WAL sender with request LSN %X/%X is waiting for sync
standbys at LSN %X/%X to advance past it"

Not sure if that's really clearer...

> I too observed this once or twice. It looks like the walsender isn't
> detecting postmaster death in for (;;) with WalSndWait. Not sure if
> this is expected or true of other wait-loops in walsender code. Any
> more thoughts here?

Unfortunately I haven't had a chance to dig into it more, although iirc I
hit it fairly often.

Thanks,
John H
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > > On Mon, Feb 28, 2022 at 06:45:51PM +0530, Bharath Rupireddy wrote: > > On Sat, Feb 26, 2022 at 9:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote: > >> For > >> this feature, I think we always need to consider what the primary considers > >> synchronously replicated. My suggested approach doesn't change that. I'm > >> saying that instead of spinning in a loop waiting for the WAL to be > >> synchronously replicated, we just immediately send WAL up to the LSN that > >> is presently known to be synchronously replicated. > > > > As I said above, v1 patch does that i.e. async standbys wait until the > > sync standbys update their flush LSN. > > > > Flush LSN is this - flushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; > > which gets updated in SyncRepReleaseWaiters. > > > > Async standbys with their SendRqstPtr will wait in XLogSendPhysical or > > XLogSendLogical until SendRqstPtr <= flushLSN. > > My feedback is specifically about this behavior. I don't think we should > spin in XLogSend*() waiting for an LSN to be synchronously replicated. I > think we should just choose the SendRqstPtr based on what is currently > synchronously replicated. Do you mean something like the following? /* Main loop of walsender process that streams the WAL over Copy messages. */ static void WalSndLoop(WalSndSendDataCallback send_data) { /* * Loop until we reach the end of this timeline or the client requests to * stop streaming. */ for (;;) { if (am_async_walsender && there_are_sync_standbys) { XLogRecPtr SendRqstLSN; XLogRecPtr SyncFlushLSN; SendRqstLSN = GetFlushRecPtr(NULL); LWLockAcquire(SyncRepLock, LW_SHARED); SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; LWLockRelease(SyncRepLock); if (SendRqstLSN > SyncFlushLSN) continue; } if (!pq_is_send_pending()) send_data(); /* THIS IS WHERE XLogSendPhysical or XLogSendLogical gets called */ else WalSndCaughtUp = false; } Regards, Bharath Rupireddy.
Re: Allow async standbys wait for sync replication (was: Disallow quorum uncommitted (with synchronous standbys) txns in logical replication subscribers)
On Tue, Mar 01, 2022 at 11:10:09AM +0530, Bharath Rupireddy wrote: > On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote: >> My feedback is specifically about this behavior. I don't think we should >> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I >> think we should just choose the SendRqstPtr based on what is currently >> synchronously replicated. > > Do you mean something like the following? > > /* Main loop of walsender process that streams the WAL over Copy messages. */ > static void > WalSndLoop(WalSndSendDataCallback send_data) > { > /* > * Loop until we reach the end of this timeline or the client requests to > * stop streaming. > */ > for (;;) > { > if (am_async_walsender && there_are_sync_standbys) > { > XLogRecPtr SendRqstLSN; > XLogRecPtr SyncFlushLSN; > > SendRqstLSN = GetFlushRecPtr(NULL); > LWLockAcquire(SyncRepLock, LW_SHARED); > SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; > LWLockRelease(SyncRepLock); > > if (SendRqstLSN > SyncFlushLSN) > continue; > } Not quite. Instead of "continue", I would set SendRqstLSN to SyncFlushLSN so that the WAL sender only sends up to the current synchronously replicated LSN. TBH there are probably other things that need to be considered (e.g., how do we ensure that the WAL sender sends the rest once it is replicated?), but I still think we should avoid spinning in the WAL sender waiting for WAL to be replicated. -- Nathan Bossart Amazon Web Services: https://aws.amazon.com
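To make that concrete, here is a minimal sketch of the clamping Nathan describes, reusing the hypothetical am_async_walsender and there_are_sync_standbys flags from the snippet upthread (none of these are existing symbols in the tree):

    /* In XLogSendPhysical(), when deciding how far to send. */
    SendRqstPtr = GetFlushRecPtr(NULL);

    if (am_async_walsender && there_are_sync_standbys)
    {
        XLogRecPtr  SyncFlushLSN;

        LWLockAcquire(SyncRepLock, LW_SHARED);
        SyncFlushLSN = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH];
        LWLockRelease(SyncRepLock);

        /*
         * Clamp instead of waiting: send only the WAL that the primary
         * already considers synchronously replicated, and pick up the
         * rest on a later iteration once the sync flush LSN advances.
         */
        if (SendRqstPtr > SyncFlushLSN)
            SendRqstPtr = SyncFlushLSN;
    }

With this shape the walsender never blocks; the open question Nathan raises is how the walsender learns that the sync flush LSN has advanced, so that another send is triggered.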
(Now I understand what "async" means here...) At Mon, 28 Feb 2022 22:05:28 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in > On Tue, Mar 01, 2022 at 11:10:09AM +0530, Bharath Rupireddy wrote: > > On Tue, Mar 1, 2022 at 12:27 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > >> My feedback is specifically about this behavior. I don't think we should > >> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I > >> think we should just choose the SendRqstPtr based on what is currently > >> synchronously replicated. > > > > Do you mean something like the following? > > > > /* Main loop of walsender process that streams the WAL over Copy messages. */ > > static void > > WalSndLoop(WalSndSendDataCallback send_data) > > { > > /* > > * Loop until we reach the end of this timeline or the client requests to > > * stop streaming. > > */ > > for (;;) > > { > > if (am_async_walsender && there_are_sync_standbys) > > { > > XLogRecPtr SendRqstLSN; > > XLogRecPtr SyncFlushLSN; > > > > SendRqstLSN = GetFlushRecPtr(NULL); > > LWLockAcquire(SyncRepLock, LW_SHARED); > > SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; > > LWLockRelease(SyncRepLock); > > > > if (SendRqstLSN > SyncFlushLSN) > > continue; > > } The current trend is energy savings. We never add a "wait for some fixed time, then exit if the condition is met, otherwise repeat" loop for this kind of purpose where there's no guarantee that the loop exits reasonably soon. Concretely, we ought to rely on condition variables for that. > Not quite. Instead of "continue", I would set SendRqstLSN to SyncFlushLSN > so that the WAL sender only sends up to the current synchronously I'm not sure, but doesn't that make the walsender falsely believe it has caught up to the bleeding edge of the WAL? > replicated LSN. TBH there are probably other things that need to be > considered (e.g., how do we ensure that the WAL sender sends the rest once > it is replicated?), but I still think we should avoid spinning in the WAL > sender waiting for WAL to be replicated. It seems to me it would be something similar to SyncRepReleaseWaiters(). Or it could be possible to consolidate this feature into the function, I'm not sure, though. regards. -- Kyotaro Horiguchi NTT Open Source Software Center
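For reference, a minimal sketch of the condition-variable shape being suggested here, assuming a hypothetical syncFlushCV field added to WalSndCtlData and a targetLSN the walsender wants to send up to; the primitives themselves are the existing ones from storage/condition_variable.h:

    /* Async walsender: sleep until the sync flush LSN reaches targetLSN. */
    ConditionVariablePrepareToSleep(&WalSndCtl->syncFlushCV);
    for (;;)
    {
        XLogRecPtr  SyncFlushLSN;

        LWLockAcquire(SyncRepLock, LW_SHARED);
        SyncFlushLSN = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH];
        LWLockRelease(SyncRepLock);

        if (targetLSN <= SyncFlushLSN)
            break;

        ConditionVariableSleep(&WalSndCtl->syncFlushCV, WAIT_EVENT_SYNC_REP);
    }
    ConditionVariableCancelSleep();

    /* In SyncRepReleaseWaiters(), after lsn[SYNC_REP_WAIT_FLUSH] advances: */
    ConditionVariableBroadcast(&WalSndCtl->syncFlushCV);

Unlike a timed retry loop, the walsender sleeps until it is explicitly woken, so no cycles are burned while the sync standbys are catching up.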
On Tue, Mar 01, 2022 at 04:34:31PM +0900, Kyotaro Horiguchi wrote: > At Mon, 28 Feb 2022 22:05:28 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in >> replicated LSN. TBH there are probably other things that need to be >> considered (e.g., how do we ensure that the WAL sender sends the rest once >> it is replicated?), but I still think we should avoid spinning in the WAL >> sender waiting for WAL to be replicated. > > It seems to me it would be something similar to > SyncRepReleaseWaiters(). Or it could be possible to consolidate this > feature into the function, I'm not sure, though. Yes, perhaps the synchronous replication framework will need to alert WAL senders when the syncrep LSN advances so that the WAL is sent to the async standbys. I'm glossing over the details, but I think that should be the general direction. -- Nathan Bossart Amazon Web Services: https://aws.amazon.com
On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote: > > On Tue, Mar 01, 2022 at 04:34:31PM +0900, Kyotaro Horiguchi wrote: > > At Mon, 28 Feb 2022 22:05:28 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in > >> replicated LSN. TBH there are probably other things that need to be > >> considered (e.g., how do we ensure that the WAL sender sends the rest once > >> it is replicated?), but I still think we should avoid spinning in the WAL > >> sender waiting for WAL to be replicated. > > > > It seems to me it would be something similar to > > SyncRepReleaseWaiters(). Or it could be possible to consolidate this > > feature into the function, I'm not sure, though. > > Yes, perhaps the synchronous replication framework will need to alert WAL > senders when the syncrep LSN advances so that the WAL is sent to the async > standbys. I'm glossing over the details, but I think that should be the > general direction. It's doable. But we can't avoid async walsenders waiting for the flush LSN even if we take the SyncRepReleaseWaiters() approach, right? I'm not sure (at this moment) what the biggest advantage of this approach is, i.e. (1) backends waking up walsenders after the flush LSN is updated vs. (2) walsenders repeatedly checking for the new flush LSN. > >> My feedback is specifically about this behavior. I don't think we should > >> spin in XLogSend*() waiting for an LSN to be synchronously replicated. I > >> think we should just choose the SendRqstPtr based on what is currently > >> synchronously replicated. > > > > Do you mean something like the following? > > > > /* Main loop of walsender process that streams the WAL over Copy messages. */ > > static void > > WalSndLoop(WalSndSendDataCallback send_data) > > { > > /* > > * Loop until we reach the end of this timeline or the client requests to > > * stop streaming. > > */ > > for (;;) > > { > > if (am_async_walsender && there_are_sync_standbys) > > { > > XLogRecPtr SendRqstLSN; > > XLogRecPtr SyncFlushLSN; > > > > SendRqstLSN = GetFlushRecPtr(NULL); > > LWLockAcquire(SyncRepLock, LW_SHARED); > > SyncFlushLSN = walsndctl->lsn[SYNC_REP_WAIT_FLUSH]; > > LWLockRelease(SyncRepLock); > > > > if (SendRqstLSN > SyncFlushLSN) > > continue; > > } > > Not quite. Instead of "continue", I would set SendRqstLSN to SyncFlushLSN > so that the WAL sender only sends up to the current synchronously > replicated LSN. TBH there are probably other things that need to be > considered (e.g., how do we ensure that the WAL sender sends the rest once > it is replicated?), but I still think we should avoid spinning in the WAL > sender waiting for WAL to be replicated. I did some more analysis on the above point: we can let XLogSendPhysical know up to which LSN it can send WAL (SendRqstLSN). But XLogSendLogical reads the WAL using the XLogReadRecord mechanism with the read_local_xlog_page page_read callback, to which we can't really pass SendRqstLSN. Maybe we have to do something like below: XLogSendPhysical: /* Figure out how far we can safely send the WAL. */ if (am_async_walsender && there_are_sync_standbys) { LWLockAcquire(SyncRepLock, LW_SHARED); SendRqstPtr = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH]; LWLockRelease(SyncRepLock); } /* Existing code path to determine SendRqstPtr */ else if (sendTimeLineIsHistoric) { } else if (am_cascading_walsender) { } else { /* * Streaming the current timeline on a primary. */
} XLogSendLogical: if (am_async_walsender && there_are_sync_standbys) { XLogRecPtr SendRqstLSN; XLogRecPtr SyncFlushLSN; SendRqstLSN = GetFlushRecPtr(NULL); LWLockAcquire(SyncRepLock, LW_SHARED); SyncFlushLSN = WalSndCtl->lsn[SYNC_REP_WAIT_FLUSH]; LWLockRelease(SyncRepLock); if (SendRqstLSN > SyncFlushLSN) return; } On Tue, Mar 1, 2022 at 7:35 AM Hsu, John <hsuchen@amazon.com> wrote: > > I too observed this once or twice. It looks like the walsender isn't > detecting postmaster death in for (;;) with WalSndWait. Not sure if > this is expected or true with other wait-loops in walsender code. Any > more thoughts here? > > Unfortunately I haven't had a chance to dig into it more although iirc I hit it fairly often. I think I got what the issue is. Below does the trick: /* If the server is shut down, checkpointer sends us PROCSIG_WALSND_INIT_STOPPING after all regular backends have exited. */ if (got_STOPPING) proc_exit(0); I will take care of this in the next patch once the approach we take for this feature gets finalized. Regards, Bharath Rupireddy.
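Assuming the wait loop sketched upthread, the fix would sit inside it roughly like this (WalSndWait(), got_STOPPING, and the wait-event constant are existing walsender facilities; the loop itself is hypothetical):

    for (;;)
    {
        /*
         * If the server is shut down, checkpointer sends us
         * PROCSIG_WALSND_INIT_STOPPING after all regular backends have
         * exited; without this check the walsender would keep waiting
         * for the sync flush LSN forever during shutdown.
         */
        if (got_STOPPING)
            proc_exit(0);

        /* ... re-read the sync flush LSN and send WAL if possible ... */

        WalSndWait(WL_SOCKET_READABLE, sleeptime,
                   WAIT_EVENT_WAL_SENDER_MAIN);
    }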
On Tue, Mar 01, 2022 at 11:09:57PM +0530, Bharath Rupireddy wrote: > On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote: >> Yes, perhaps the synchronous replication framework will need to alert WAL >> senders when the syncrep LSN advances so that the WAL is sent to the async >> standbys. I'm glossing over the details, but I think that should be the >> general direction. > > It's doable. But we can't avoid async walsenders waiting for the flush > LSN even if we take the SyncRepReleaseWaiters() approach right? I'm > not sure (at this moment) what's the biggest advantage of this > approach i.e. (1) backends waking up walsenders after flush lsn is > updated vs (2) walsenders keep looking for the new flush lsn. I think there are a couple of advantages. For one, spinning is probably not the best from a resource perspective. There is no guarantee that the desired SendRqstPtr will ever be synchronously replicated, in which case the WAL sender would spin forever. Also, this approach might fit in better with the existing synchronous replication framework. When a WAL sender realizes that it can't send up to the current "flush" LSN because it's not synchronously replicated, it will request to be alerted when it is. In the meantime, it can send up to the latest syncrep LSN so that the async standby is as up-to-date as possible. -- Nathan Bossart Amazon Web Services: https://aws.amazon.com
On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > > On Tue, Mar 01, 2022 at 11:09:57PM +0530, Bharath Rupireddy wrote: > > On Tue, Mar 1, 2022 at 10:35 PM Nathan Bossart <nathandbossart@gmail.com> wrote: > >> Yes, perhaps the synchronous replication framework will need to alert WAL > >> senders when the syncrep LSN advances so that the WAL is sent to the async > >> standbys. I'm glossing over the details, but I think that should be the > >> general direction. > > > > It's doable. But we can't avoid async walsenders waiting for the flush > > LSN even if we take the SyncRepReleaseWaiters() approach, right? I'm > > not sure (at this moment) what the biggest advantage of this > > approach is, i.e. (1) backends waking up walsenders after the flush LSN is > > updated vs. (2) walsenders repeatedly checking for the new flush LSN. > > I think there are a couple of advantages. For one, spinning is probably > not the best from a resource perspective. Just to be on the same page - by spinning do you mean - the async walsender waiting for the sync flushLSN in a for-loop with WaitLatch()? > There is no guarantee that the > desired SendRqstPtr will ever be synchronously replicated, in which case > the WAL sender would spin forever. The async walsenders will not exactly wait for the SendRqstPtr LSN to become the flush LSN. Say SendRqstPtr is 100 and the current sync FlushLSN is 95; they will have to wait until FlushLSN catches up with SendRqstPtr, i.e. SendRqstPtr <= FlushLSN. I can't think of a scenario (right now) that doesn't move the sync FlushLSN at all. If there's such a scenario, shouldn't it be treated as a sync replication bug? > Also, this approach might fit in better > with the existing synchronous replication framework. When a WAL sender > realizes that it can't send up to the current "flush" LSN because it's not > synchronously replicated, it will request to be alerted when it is. I think you are referring to the way a backend calls SyncRepWaitForLSN and waits until any one of the walsenders sets syncRepState to SYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN is blocking, i.e. the backend spins/waits in a for (;;) loop until its syncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do any other work but waits. So, spinning isn't avoided completely. Unless I'm missing something, the existing sync rep queue (SyncRepQueue) mechanism doesn't avoid spinning in the requestors (backends) in SyncRepWaitForLSN or in the walsenders in SyncRepWakeQueue. > In the > meantime, it can send up to the latest syncrep LSN so that the async > standby is as up-to-date as possible. Just to be clear, there can exist the following scenarios. First, note that SendRqstPtr is the LSN up to which a walsender can send WAL. Scenario 1: async SendRqstPtr is 100, sync FlushLSN is 95 - async standbys will wait until the FlushLSN moves ahead; once SendRqstPtr <= FlushLSN, they send out the WAL. Scenario 2: async SendRqstPtr is 105, sync FlushLSN is 110 - async standbys will not wait; they just send out the WAL up to SendRqstPtr, i.e. LSN 105. Scenario 3 (same as scenario 2 but SendRqstPtr and FlushLSN are equal): async SendRqstPtr is 105, sync FlushLSN is 105 - async standbys will not wait; they just send out the WAL up to SendRqstPtr, i.e. LSN 105. This way, the async standbys are always as up-to-date as possible with the sync FlushLSN. Are you referring to any other scenarios? Regards, Bharath Rupireddy.
On Wed, Mar 02, 2022 at 09:47:09AM +0530, Bharath Rupireddy wrote: > On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote: >> I think there are a couple of advantages. For one, spinning is probably >> not the best from a resource perspective. > > Just to be on the same page - by spinning do you mean - the async > walsender waiting for the sync flushLSN in a for-loop with > WaitLatch()? Yes. >> Also, this approach might fit in better >> with the existing synchronous replication framework. When a WAL sender >> realizes that it can't send up to the current "flush" LSN because it's not >> synchronously replicated, it will request to be alerted when it is. > > I think you are referring to the way a backend calls SyncRepWaitForLSN > and waits until any one of the walsenders sets syncRepState to > SYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN > is blocking, i.e. the backend spins/waits in a for (;;) loop until its > syncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do > any other work but waits. So, spinning isn't avoided completely. > > Unless I'm missing something, the existing sync rep queue > (SyncRepQueue) mechanism doesn't avoid spinning in the requestors > (backends) in SyncRepWaitForLSN or in the walsenders in SyncRepWakeQueue. My point is that there are existing tools for alerting processes when an LSN is synchronously replicated and for waking up WAL senders. What I am proposing wouldn't involve spinning in XLogSendPhysical() waiting for synchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN in the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop waiting to be woken. Instead, SyncRepWakeQueue() would eventually wake up the WAL sender and trigger another iteration of WalSndLoop(). -- Nathan Bossart Amazon Web Services: https://aws.amazon.com
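A rough sketch of that registration, borrowing the fields and helpers SyncRepWaitForLSN() uses today (note that SyncRepQueueInsert() is currently static in syncrep.c and walsenders are not present in this queue today, so this is only an assumption about the wiring):

    /* Async walsender: register interest in SendRqstPtr, then carry on. */
    LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);
    MyProc->waitLSN = SendRqstPtr;
    MyProc->syncRepState = SYNC_REP_WAITING;
    SyncRepQueueInsert(SYNC_REP_WAIT_FLUSH);
    LWLockRelease(SyncRepLock);

    /*
     * No separate wait loop: once the sync standbys confirm the flush,
     * SyncRepWakeQueue() sets our latch, WalSndLoop() runs another
     * iteration, and the send logic can then use the advanced flush LSN.
     */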
On Sat, Mar 5, 2022 at 1:26 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > > On Wed, Mar 02, 2022 at 09:47:09AM +0530, Bharath Rupireddy wrote: > > On Wed, Mar 2, 2022 at 2:57 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > >> I think there are a couple of advantages. For one, spinning is probably > >> not the best from a resource perspective. > > > > Just to be on the same page - by spinning do you mean - the async > > walsender waiting for the sync flushLSN in a for-loop with > > WaitLatch()? > > Yes. > > >> Also, this approach might fit in better > >> with the existing synchronous replication framework. When a WAL sender > >> realizes that it can't send up to the current "flush" LSN because it's not > >> synchronously replicated, it will request to be alerted when it is. > > > > I think you are referring to the way a backend calls SyncRepWaitForLSN > > and waits until any one of the walsenders sets syncRepState to > > SYNC_REP_WAIT_COMPLETE in SyncRepWakeQueue. Firstly, SyncRepWaitForLSN > > is blocking, i.e. the backend spins/waits in a for (;;) loop until its > > syncRepState becomes SYNC_REP_WAIT_COMPLETE. The backend doesn't do > > any other work but waits. So, spinning isn't avoided completely. > > > > Unless I'm missing something, the existing sync rep queue > > (SyncRepQueue) mechanism doesn't avoid spinning in the requestors > > (backends) in SyncRepWaitForLSN or in the walsenders in SyncRepWakeQueue. > > My point is that there are existing tools for alerting processes when an > LSN is synchronously replicated and for waking up WAL senders. What I am > proposing wouldn't involve spinning in XLogSendPhysical() waiting for > synchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN > in the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop > waiting to be woken. Instead, SyncRepWakeQueue() would eventually wake up > the WAL sender and trigger another iteration of WalSndLoop(). I understand. Even if we use the SyncRepWaitForLSN approach, the async walsenders will have to do nothing in WalSndLoop() until the sync walsender wakes them up via SyncRepWakeQueue. For sure, the SyncRepWaitForLSN approach avoids extra looping and makes the code look better. One concern is the increased burden on SyncRepLock, which the SyncRepWaitForLSN approach will need to take exclusively (LWLockAcquire(SyncRepLock, LW_EXCLUSIVE)), now that the async walsenders will be added to the set of backends contending for SyncRepLock. The other approach that I proposed earlier would only require SyncRepLock in shared mode, as it just needs to read the flush LSN. I'm not sure if it's a bigger problem. Having said that, I agree that the SyncRepWaitForLSN approach probably makes things easier and avoids the new wait loops. Let me think more and work on this approach. Regards, Bharath Rupireddy.
Hi, On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote: > I understand. Even if we use the SyncRepWaitForLSN approach, the async > walsenders will have to do nothing in WalSndLoop() until the sync > walsender wakes them up via SyncRepWakeQueue. I still think we should flat out reject this approach. The proper way to implement this feature is to change the protocol so that WAL can be sent to replicas with an additional LSN informing them up to where WAL can be flushed. That way WAL is already sent when the sync replicas have acknowledged receipt and just an updated "flush/apply up to here" LSN has to be sent. - Andres
On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote: > > Hi, > > On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote: > > I understand. Even if we use the SyncRepWaitForLSN approach, the async > > walsenders will have to do nothing in WalSndLoop() until the sync > > walsender wakes them up via SyncRepWakeQueue. > > I still think we should flat out reject this approach. The proper way to > implement this feature is to change the protocol so that WAL can be sent to > replicas with an additional LSN informing them up to where WAL can be > flushed. That way WAL is already sent when the sync replicas have acknowledged > receipt and just an updated "flush/apply up to here" LSN has to be sent. I was having this thought in the back of my mind. Please help me understand these: 1) How will the async standbys ignore the WAL received but not-yet-flushed by them in case the sync standbys don't acknowledge flush LSN back to the primary for whatever reasons? 2) When we say the async standbys will receive the WAL, will they just keep the received WAL in shared memory but not apply it, or will they write but not apply the WAL and flush it to the pg_wal directory on disk, or will they write to some other temp WAL directory until they receive a go-ahead LSN from the primary? 3) Won't the network transfer cost be wasted in case the sync standbys don't acknowledge flush LSN back to the primary for whatever reasons? The proposed idea in this thread (async standbys waiting for the flush LSN from sync standbys before sending the WAL), although it makes async standbys slower in receiving the WAL, doesn't have the above problems and is simpler to implement IMO. Since this feature is going to be optional with a GUC, users can enable it based on their needs.
Hi,
On 2022-03-06 12:27:52 +0530, Bharath Rupireddy wrote: > The proposed idea in this thread (async standbys waiting for the flush LSN > from sync standbys before sending the WAL), although it makes async > standbys slower in receiving the WAL, doesn't have the above problems > and is simpler to implement IMO. Since this feature is going to be > optional with a GUC, users can enable it based on their needs.
It also pushes the complexity to the client side for consumers who stream
Hi, On 2022-03-06 12:27:52 +0530, Bharath Rupireddy wrote: > On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote: > > > > Hi, > > > > On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote: > > > I understand. Even if we use the SyncRepWaitForLSN approach, the async > > > walsenders will have to do nothing in WalSndLoop() until the sync > > > walsender wakes them up via SyncRepWakeQueue. > > > > I still think we should flat out reject this approach. The proper way to > > implement this feature is to change the protocol so that WAL can be sent to > > replicas with an additional LSN informing them up to where WAL can be > > flushed. That way WAL is already sent when the sync replicas have acknowledged > > receipt and just an updated "flush/apply up to here" LSN has to be sent. > > I was having this thought in the back of my mind. Please help me understand these: > 1) How will the async standbys ignore the WAL received but > not-yet-flushed by them in case the sync standbys don't acknowledge > flush LSN back to the primary for whatever reasons? What do you mean by "ignore"? When replaying? I think this'd require adding a new pg_control field saying up to which LSN WAL is "valid". If that field is set, replay would only replay up to that LSN unless some explicit operation is taken to replay further (e.g. for data recovery). > 2) When we say the async standbys will receive the WAL, will they just > keep the received WAL in shared memory but not apply it, or will they > write but not apply the WAL and flush it to the pg_wal > directory on disk, or will they write to some other temp WAL > directory until they receive a go-ahead LSN from the primary? I was thinking that for now it'd go to disk, but eventually would first go to wal_buffers and only to disk if wal_buffers needs to be flushed out (and only in that case the pg_control field would need to be set). > 3) Won't the network transfer cost be wasted in case the sync standbys > don't acknowledge flush LSN back to the primary for whatever reasons? That should be *extremely* rare, and in that case a bit of wasted traffic isn't going to matter. > The proposed idea in this thread (async standbys waiting for the flush LSN > from sync standbys before sending the WAL), although it makes async > standbys slower in receiving the WAL, doesn't have the above > problems and is simpler to implement IMO. Since this feature is going > to be optional with a GUC, users can enable it based on their needs. To me it's architecturally the completely wrong direction. We should move in the *other* direction, i.e. allow WAL to be sent to standbys before the primary has finished flushing it locally. Which requires similar infrastructure to what we're discussing here. Greetings, Andres Freund
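For illustration only, one way to picture the protocol change Andres describes is an extra field in the XLogData ('w') message of the streaming-replication protocol; the flushUpTo field below is hypothetical, while the remaining fields match the current layout:

    Byte1('w')   /* WAL data message */
    Int64        /* dataStart: start LSN of the WAL in this message */
    Int64        /* walEnd: current end of WAL on the sender */
    Int64        /* flushUpTo (NEW): the standby may flush/apply WAL only
                    up to this LSN until a later message advances it */
    Int64        /* sendTime */
    ByteN        /* WAL data */

The standby would then receive WAL past flushUpTo immediately (saving a later round trip), but hold it back from flush/replay until the limit advances.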
On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote: > To me it's architecturally the completely wrong direction. We should move in > the *other* direction, i.e. allow WAL to be sent to standbys before the > primary has finished flushing it locally. Which requires similar > infrastructure to what we're discussing here. I think this is a good point. After all, WALRead() has the following comment: * XXX probably this should be improved to suck data directly from the * WAL buffers when possible. Once you have all the infrastructure for that, holding back WAL replay on async standbys based on synchronous replication might be relatively easy. -- Nathan Bossart Amazon Web Services: https://aws.amazon.com
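A sketch of the direction that XXX comment points at; XLogReadFromBuffers() is a hypothetical helper here (nothing by that name existed at the time), the fallback is the existing WALRead(), and the surrounding variables are assumed to be in scope as in the walsender code:

    Size        nread;

    /* Serve the read from wal_buffers when possible, else from files. */
    nread = XLogReadFromBuffers(buf, startptr, count);   /* hypothetical */
    if (nread < count)
        WALRead(xlogreader, buf + nread, startptr + nread,
                count - nread, currTLI, &errinfo);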
At Sat, 12 Mar 2022 14:33:32 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in > On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote: > > To me it's architecturally the completely wrong direction. We should move in > > the *other* direction, i.e. allow WAL to be sent to standbys before the > > primary has finished flushing it locally. Which requires similar > > infrastructure to what we're discussing here. > > I think this is a good point. After all, WALRead() has the following > comment: > > * XXX probably this should be improved to suck data directly from the > * WAL buffers when possible. > > Once you have all the infrastructure for that, holding back WAL replay on > async standbys based on synchronous replication might be relatively easy. That is, (as I understand it) async standbys are required to allow overwriting existing unreplayed records after reconnection. But, putting aside how to remember that LSN, if that happens at a segment boundary, the async replica may run into a situation similar to the missing-contrecord case. But a standby cannot insert any original record of its own to get out of that situation. regards. -- Kyotaro Horiguchi NTT Open Source Software Center
At Mon, 14 Mar 2022 11:30:02 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in > At Sat, 12 Mar 2022 14:33:32 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in > > On Tue, Mar 08, 2022 at 06:01:23PM -0800, Andres Freund wrote: > > > To me it's architecturally the completely wrong direction. We should move in > > > the *other* direction, i.e. allow WAL to be sent to standbys before the > > > primary has finished flushing it locally. Which requires similar > > > infrastructure to what we're discussing here. > > > > I think this is a good point. After all, WALRead() has the following > > comment: > > > > * XXX probably this should be improved to suck data directly from the > > * WAL buffers when possible. > > > > Once you have all the infrastructure for that, holding back WAL replay on > > async standbys based on synchronous replication might be relatively easy. Just to be clear, and a bit off-topic: I think the optimization itself is quite promising and one we want to have. > That is, (as I understand it) async standbys are required to allow > overwriting existing unreplayed records after reconnection. But, > putting aside how to remember that LSN, if that happens at a segment > boundary, the async replica may run into a situation similar to > the missing-contrecord case. But a standby cannot insert any original > record of its own to get out of that situation. regards. -- Kyotaro Horiguchi NTT Open Source Software Center
Hi, On 2022-03-14 11:30:02 +0900, Kyotaro Horiguchi wrote: > That is, (as I understand it) async standbys are required to allow > overwriting existing unreplayed records after reconnection. But, > putting aside how to remember that LSN, if that happens at a segment > boundary, the async replica may run into a situation similar to > the missing-contrecord case. But a standby cannot insert any original > record of its own to get out of that situation. I do not see how that problem arises on standbys when they aren't allowed to read those records. It'll just wait for more data to arrive. Greetings, Andres Freund
On Wed, Mar 9, 2022 at 7:31 AM Andres Freund <andres@anarazel.de> wrote: > > Hi, > > On 2022-03-06 12:27:52 +0530, Bharath Rupireddy wrote: > > On Sun, Mar 6, 2022 at 1:57 AM Andres Freund <andres@anarazel.de> wrote: > > > > > > Hi, > > > > > > On 2022-03-05 14:14:54 +0530, Bharath Rupireddy wrote: > > > > I understand. Even if we use the SyncRepWaitForLSN approach, the async > > > > walsenders will have to do nothing in WalSndLoop() until the sync > > > > walsender wakes them up via SyncRepWakeQueue. > > > > > > I still think we should flat out reject this approach. The proper way to > > > implement this feature is to change the protocol so that WAL can be sent to > > > replicas with an additional LSN informing them up to where WAL can be > > > flushed. That way WAL is already sent when the sync replicas have acknowledged > > > receipt and just an updated "flush/apply up to here" LSN has to be sent. > > > > I was having this thought in the back of my mind. Please help me understand these: > > 1) How will the async standbys ignore the WAL received but > > not-yet-flushed by them in case the sync standbys don't acknowledge > > flush LSN back to the primary for whatever reasons? > > What do you mean by "ignore"? When replaying? Let me illustrate with an example: 1) Say, primary at LSN 100, sync standby at LSN 90 (about to receive/receiving the WAL from LSN 91 - 100 from the primary), async standby at LSN 100 - today this is possible if the async standby is closer to the primary than the sync standby for whatever reasons 2) With the approach that's originally proposed in this thread - async standbys can never get ahead of LSN 90 (the flush LSN reported back to the primary by all sync standbys) 3) With the approach that's suggested, i.e. "let async standbys receive WAL at their own pace, but they should only be allowed to apply/write/flush to the WAL file in the pg_wal directory/disk up to the sync standbys' latest flush LSN" - async standbys can receive the WAL from LSN 91 - 100 but they aren't allowed to apply/write/flush it. Where will the async standbys hold the WAL from LSN 91 - 100 until the latest flush LSN (100) is reported to them? If they "somehow" store the WAL from LSN 91 - 100 and don't apply/write/flush it, how will they ignore that WAL, say if the sync standbys don't report the latest flush LSN back to the primary (for whatever reasons)? In such cases, the primary has no idea of the sync standbys' latest flush LSN if the sync standbys can't come back up, reconnect, and resync with the primary. Should the async standbys always assume that the WAL from LSN 91 - 100 is invalid for them, as they haven't received the sync flush LSN from the primary? In such a case, aren't there "invalid holes" in the WAL files on the async standbys? > I think this'd require adding a new pg_control field saying up to which LSN > WAL is "valid". If that field is set, replay would only replay up to that LSN > unless some explicit operation is taken to replay further (e.g. for data > recovery). With the approach that's suggested, i.e. "let async standbys receive WAL at their own pace, but they should only be allowed to apply/write/flush to the WAL file in the pg_wal directory/disk up to the sync standbys' latest flush LSN" - there can be two parts to the WAL on async standbys - most of it "valid and makes sense for async standbys" and some of it "invalid and doesn't make sense for async standbys"?
Can't this require us to rework some parts like the redo/apply/recovery logic on async standbys, and tools like pg_basebackup, pg_rewind, pg_receivewal, pg_recvlogical, cascading replication etc. that depend on WAL records and would now need to know whether the WAL records are valid for them? I may be wrong here though. > > 2) When we say the async standbys will receive the WAL, will they just > > keep the received WAL in shared memory but not apply it, or will they > > write but not apply the WAL and flush it to the pg_wal > > directory on disk, or will they write to some other temp WAL > > directory until they receive a go-ahead LSN from the primary? > > I was thinking that for now it'd go to disk, but eventually would first go to > wal_buffers and only to disk if wal_buffers needs to be flushed out (and only > in that case the pg_control field would need to be set). IIUC, the WAL buffers (XLogCtl->pages) aren't used on standbys, as wal receivers bypass them and flush the data directly to disk. Hence, the WAL buffers that are allocated (I haven't checked the code though) but unused on standbys can be used to hold the WAL until the new flush LSN is reported from the primary. At any point of time, the WAL buffers will have the latest WAL that's waiting for a new flush LSN from the primary. However, this can be a problem for larger transactions that can eat up the entire WAL buffers while the flush LSN is far behind, in which case we need to flush the WAL to the latest WAL file in pg_wal/disk but let the other folks in the server know up to which LSN the WAL is valid. > > 3) Won't the network transfer cost be wasted in case the sync standbys > > don't acknowledge flush LSN back to the primary for whatever reasons? > > That should be *extremely* rare, and in that case a bit of wasted traffic > isn't going to matter. Agree. > > The proposed idea in this thread (async standbys waiting for the flush LSN > > from sync standbys before sending the WAL), although it makes async > > standbys slower in receiving the WAL, doesn't have the above > > problems and is simpler to implement IMO. Since this feature is going > > to be optional with a GUC, users can enable it based on their needs. > > To me it's architecturally the completely wrong direction. We should move in > the *other* direction, i.e. allow WAL to be sent to standbys before the > primary has finished flushing it locally. Which requires similar > infrastructure to what we're discussing here. Agree. * XXX probably this should be improved to suck data directly from the * WAL buffers when possible. As others pointed out, if the above is done, it's possible to achieve "allow WAL to be sent to standbys before the primary has finished flushing it locally". I would like to hear more thoughts and then summarize the design points a bit later. Regards, Bharath Rupireddy.
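To make the pg_control idea concrete, a minimal sketch under the assumption of a new field; neither the field nor the check exists today, and the usual recovery-side xlogreader is assumed:

    /* Hypothetical addition to ControlFileData. */
    XLogRecPtr  replayValidUpTo;    /* InvalidXLogRecPtr means no limit */

    /* In the recovery loop, before applying the next record: */
    if (!XLogRecPtrIsInvalid(ControlFile->replayValidUpTo) &&
        xlogreader->ReadRecPtr >= ControlFile->replayValidUpTo)
    {
        /*
         * This WAL was received ahead of the sync standbys' confirmed
         * flush LSN; hold off replay until the limit advances (or an
         * explicit data-recovery action overrides it).
         */
        break;
    }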
On Sat, Mar 5, 2022 at 1:26 AM Nathan Bossart <nathandbossart@gmail.com> wrote: > > My point is that there are existing tools for alerting processes when an > LSN is synchronously replicated and for waking up WAL senders. What I am > proposing wouldn't involve spinning in XLogSendPhysical() waiting for > synchronous replication. Like SyncRepWaitForLSN(), we'd register our LSN > in the queue (SyncRepQueueInsert()), but we wouldn't sit in a separate loop > waiting to be woken. Instead, SyncRepWakeQueue() would eventually wake up > the WAL sender and trigger another iteration of WalSndLoop(). While we continue to discuss the better design at [1], FWIW, I would like to share a simpler patch that makes walsenders serving async standbys wait until the sync standbys report their flush LSN. Obviously this is not an elegant way to solve the problem reported in this thread, but since I have had this patch ready for a long time, I wanted to share it here. Nathan, of course, this is not something you wanted. [1] https://www.postgresql.org/message-id/CALj2ACWCj60g6TzYMbEO07ZhnBGbdCveCrD413udqbRM0O59RA%40mail.gmail.com Regards, Bharath Rupireddy.