On Sat, Mar 5, 2011 at 2:05 PM, Robert Haas
<robertmhaas@gmail.com> wrote:
> On Sat, Mar 5, 2011 at 7:49 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>> If the order is arbitrary, why does it matter if it changes?
>>
>> The user has the power to specify a sequence, yet they have not done so.
>> They are told the results are indeterminate, which is accurate. I can
>> add the words "and may change as new standbys connect" if that helps.
>
> I just don't think that's very useful behavior. Suppose I have a
> master and two standbys. Both are local (or both are remote with
> equally good connectivity). When one of the standbys goes down, there
> will be a hiccup (i.e. transactions will block trying to commit) until
> that guy falls off and the other one takes over. Now, when he comes
> back up again, I don't want the synchronous standby to change again;
> that seems like a recipe for another hiccup. I think "who the current
> synchronous standby is" should act as a tiebreak.
+1
Longer explanation:

The first hiccup may well be noticed by users, because it takes tens of seconds before the sync standby switches. The second hiccup is hardly noticeable. Even so, limiting the number of sync standby switches to the absolute minimum is also good if, for example, cluster middleware were notified of the sync standby change (supposing there were a hook for it). If that notification were sent asynchronously, it might introduce a race condition or even be completely unreliable; if instead the master waited for confirmation of the change message, it would introduce a longer lag. At that point sync standby changes become more expensive than they are currently.
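The tiebreak Robert describes could be sketched roughly as below. This is a minimal illustration, not PostgreSQL's actual walsender/syncrep code; the `Standby` struct and `choose_sync_standby` function are hypothetical names invented for the example:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical standby descriptor (not PostgreSQL's real WalSnd struct). */
typedef struct
{
    int  priority;   /* lower nonzero value = more preferred; 0 = async only */
    bool is_sync;    /* is this the current synchronous standby? */
    bool connected;  /* is the standby currently connected? */
} Standby;

/*
 * Pick the synchronous standby: the connected standby with the lowest
 * nonzero priority wins, and on a priority tie the standby that is
 * already synchronous is kept.  A reconnecting peer with equal priority
 * therefore does not trigger another switch (the "tiebreak").
 * Returns the chosen index, or -1 if no candidate exists.
 */
int
choose_sync_standby(const Standby *s, int n)
{
    int best = -1;

    for (int i = 0; i < n; i++)
    {
        if (!s[i].connected || s[i].priority == 0)
            continue;
        if (best < 0 ||
            s[i].priority < s[best].priority ||
            (s[i].priority == s[best].priority &&
             s[i].is_sync && !s[best].is_sync))
            best = i;
    }
    return best;
}
```

With two equal-priority standbys where the second is currently synchronous, the function keeps the second one; only if the current sync standby disconnects does selection fall through to the other.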
regards,
Yeb Havinga