Re: Quorum commit for multiple synchronous replication. - Mailing list pgsql-hackers

From Masahiko Sawada
Subject Re: Quorum commit for multiple synchronous replication.
Date
Msg-id CAD21AoCdxaPqRj_qegqm-nb_ZZh=B+EVKqzkkbHg1CcD_44yng@mail.gmail.com
In response to Re: Quorum commit for multiple synchronous replication.  (Michael Paquier <michael.paquier@gmail.com>)
Responses Re: [HACKERS] Quorum commit for multiple synchronous replication.  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Thu, Dec 8, 2016 at 4:39 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
> On Thu, Dec 8, 2016 at 9:07 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> You could do that, but first I would code up the simplest, cleanest
>> algorithm you can think of and see if it even shows up in a 'perf'
>> profile.  Microbenchmarking is probably overkill here unless a problem
>> is visible on macrobenchmarks.
>
> This is what I would go for! The current code is doing a simple thing:
> select the Nth element using qsort() after scanning each WAL sender's
> values. And I think that Sawada-san got it right. Even running on my
> laptop a pgbench run with 10 sync standbys using a data set that fits
> into memory, SyncRepGetOldestSyncRecPtr gets at most 0.04% of overhead
> using perf top on a non-assert, non-debug build. Hash tables and
> allocations get a far larger share. Using the patch,
> SyncRepGetSyncRecPtr is at the same level with a quorum set of 10
> nodes. Let's kick the ball for now. An extra patch could make things
> better later on if that's worth it.

Yeah, since neither K nor N is likely to be large, these algorithms take
almost the same time, and the current patch keeps things simple. If we
ever need more than 100 or 1,000 replication nodes, that optimization
could become necessary.
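
To illustrate the idea for the archives, here is a minimal standalone
sketch of the scan-sort-pick-the-Nth approach described above. The
function name quorum_oldest_lsn() and the plain uint64 typedef are
illustrative assumptions only, not the patch's actual code.

    #include <stdint.h>
    #include <stdlib.h>

    typedef uint64_t XLogRecPtr;

    /* qsort() comparator: order LSNs descending (newest first) */
    static int
    lsn_desc_cmp(const void *a, const void *b)
    {
        XLogRecPtr la = *(const XLogRecPtr *) a;
        XLogRecPtr lb = *(const XLogRecPtr *) b;

        if (la > lb)
            return -1;
        if (la < lb)
            return 1;
        return 0;
    }

    /*
     * Return the newest LSN that at least "quorum" of the "nstandbys"
     * reported positions have reached, or 0 if the quorum cannot be met.
     */
    static XLogRecPtr
    quorum_oldest_lsn(XLogRecPtr *lsns, int nstandbys, int quorum)
    {
        if (quorum <= 0 || quorum > nstandbys)
            return 0;

        qsort(lsns, nstandbys, sizeof(XLogRecPtr), lsn_desc_cmp);

        /* after the descending sort, element (quorum - 1) is the Nth latest */
        return lsns[quorum - 1];
    }

Since nstandbys stays small in practice, the O(N log N) qsort() is not
measurable, as the perf numbers above show.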
Attached is the latest v9 patch.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
