Re: Synchronous Log Shipping Replication - Mailing list pgsql-hackers

From ITAGAKI Takahiro
Subject Re: Synchronous Log Shipping Replication
Date
Msg-id 20080908184432.9845.52131E4D@oss.ntt.co.jp
In response to Re: Synchronous Log Shipping Replication  (Bruce Momjian <bruce@momjian.us>)
Responses Re: Synchronous Log Shipping Replication  (Markus Wanner <markus@bluegap.ch>)
Re: Synchronous Log Shipping Replication  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
Bruce Momjian <bruce@momjian.us> wrote:

> > > b) Use new background process as WALSender
> > > 
> > >    This idea needs background-process hook which enables users
> > >    to define new background processes

> I think starting/stopping a process for each WAL send is too much
> overhead.

Yes, of course it is slow. But I think it is the only way to share one socket
among all backends. Postgres does not have a multi-threaded architecture,
so otherwise each backend would need its own dedicated connection to send WAL buffers.
300 backends would require 300 connections to each slave... that is not good at all.
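
To illustrate the shape of what I mean, here is a minimal standalone sketch
(not postgres code; all names are hypothetical, error checks are omitted, and
a socketpair stands in for the connection to the slave). One dedicated sender
process owns the single socket, and the forked "backends" only hand it data
through a shared-memory ring:

/*
 * Minimal sketch (not postgres code, names are hypothetical, error checks
 * omitted): one dedicated "WAL sender" process owns the single socket to
 * the slave, and forked "backends" hand it data through a shared-memory
 * ring protected by POSIX semaphores.
 */
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_LEN 64

struct shared_queue
{
    sem_t items;                /* counts queued messages */
    sem_t lock;                 /* protects head/tail */
    int   head, tail;
    char  buf[8][MSG_LEN];      /* tiny fixed-size ring */
};

int main(void)
{
    struct shared_queue *q = mmap(NULL, sizeof(*q), PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&q->items, 1, 0);
    sem_init(&q->lock, 1, 1);
    q->head = q->tail = 0;

    int sv[2];                  /* stands in for the connection to the slave */
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

    if (fork() == 0)            /* the sender: the only writer on the socket */
    {
        for (int n = 0; n < 3; n++)
        {
            char msg[MSG_LEN];

            sem_wait(&q->items);
            sem_wait(&q->lock);
            memcpy(msg, q->buf[q->tail], MSG_LEN);
            q->tail = (q->tail + 1) % 8;
            sem_post(&q->lock);
            write(sv[0], msg, MSG_LEN);
        }
        _exit(0);
    }

    for (int i = 0; i < 3; i++) /* three "backends" enqueue one record each */
        if (fork() == 0)
        {
            sem_wait(&q->lock);
            snprintf(q->buf[q->head], MSG_LEN, "WAL record from backend %d", i);
            q->head = (q->head + 1) % 8;
            sem_post(&q->lock);
            sem_post(&q->items);
            _exit(0);
        }

    char out[MSG_LEN];
    for (int n = 0; n < 3; n++) /* the "slave" end just prints what arrives */
    {
        read(sv[1], out, MSG_LEN);
        printf("slave received: %s\n", out);
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}

(Compile with cc -pthread. The point is only that, in a multi-process server,
exactly one process ever writes to the outbound socket; everything else is
plumbing.)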

> It sounds like Fujii-san is basically saying they can only get the hooks
> done for 8.4, not the actual solution.

No! He has an actual solution in his prototype ;-)
It is very similar to b), and the overhead was not so bad.
It is not yet clean enough to be part of postgres, though.

Are there any better ideas for sharing one socket connection among
backends (and the bgwriter)? The connections could be established after
fork() from the postmaster, and there could be two or more of them.
This is one of the most complicated parts of synchronous log shipping;
the process-switching approach in b) is just one idea for it.
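
For reference, the OS does offer descriptor passing (SCM_RIGHTS over a
UNIX-domain socket), so an already-open connection could be handed to another
process after fork(). A minimal sketch follows (again not postgres code, all
names hypothetical); note that it only shares the descriptor -- it does
nothing to keep concurrent senders from interleaving their writes, which is
the hard part:

/*
 * Minimal sketch (not postgres code, names are hypothetical): the
 * "postmaster" hands an already-open connection to a "backend" with
 * SCM_RIGHTS over a UNIX-domain socket, so both processes hold the same
 * descriptor.  A socketpair stands in for the connection to the slave.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Send one open file descriptor over a UNIX-domain socket. */
static void
send_fd(int chan, int fd)
{
    char dummy = 'x';
    struct iovec iov = { &dummy, 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    struct cmsghdr *c = CMSG_FIRSTHDR(&msg);
    c->cmsg_level = SOL_SOCKET;
    c->cmsg_type = SCM_RIGHTS;
    c->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(c), &fd, sizeof(int));
    sendmsg(chan, &msg, 0);
}

/* Receive the descriptor sent by send_fd(). */
static int
recv_fd(int chan)
{
    char dummy;
    struct iovec iov = { &dummy, 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = { 0 };
    int fd;

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);
    recvmsg(chan, &msg, 0);

    memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
    return fd;
}

int main(void)
{
    int chan[2], conn[2];

    socketpair(AF_UNIX, SOCK_DGRAM, 0, chan);   /* postmaster <-> backend */
    socketpair(AF_UNIX, SOCK_STREAM, 0, conn);  /* stands in for the slave link */

    if (fork() == 0)            /* "backend": receives the shared connection */
    {
        int fd = recv_fd(chan[1]);
        write(fd, "hello from backend", 18);
        _exit(0);
    }

    send_fd(chan[0], conn[0]);  /* "postmaster" hands the connection over */

    char buf[64] = { 0 };
    read(conn[1], buf, sizeof(buf) - 1);        /* what the "slave" would see */
    printf("slave side received: %s\n", buf);
    wait(NULL);
    return 0;
}

So even with descriptor passing, some single writer or queueing scheme like
b) still seems necessary.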

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center



