Re: Core team statement on replication in PostgreSQL - Mailing list pgsql-hackers

From Andreas 'ads' Scherbaum
Subject Re: Core team statement on replication in PostgreSQL
Date
Msg-id 20080602224047.6dc8a60b@iridium.wars-nicht.de
In response to Re: Core team statement on replication in PostgreSQL  (Chris Browne <cbbrowne@acm.org>)
Responses Re: Core team statement on replication in PostgreSQL
List pgsql-hackers
On Mon, 02 Jun 2008 11:52:05 -0400 Chris Browne wrote:

> adsmail@wars-nicht.de ("Andreas 'ads' Scherbaum") writes:
> > On Thu, 29 May 2008 23:02:56 -0400 Andrew Dunstan wrote:
> >
> >> Well, yes, but you do know about archive_timeout, right? No need to wait 
> >> 2 hours.
> >
> > Then you ship 16 MB binary stuff every 30 second or every minute but
> > you only have some kbyte real data in the logfile. This must be taken
> > into account, especially if you ship the logfile over the internet
> > (means: no high-speed connection, maybe even pay-per-traffic) to the
> > slave.
> 
> If you have that kind of scenario, then you have painted yourself into
> a corner, and there isn't anything that can be done to extract you
> from it.

You are misunderstanding something. It's perfectly possible that you
have a low-traffic database which only changes every now and then, but
you still have to copy a full 16 MB logfile every 30 seconds or every
minute just to keep the slave up-to-date.
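
To make the scenario concrete, the log-shipping setup under discussion
looks roughly like this (paths and values are only examples):

    # postgresql.conf on the master
    archive_mode    = on
    archive_command = 'scp %p standby:/var/lib/postgresql/archive/%f'
    archive_timeout = 60    # force a segment switch every 60 seconds

With archive_timeout set, a segment switch is forced even if only a few
kilobytes of WAL were written since the last one, so archive_command
still has to ship the full 16 MB file every time.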


> Consider: If you have so much update traffic that it is too much to
> replicate via WAL-copying, why should we expect that other mechanisms
> *wouldn't* also overflow the connection?

For a few MB of real data you copy several GB of logfiles per day -
that's a lot of overhead, isn't it?
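
A quick back-of-the-envelope calculation (assuming archive_timeout = 60
on a mostly idle server), sketched in Python:

    # one forced 16 MB segment per minute, around the clock
    segments_per_day = 24 * 60
    total_gb = 16 * segments_per_day / 1024.0
    print("%.1f GB shipped per day" % total_gb)   # prints 22.5

So you ship more than 20 GB per day even if the database itself only
changed by a few megabytes.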


> If you haven't got enough network bandwidth to use this feature, then
> nobody is requiring that you use it.  It seems like a perfectly
> reasonable prerequisite to say "this requires that you have enough
> bandwidth."

If you have a high-traffic database, then of course you need a
different connection than if you only have a low-traffic or mostly
read-only database. But that's not the point. Copying an almost unused
16 MB WAL logfile is just overhead - especially since the logfile
doesn't compress very well because of all the leftover data from its
earlier use (segments are recycled, so the unused tail still contains
old WAL data).
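
To illustrate the compression point: a recycled segment keeps old WAL
data in its unused tail, so even a nearly empty segment doesn't shrink
much when compressed. A minimal sketch of a compressing archive step
(Python; the script name and archive path are only examples, you would
call it from archive_command with %p):

    import gzip, os, shutil, sys

    def compress_segment(path):
        """gzip one WAL segment and report how much it actually shrinks."""
        gz_path = path + ".gz"
        with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        orig, comp = os.path.getsize(path), os.path.getsize(gz_path)
        print("%s: %d -> %d bytes" % (path, orig, comp))

    if __name__ == "__main__":
        compress_segment(sys.argv[1])    # e.g. called as: compress_wal.py %p

Unless the unused tail is zeroed first, the compressed file stays far
larger than the few KB of real data it contains.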


Kind regards

--
Andreas 'ads' Scherbaum
German PostgreSQL User Group

