[SPAM] Re: Best way to replicate to large number of nodes - Mailing list pgsql-general

From Ben Chobot
Subject [SPAM] Re: Best way to replicate to large number of nodes
Date
Msg-id 8B36C1FC-1BCE-4A7A-9FAC-C64EC6B1416A@silentmedia.com
Whole thread Raw
In response to Best way to replicate to large number of nodes  (Brian Peschel <brianp@occinc.com>)
Responses Re: [SPAM] Re: Best way to replicate to large number of nodes  (Brian Peschel <brianp@occinc.com>)
List pgsql-general
On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote:

> I have a replication problem I am hoping someone has come across before and can provide a few ideas.
>
> I am looking at a configuration of one 'writable' node and anywhere from 10 to 300 'read-only' nodes.  Almost all of these nodes will be across a WAN from the writable node (some over slow VPN links too).  I am looking for a way to replicate as quickly as possible from the writable node to all the read-only nodes.  I can pretty much guarantee the read-only nodes will never become master nodes.  Also, the updates to the writable node are bunched and at known times (i.e. only updated when I want it updated, not constant updates), but when changes occur, there are a lot of them at once.

Two things you didn't address are the acceptable latency of keeping the read-only nodes in sync with the master - can they be different for a day? A minute? Do you need things to stay synchronous? Also, how big is your dataset? A simple pg_dump and some hot scp action after your batched updates might be able to solve your problem.
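The dump-and-copy approach could be sketched roughly as below. This is a dry-run illustration only, with hypothetical host names, database name, and paths; it echoes the commands rather than running them, and assumes the standard pg_dump/pg_restore/scp tools on each node.

```shell
#!/bin/sh
# Hypothetical batch replication: dump the writable node once after a batch
# of updates, then push the dump to every read-only node over the WAN.
# All names below (mydb, node*.example.com, /tmp paths) are placeholders.

DUMPFILE=/tmp/mydb.dump
NODES="node1.example.com node2.example.com node3.example.com"

# -Fc produces PostgreSQL's compressed custom-format archive, which keeps
# the transfer small over slow VPN links and lets pg_restore run in parallel.
echo "pg_dump -Fc -f $DUMPFILE mydb"

for node in $NODES; do
    # Copy the dump, then restore it remotely; --clean drops and recreates
    # objects so the read-only node matches the master after each batch.
    echo "scp $DUMPFILE $node:/tmp/"
    echo "ssh $node 'pg_restore --clean -d mydb /tmp/mydb.dump'"
done
```

Since the updates are bunched at known times, a script like this can simply run after each batch; the trade-off is that every node re-transfers the full dump, so it only scales if the dataset stays modest.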

pgsql-general by date:

Previous
From: Jaime Casanova
Date:
Subject: Re: How to read the execution Plan
Next
From: Scott Marlowe
Date:
Subject: Re: How to read the execution Plan