Thread: Any big slony and WAL shipping users?
Hi,
We are trying to use Slony and WAL shipping for a warm standby for replication with PostgreSQL. Currently our systems are in Oracle and we are checking the feasibility of migrating to Postgres. Replication is one major issue here. Though everything seems to be working fine in our test environment, we just want assurance that Slony and WAL shipping are used by other large production systems and running successfully.
What other large 24x7 production systems use Slony and WAL archiving of PostgreSQL successfully?
Thanks
josh
On Thu, Dec 27, 2007 at 10:49:10AM -0500, Josh Harrison wrote:
> Hi,
> We are trying to use Slony and WAL shipping for warm standby for replication

It's unusual to use both. Any reason you want to? Anyway. . .

> What other large 24x7 production systems use Slony and WAL archiving
> of PostgreSQL successfully?

Slony was originally written by Jan Wieck, and released by Afilias (disclosure: currently my employer). Afilias wrote it because we needed better replication, and there were no community-offered systems. We run the registries for several Internet top-level domains, including .info and .org. We have fairly stringent uptime guarantees, and reasonably high transaction volumes. The databases are not immense, however. Nevertheless, all the DNS changes for .org (for instance) today are dependent on Slony operating correctly.

I wouldn't say the system is perfect, but I think I can safely say we've been quite happy with its flexibility. User-space tools are a little, uh, geeky still (with the possible exception of the GUI support -- our system deployment makes that a little hard for us to use).

There is someone who is using Slony to operate some rather large databases; he can post here if he wants to share his experience with you.

A
On Dec 27, 2007 12:37 PM, Andrew Sullivan <ajs@crankycanuck.ca> wrote:
We wanted to have 1 master, 1 slave that can be queried, and 1 warm standby server that can be brought up in case of a crash. So I thought it might be better to have WAL shipping for the warm standby, since that's working pretty well, and Slony for master-slave replication.
What are your comments on this setup? Is there a better arrangement?
That would be very useful, since our people are debating the reliability of Slony for large databases!
Thanks
Josh
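A minimal sketch of the WAL-shipping half of the setup described above, assuming made-up hostnames and paths, and pg_standby (or an equivalent wait-for-segment script) on the standby side:

```shell
# Primary, postgresql.conf: ship each completed WAL segment to the
# standby host ("standby" and the archive directory are hypothetical).
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

# Standby, recovery.conf: replay segments as they arrive; pg_standby
# (or a similar script) blocks until each requested file appears.
restore_command = 'pg_standby /var/lib/pgsql/wal_archive %f %p'
```

The standby stays in continuous recovery until the restore script is told to stop, at which point it can be brought up as the new primary.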
Note: I've removed -general, since this is really just a Slony discussion.

On Thu, Dec 27, 2007 at 12:49:55PM -0500, Josh Harrison wrote:
> We wanted to have 1 master and 1 slave that can be queried and 1 warm
> standby server that can be brought up in case of crash. So I thought it
> might be better to have WAL shipping for warm standby since that's working
> pretty well and Slony for master-slave replication.
> Let me know your comments on this setup? What is better for this setup?

The disadvantage of that approach is that you have two different systems you have to maintain. You could maintain more than one Slony replica, which would have the advantage of flexibility. For instance, you could have two replicas, and in the event your origin failed, you just move the origin to one of the replicas. If one of your replicas failed, however, you could use the other replica for whatever thing the failed replica was doing (so your "query-only" system could also be your standby while you repaired the other standby).

In general, I think the best arrangement is the least complicated one. Two different replication strategies in the same mix seems to me to be the sort of complication that will make emergency recovery harder. OTOH, two different strategies presumably protects you from bugs in the other code. (For instance, a DNS company runs completely different name server code on completely different hardware and OS platforms in order to make sure not to be vulnerable to day-0 exploits. That kind of thing.)

> That will be very useful since our people are debating the reliability of
> Slony for large databases!

The size of the database is not the determining factor in Slony reliability. What is the change rate on your systems? That's the big factor in any replication system, really. Afilias's systems are quite active -- there's a _lot_ of churn on top-level domains these days.

A
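For concreteness, "moving the origin" in Slony-I is done through slonik; a sketch with a hypothetical cluster name, connection strings, and set/node ids might look like:

```shell
# Controlled switchover of replication set 1 from node 1 to node 2.
# Cluster name, conninfo strings, and set/node ids are made-up examples.
slonik <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=app host=db1 user=slony';
node 2 admin conninfo = 'dbname=app host=db2 user=slony';

# Lock the set on the old origin, then hand it to the new one.
lock set (id = 1, origin = 1);
move set (id = 1, old origin = 1, new origin = 2);
EOF
```

MOVE SET is for a planned switchover while both nodes are up; if the origin has actually died, Slony's FAILOVER command is the relevant (and potentially lossy) operation instead.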
On Thu, Dec 27, 2007 at 01:08:50PM -0500, Andrew Sullivan wrote:
> Note: I've removed -general, since this is really just a Slony discussion.

Err, except I didn't. Apologies for the noise, all.

A
On Thu, 2007-12-27 at 12:49 -0500, Josh Harrison wrote:
> On Dec 27, 2007 12:37 PM, Andrew Sullivan <ajs@crankycanuck.ca> wrote:
> > On Thu, Dec 27, 2007 at 10:49:10AM -0500, Josh Harrison wrote:
> > > Hi,
> > > We are trying to use slony and WAL shipping for warm standby
> > > for replication
> >
> > It's unusual to use both. Any reason you want to?
>
> We wanted to have 1 master and 1 slave that can be queried and 1 warm
> standby server that can be brought up in case of crash. So I thought
> it might be better to have WAL shipping for warm standby since that's
> working pretty well and Slony for master-slave replication.
> Let me know your comments on this setup? What is better for this
> setup?

That's a setup I've recommended in the past. It's like using RAC and Data Guard together, which is also common.

--
Simon Riggs
2ndQuadrant  http://www.2ndQuadrant.com
* Andrew Sullivan:

> (For instance, a DNS company runs completely different name server code on
> completely different hardware and OS platforms in order to make sure not to
> be vulnerable to day-0 exploits. That kind of thing.)

This only helps against crasher bugs. For code injection, it's devastating if the attacker can compromise one node, and by diversifying, he or she can choose which code base to attack.

I guess that in the database case, it's mostly the same, with crash bugs on the one side (where diversification helps), and creeping data corruption bugs on the other (where it might increase risk). If you use multiple systems with a comparator, things are different, of course.

--
Florian Weimer <fweimer@bfk.de>
BFK edv-consulting GmbH  http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe  fax: +49-721-96201-99
Hi Josh,

This is exactly the same setup I'm currently testing. For those asking why use both WAL shipping and Slony, it's simple; this means we have no single point of failure. If Slony stops replicating because we mess up a replication set or our shipping method (NFS) falls on its ass, at least we still have some replication going.

We have to use Slony; like you, we need a replica to take the load off of our main system, mainly for our reporting processes.

Glyn

--- Josh Harrison <joshques@gmail.com> wrote:
> Hi,
> We are trying to use Slony and WAL shipping for warm standby for
> replication with PostgreSQL. Currently our systems are in Oracle and
> we are checking the feasibility of migrating to Postgres. Replication
> is one major issue here. Though everything seems to be working fine in
> our test environment, we just want assurance that Slony and WAL
> shipping are used by other large production systems and running
> successfully.
> What other large 24x7 production systems use Slony and WAL archiving
> of PostgreSQL successfully?
>
> Thanks
> josh
On Fri, Dec 28, 2007 at 11:21:53AM +0100, Florian Weimer wrote:
> This only helps against crasher bugs. For code injection, it's
> devastating if the attacker can compromise one node, and by
> diversifying, he or she can choose which code base to attack.

Well, it also helps in your robustness plan: if you find out about an exploit before you've been exploited, you can turn off the exploitable systems and still not lose service. But otherwise, yes, what you say is true. Real 100% uptime is hard, no matter how you go at it.

A
Hi,
Thanks for all your responses.
I also thought about having two setups for backup and replication, so that even when Slony fails I will always have the standby server (WAL shipping) to help me out.
I have another question regarding this: I also want to write these to tape. Right now we have a cron job doing level 0,1,2,... backups of the other servers to tape regularly. What is a good way to include the Postgres server backup to tape?
Thanks
josh
On Dec 28, 2007 9:29 AM, Glyn Astill <glynastill@yahoo.co.uk> wrote:
Hi Josh,
This is exactly the same setup I'm currently testing. For those
asking why use both WAL shipping and slony, it's simple; this means
we have no single point of failure. If slony stops replicating
because we mess up a replication set or our shipping method (NFS)
falls on its ass, at least we still have some replication going.
We have to use slony, like you we need a replica to take the load off
of our main system, this is mainly for our reporting processes.
Glyn
On Fri, Dec 28, 2007 at 12:06:42PM -0500, Josh Harrison wrote:
> I also thought about having two setups for backup and replication, so that
> even when Slony fails I will always have the standby server (WAL shipping)
> to help me out.

Ok, but do realise that what this form of redundancy provides you with is two independent paths for different kinds of failure. It may also mean you have more complicated troubleshooting procedures. I'm not advising not to do it; I'm rather advising you to think carefully about what problems you think you're solving, and what potential problems you are adding by taking this approach.

> I have another question regarding this: I also want to write these to
> tape. Right now we have a cron job doing level 0,1,2,... backups of the
> other servers to tape regularly. What is a good way to include the
> Postgres server backup to tape?

If you're going to use WAL shipping anyway, then I'd do fairly regular full backups plus WAL archiving. This is outlined completely in the manual in section 23.3.

A
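The base-backup-plus-WAL-archiving approach from section 23.3 can be pointed at a tape device directly from cron. A sketch, assuming an 8.x-era server with pg_start_backup/pg_stop_backup and made-up paths and a made-up tape device:

```shell
#!/bin/sh
# Nightly base backup of a PostgreSQL cluster straight to tape.
# Paths, the tape device, and the backup label are made-up examples.
set -e

PGDATA=/var/lib/pgsql/data
TAPE=/dev/nst0

# Tell the server a base backup is starting (forces a checkpoint).
psql -U postgres -c "SELECT pg_start_backup('nightly');"

# Archive the data directory to tape; pg_xlog can be excluded because
# the WAL needed for recovery is handled by archive_command.
tar -cf "$TAPE" --exclude="$PGDATA/pg_xlog" "$PGDATA"

# Mark the backup finished so the stop-backup WAL record is archived.
psql -U postgres -c "SELECT pg_stop_backup();"
```

The archived WAL segments between two such base backups are what make point-in-time recovery possible, so they should go to tape as well.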