Thread: Slony v. DBMirror

Slony v. DBMirror

From: Peter Wilson
Vlad's thread on Slony against PGcluster made me go back to take another
look at Slony. I'd tried to get it going back in February when I needed
to build some replicated databases. Slony was my first choice because it
seemed to be the current 'hot topic'.

I couldn't get it to work - and having tried another couple of solutions
I settled on DBMirror which comes with Postgres in the 'contrib' directory.

Looking at Slony now, can someone tell me what the benefits of Slony are
over DBmirror? As far as I can see:
+ both are async Master->multiple slaves
+ both (I think) can do cascaded replication
+ neither replicate large objects
+ both require all tables to have primary keys
+ neither replicate schema changes
+ neither do automatic switch-over

All slony seems to offer is a different configuration system and the
ability to automatically propagate configuration changes. It seems this
could be added to DBmirror pretty easily so why a whole new project?

Pete
--
http://www.whitebeam.org
http://www.yellowhawk.co.uk
-------

Re: Slony v. DBMirror

From: Jeff -
On May 5, 2005, at 10:35 AM, Peter Wilson wrote:


> I couldn't get it to work - and having tried another couple of
> solutions I settled on DBMirror which comes with Postgres in the
> 'contrib' directory.
>
>

I've had no issues setting up slony.  I've even had it running in a 6
node cluster without trouble.
Did you try asking any folks on the mailing list about your issue?
All you've said here is that it didn't like your pk.


> + neither do automatic switch-over
>
>

Slony has the ability to promote a slave to being a master and then
other slaves will switch over.  This functionality is not automatic
because a decision like that should be made by a human.  Who decides
if a machine is down?  It can become a very tricky problem.


> All slony seems to offer is a different configuration system and
> the ability to automatically propagate configuration changes. It
> seems this could be added to DBmirror pretty easily so why a whole
> new project?
>
>

One of the biggest things for Slony is that you can install slony,
set things up and it will bring the slave(s) "up to speed".  You
don't need to do an initial data dump (I think you still need to load
the schema on the slaves, but not the data).  That is a BIG win for
us folks who can't take a machine down while pg_dump runs (and while
it is restored on the slave)


--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/






Re: Slony v. DBMirror

From: Grant McLean
On Thu, 2005-05-05 at 14:16 -0400, Jeff - wrote:
> One of the biggest things for Slony is that you can install slony,
> set things up and it will bring the slave(s) "up to speed".  You
> don't need to do an initial data dump (I think you still need to load
> the schema on the slaves, but not the data).  That is a BIG win for
> us folks who can't take a machine down while pg_dump runs (and while
> it is restored on the slave)

Why would you need to take anything down to run pg_dump?  And surely
bringing a slave up to speed using Slony would be much slower than
dump/restore?


Re: Slony v. DBMirror

From: Peter Wilson
Grant McLean wrote:
> On Thu, 2005-05-05 at 14:16 -0400, Jeff - wrote:
>
>>One of the biggest things for Slony is that you can install slony,
>>set things up and it will bring the slave(s) "up to speed".  You
>>don't need to do an initial data dump (I think you still need to load
>>the schema on the slaves, but not the data).  That is a BIG win for
>>us folks who can't take a machine down while pg_dump runs (and while
>>it is restored on the slave)
>
>
> Why would you need to take anything down to run pg_dump?  And surely
> bringing a slave up to speed using Slony would be much slower than
> dump/restore?
>
You don't need to take Postgres down to use pg_dump - it works just fine.

The problem with replication (with DBmirror at least) is that you have
to create a backup in a very specific order to make sure your new backup
ends up in sync and transactions are neither replicated more than once
nor missed altogether:

1. Stop client access to the database (so you don't create any more
transactions to replicate)

2. Stop the replication script when the dbmirror tables are empty

3. pg_dump the master

4. pg_restore the slave

5. Restart client apps and replication (doesn't matter which order)

If you don't do this then there is a chance of missing or, more likely,
duplicating transactions, which can obviously cause problems.
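In SQL, the check in step 2 can be sketched like this - assuming the
queue table created by the contrib/dbmirror setup scripts is named
"Pending" (check your installation for the actual name):

  -- Run on the master after step 1 (clients stopped).  A count of
  -- zero means the replication script has drained the queue and it
  -- is safe to run pg_dump.
  SELECT count(*) FROM "Pending";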

Having said that - it would be fairly straightforward to write a
recovery script that avoided these problems by taking note of the
transaction sequence IDs in the replication tables. If I get a chance
I'll look into doing that - it doesn't feel like a huge job!

Pete


Re: Slony v. DBMirror

From: Jeff -
On May 5, 2005, at 5:03 PM, Grant McLean wrote:
>
> Why would you need to take anything down to run pg_dump?  And surely
> bringing a slave up to speed using Slony would be much slower than
> dump/restore?
>

You'd need to stop client access to PG to prevent changes from
occurring between when you take the dump & when you restore on the
slave and then fire up dbmirror.  Although it might work if you
install the dbmirror triggers, then dump & restore.

Slony uses the COPY interface to read/load data.  This is the same
method used by pg_dump so the only throttle will be the network.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/





Re: Slony v. DBMirror

From: Brad Nicholson
Peter Wilson wrote:

> Grant McLean wrote:
>
>> On Thu, 2005-05-05 at 14:16 -0400, Jeff - wrote:
>>
>>> One of the biggest things for Slony is that you can install slony,
>>> set things up and it will bring the slave(s) "up to speed".  You
>>> don't need to do an initial data dump (I think you still need to
>>> load  the schema on the slaves, but not the data).  That is a BIG
>>> win for  us folks who can't take a machine down while pg_dump runs
>>> (and while  it is restored on the slave)
>>
>>
>>
>> Why would you need to take anything down to run pg_dump?  And surely
>> bringing a slave up to speed using Slony would be much slower than
>> dump/restore?
>>
> You don't need to take Postgres down to use pg_dump - it works just fine.
>
> The problem with replication (with DBmirror at least) is that you have
> to create a backup in a very specific order to make sure your new
> backup ends up in sync and transactions are neither replicated more
> than once, or not replicated at all:

Not the case with Slony.  When you subscribe a new set, it does a
copy of the data up to the point in time when you've issued the
subscribe command.  While it's copying data, events to the node are
being logged.  Once the copy is completed, the events are applied, in
the proper order, to bring the set up to date.
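For reference, subscribing a new node is a single slonik command.
This is only a sketch - the set and node IDs are hypothetical:

  subscribe set ( id = 1, provider = 1, receiver = 2, forward = no );

Here node 2 copies set 1 from node 1; "forward = no" means the new
subscriber will not in turn feed other nodes.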

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


Re: Slony v. DBMirror

From: Andrew Sullivan
On Thu, May 05, 2005 at 03:35:27PM +0100, Peter Wilson wrote:
> Looking at Slony now, can someone tell me what the benefits of Slony are
> over DBmirror? As far as I can see:
> + both are async Master->multiple slaves
> + both (I think) can do cascaded replication

This isn't really true of dbmirror.  You can do it, but if you lose
the intermediate system, you also lose the cascaded ones (the
downstream one doesn't know about the state of the top-level origin).
Slony is designed to cover that case (it was one of several criteria
we had to satisfy).

> + neither replicate schema changes

This is sort of false, too.  Slony has a facility for injecting the
schema changes at just the right spot in the replication sets, so
that you really can push your schema changes through Slony.  (This
isn't to say you should regularly do this -- it's designed for
production systems, where schema changes should be relatively rare.)

> + neither do automatic switch-over

Any async replication system which routinely does automatic _fail_
over is, in my opinion, a product not suited for production.  This is
a decision that generally needs to be made at Layer 9 or so -- when
you kill a data source, you are potentially walking away from data.
(Naturally, some special cases will call for such fail over anyway.
It's possible to set up Slony for this, of course, just dangerous.
You'd need some external scripts, but they're not impossible to
build.  There's been a recent discussion of this topic on the slony
lists.)  Slony _does_ have pretty good facilities for controlled
switch over (as well as a good mechanism for fail over, where you
abandon the old origin).  With the right arrangements with pgpool,
you ought to be able to do a controlled switch over of a data origin
without a client outage.  This is part of the system by design.  That
design turns out to be harder than you'd think.

Slony also replicates sequences, has extensive guards against data
loss, and can deal with triggers that need to be working or not
depending on the current use of a table in a set.  The data loss
problem due to unreliable nodes is quite a bit nastier than it might
seem.  The problem isn't just to replicate.  The problem is to
replicate in a provably reliable way.

> All slony seems to offer is a different configuration system and the
> ability to automatically propagate configuration changes. It seems this
> could be added to DBmirror pretty easily so why a whole new project?

We looked at the options when we launched the project, believe me. At
the time, we were using erserver, the development of which we'd also
subsidized (starting in 2001).  We learned a lot from the troubles we
had with that system (some of which were addressed in what is the
current commercial erserver code), and the result was that we
concluded we could not "backport" several of the features we wanted
into either erserver or dbmirror (aside from the
frustrating-but-mostly-works original Java engine in the
first-released community erserver, there isn't much to distinguish
dbmirror and the community erserver).  If you want to see the results
of our investigation, and (by inference) the criteria we used to
decide what would satisfy our requirements, you can see Jan's concept
paper; it's at
<http://developer.postgresql.org/~wieck/slony1/Slony-I-concept.pdf>.

A

--
Andrew Sullivan  | ajs@crankycanuck.ca
This work was visionary and imaginative, and goes to show that visionary
and imaginative work need not end up well.
        --Dennis Ritchie

Re: Slony v. DBMirror

From: Andrew Sullivan
On Fri, May 06, 2005 at 09:01:58AM -0400, Jeff - wrote:
> slave and then fire up dbmirror.  Although it might work if you
> install the dbmirror triggers, then dump & restore.

It's a little trickier than that, but yes, it might work.  Somewhere
in my mail archives, I have a very angry rant about this problem and
erserver (the current community version, approximately).  I looked at
dbmirror, and it took me no time to conclude it had the same issues.
It wouldn't be a problem for everyone.

> Slony uses the COPY interface to read/load data.  This is the same
> method used by pg_dump so the only throttle will be the network.

Not quite, because your schema needs to be complete on the target
system (in particular, you need your unique keys to stay, although
you can get rid of some other indexes to speed things up).
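As a sketch of that speed-up, using the hypothetical table "a" and
index names that appear later in this thread - drop only the
non-unique indexes on the subscriber before subscribing, then rebuild
them once the copy is done:

  -- on the subscriber, before the subscribe; the primary key and
  -- any unique indexes must stay in place
  drop index ii_a;

  -- ... subscribe the set and wait for the copy to complete ...

  create index ii_a on a (txn_date);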

A

--
Andrew Sullivan  | ajs@crankycanuck.ca
The plural of anecdote is not data.
        --Roger Brinner

Re: Slony v. DBMirror

From: Peter Wilson
Andrew Sullivan wrote:
> On Thu, May 05, 2005 at 03:35:27PM +0100, Peter Wilson wrote:
>
>>Looking at Slony now, can someone tell me what the benefits of Slony are
>>over DBmirror? As far as I can see:
>>+ both are async Master->multiple slaves
>>+ both (I think) can do cascaded replication
>
>
> This isn't really true of dbmirror.  You can do it, but if you lose
> the intermediate system, you also lose the cascaded ones (the
> downstream one doesn't know about the state of the top-level origin).
> Slony is designed to cover that case (it was one of several criteria
> we had to satisfy).

That's true. The alternative is simply to replicate from the master to
both slaves, but this places an additional load on the master.
>
>
>>+ neither replicate schema changes
>
>
> This is sort of false, too.  Slony has a facility for injecting the
> schema changes at just the right spot in the replication sets, so
> that you really can push your schema changes through Slony.  (This
> isn't to say you should regularly do this -- it's designed for
> production systems, where schema changes should be relatively rare.)
>
I got the lack of schema changes from the Slony documentation. I guess
there must be some manual intervention though to say 'make these schema
changes now'?

>
>>+ neither do automatic switch-over
>
>
> Any async replication system which routinely does automatic _fail_
> over is, in my opinion, a product not suited for production.  This is
> a decision that generally needs to be made at Layer 9 or so -- when
> you kill a data source, you are potentially walking away from data.

That wasn't meant to be a criticism of either :-)

> (Naturally, some special cases will call for such fail over anyway.
> It's possible to set up Slony for this, of course, just dangerous.
> You'd need some external scripts, but they're not impossible to
> build.  There's been a recent discussion of this topic on the slony
> lists.)  Slony _does_ have pretty good facilities for controlled
> switch over (as well as a good mechanism for fail over, where you
> abandon the old origin).  With the right arrangements with pgpool,
> you ought to be able to do a controlled switch over of a data origin
> without a client outage.  This is part of the system by design.  That
> design turns out to be harder than you'd think.
>
> Slony also replicates sequences, has extensive guards against data
> loss, and can deal with triggers that need to be working or not
> depending on the current use of a table in a set.  The data loss
> problem due to unreliable nodes is quite a bit nastier than it might
> seem.  The problem isn't just to replicate.  The problem is to
> replicate in a provably reliable way.

FYI DBmirror with Postgres version 8 also replicates sequences.

>
>
>>All slony seems to offer is a different configuration system and the
>>ability to automatically propagate configuration changes. It seems this
>>could be added to DBmirror pretty easily so why a whole new project?
>
>
> We looked at the options when we launched the project, believe me. At
> the time, we were using erserver, the development of which we'd also
> subsidized (starting in 2001).  We learned a lot from the troubles we
> had with that system (some of which were addressed in what is the
> current commercial erserver code), and the result was that we
> concluded we could not "backport" several of the features we wanted
> into either erserver or dbmirror (aside from the
> frustrating-but-mostly-works original Java engine in the
> first-released community erserver, there isn't much to distinguish
> dbmirror and the community erserver).  If you want to see the results
> of our investigation, and (by inference) the criteria we used to
> decide what would satisfy our requirements, you can see Jan's concept
> paper; it's at
> <http://developer.postgresql.org/~wieck/slony1/Slony-I-concept.pdf>.
>
> A
>

Fair enough - thanks Andrew. When I get some less busy time I'll have
another go with Slony - you've made a good case for it, even though I
probably don't need it for my applications right now (not running a
domain name registry!). It would probably ease my admin overhead when
things do go wrong though, which is good.

thanks
Pete

Re: Slony v. DBMirror

From: Chris Browne
threshar@torgo.978.org (Jeff -) writes:
> On May 5, 2005, at 5:03 PM, Grant McLean wrote:
>>
>> Why would you need to take anything down to run pg_dump?  And surely
>> bringing a slave up to speed using Slony would be much slower than
>> dump/restore?
>>
>
> You'd need to stop client access to PG to prevent changes from
> occurring between when you take the dump & when you restore on the
> slave and then fire up dbmirror.  Although it might work if you
> install the dbmirror triggers, then dump & restore.
>
> Slony uses the COPY interface to read/load data.  This is the same
> method used by pg_dump so the only throttle will be the network.

There unfortunately is another throttle, at this point.

If you use pg_dump to copy a database from here to there, the
processing takes place thus:

  drop index i_a on a;
  drop index ii_a on a;
  drop table a;

  create table a ( stuff );

  copy a from stdin;
...
\.

  create index i_a on a (id);
  create index ii_a on a (txn_date);

In that arrangement, all the data is copied, then the indexes are
generated.

The Slony-I arrangement presently rearranges it thus:

  drop index i_a on a;
  drop index ii_a on a;
  drop table a;

  create table a ( stuff );

  create index i_a on a (id);
  create index ii_a on a (txn_date);

  copy a from stdin;
...
\.

The indices are constructed concurrently with loading the data, which
isn't nearly as fast as creating the indices afterwards.

Once Slony-I 1.1 is out, I want to start looking at how to regenerate
the indexes rather than "building into them," so as to use the
following approach:

  -- Start with schema complete with indexes

  drop index i_a on a;
  drop index ii_a on a;

  copy a from stdin;
...
\.

  create index i_a on a (id);
  create index ii_a on a (txn_date);

That would indeed improve performance at set creation time.

What I need, for that, is a way of grabbing all the index definitions
for the table.  One way to do that would be to run "pg_dump -s -t a",
though I'd rather have a method that uses the connection I already
have to the database.

This may involve some more-or-less involved queries on pg_index,
unless the pg_indexes view is available on all versions of PG of
interest...
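Where the pg_indexes view does exist, grabbing the definitions is a
one-line query (a sketch, reusing the table "a" from the examples
above):

  SELECT indexdef
    FROM pg_indexes
   WHERE schemaname = 'public'
     AND tablename = 'a';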
--
(format nil "~S@~S" "cbbrowne" "acm.org")
http://www.ntlug.org/~cbbrowne/sap.html
Rules of the Evil Overlord #78.  "I will not tell my Legions of Terror
"And he must  be taken alive!" The command will be:  ``And try to take
him alive if it is reasonably practical.''"
<http://www.eviloverlord.com/>

Re: Slony v. DBMirror

From: Brad Nicholson
Peter Wilson wrote:

> Andrew Sullivan wrote:
>
>> On Thu, May 05, 2005 at 03:35:27PM +0100, Peter Wilson wrote:
>>
>>> Looking at Slony now, can someone tell me what the benefits of Slony
>>> are over DBmirror? As far as I can see:
>>> + both are async Master->multiple slaves
>>> + both (I think) can do cascaded replication
>>
>>
>>
>> This isn't really true of dbmirror.  You can do it, but if you lose
>> the intermediate system, you also lose the cascaded ones (the
>> downstream one doesn't know about the state of the top-level origin).
>> Slony is designed to cover that case (it was one of several criteria
>> we had to satisfy).
>
>
> That's true. The alternative is to simply replicate from master to
> both slaves, but this places an additional load on the master.
>
>>
>>
>>> + neither replicate schema changes
>>
>>
>>
>> This is sort of false, too.  Slony has a facility for injecting the
>> schema changes at just the right spot in the replication sets, so
>> that you really can push your schema changes through Slony.  (This
>> isn't to say you should regularly do this -- it's designed for
>> production systems, where schema changes should be relatively rare.)
>>
> I got the lack of schema changes from the Slony documentation. I guess
> there must be some manual intervention though to say 'make these
> schema changes now'?

This is what you'd want to look at for schema changes:
http://gborg.postgresql.org/project/slony1/genpage.php?slonik_commands#stmt_ddl_script

You want to be absolutely, 100% sure that the schema changes are going to
work on all nodes before firing them through this, because you only get
one shot.  The accepted best practice for doing this is to put the
schema changes in a transaction block that rolls back, run them via
psql against all the nodes, and make sure nothing breaks.
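A minimal sketch of that dry run, with a hypothetical table and column:

  BEGIN;
  -- the same DDL you plan to push through slonik's EXECUTE SCRIPT
  ALTER TABLE a ADD COLUMN note text;
  -- roll back so nothing is committed; this only proves the DDL
  -- runs cleanly on this node
  ROLLBACK;

Run it via psql against each node in turn; if any node errors out, fix
that before running the script for real.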

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


Re: Slony v. DBMirror

From: "Jim C. Nasby"
On Fri, May 06, 2005 at 12:09:14PM -0400, Chris Browne wrote:
> What I need, for that, is a way of grabbing all the index definitions
> for the table.  One way to do that would be to run "pg_dump -s -t a",
> though I'd rather have a method that uses the connection I already
> have to the database.
>
> This may involve some more-or-less involved queries on pg_index,
> unless the pg_indexes view is available on all versions of PG of
> interest...

You'll want to take a look at newsysviews on pgFoundry. Even if you
decide not to depend on it (though hopefully it'll get built into 8.1)
the index views should get you pretty close to what you need for
generating index definition statements. If there's anything missing let
me know.
--
Jim C. Nasby, Database Consultant               decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"

Re: Slony v. DBMirror

From: Andrew Sullivan
On Fri, May 06, 2005 at 05:42:38PM +0100, Peter Wilson wrote:
> I got the lack of schema changes from the Slony documentation.

This seems odd.  I see Brad already told you what to look for; but
putting schema changes in through slonik was _always_ part of the
design.  What's always been true (and what will forever remain so, by
design) is that you can't simply issue ALTER TABLE "n" commands on the
origin for table "n".

> FYI DBmirror with Postgres version 8 also replicates sequences.

Oh, this is interesting.  Glad to hear it.  Can it guarantee that a
sequence won't "roll back" in some really bad case, if you fail over
to the target?  Figuring out how to do that was one of Jan's homework
projects, IIRC. ;-)

A

--
Andrew Sullivan  | ajs@crankycanuck.ca
I remember when computers were frustrating because they *did* exactly what
you told them to.  That actually seems sort of quaint now.
        --J.D. Baldwin