[Joshua D. Drake - Sat at 08:14:51AM -0700]
> 1. Slony - Trigger based
I don't know ... I'm not overly happy with the idea of using triggers
for replication. I'm a bit concerned about the overhead, both on the
primary database server and the administrative cost of setting it up
and maintaining it. Still, Slony really seems to be the most
recommended route for postgres replication now, so I hope my concerns
are unfounded.
> 2. Mammoth Replicator - Transaction Log based
Hm ... I don't really like the smell of proprietary binary code.
Anyway, I was really happy with the MySQL update log ... it was such a
simple multi-purpose tool; it could be used for debugging, for rescue
after a power outage or sysadmin mistakes, simple PITR, and much more.
The idea of using transaction logs for replication is appealing: the
replication logic is done outside the database itself, without
interfering with it, and without significant resource consumption on
the master server (since the database has to produce and write those
transaction logs anyway), etc.
> 3. PITR log shipping on timed intervals
We already do that (archive_command), but it's not exactly a hot
standby failover solution. Even if we automated everything, it would
still take quite a long time to get the replica up and running. Also,
some transactions would be lost, even if we rsynced the WAL files
constantly.
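For reference, our setup amounts to roughly the following conf line (a
minimal sketch; the standby hostname and archive path are hypothetical,
%p and %f are the standard placeholders for the WAL file's path and name):

```
# postgresql.conf on the master
archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'
```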
I don't think any of the solutions above satisfies my immediate need
right now: stress testing. We currently have a twin brother of our
database server standing idle. My boss wants some numbers on how much
more traffic our database server can handle, and I want to know what
effect I'll get from tuning the postgres configuration. The best way I
can think of to measure this is to log all our transactions (including
read-only and rolled-back transactions) for a while, and "play them"
on the replica server at increased speed ...
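The "play them at increased speed" part could be sketched roughly like
this. This is just a sketch under assumptions of my own: a captured log
where each line is "<epoch seconds> <statement>", a SPEEDUP factor, and
an execute() hook are all made up here — in practice the statements
would come from something like log_statement = 'all' and be fed to the
replica over a real client connection:

```python
#!/usr/bin/env python3
"""Replay a captured statement log against a replica at increased speed.

The log format ("<epoch seconds> <statement>") and the execute() hook
are hypothetical; the point is only the timing compression.
"""
import time

SPEEDUP = 2.0  # play back twice as fast as the original traffic


def replay(lines, execute, speedup=SPEEDUP, sleep=time.sleep):
    """Feed logged statements to execute(), shrinking the original
    inter-statement gaps by the given speedup factor."""
    prev_ts = None
    for line in lines:
        ts_str, stmt = line.split(" ", 1)
        ts = float(ts_str)
        if prev_ts is not None:
            # wait a fraction of the original gap before the next statement
            sleep(max(ts - prev_ts, 0.0) / speedup)
        prev_ts = ts
        execute(stmt.rstrip("\n"))


# Example run with a fake log and a recording "executor" instead of a
# live server: collect the computed gaps instead of actually sleeping.
log = ["0.0 SELECT 1;", "1.0 SELECT 2;", "3.0 SELECT 3;"]
gaps, ran = [], []
replay(log, ran.append, speedup=2.0, sleep=gaps.append)
# the original 1.0s and 2.0s intervals come out halved: [0.5, 1.0]
```

Injecting the sleep and execute callables keeps the scheduling logic
testable without a live server; a real run would pass in something
backed by psql or a client library instead.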