Thread: Slony1 or DRBD for replication?
Hello,

I want to replicate my PostgreSQL database at another location. The
distance between the two locations is around 10 miles, and the link
will be a dedicated fast Ethernet link.

What would you suggest: DRBD or Slony1 for PostgreSQL replication?

Thank you.
On Fri, 2006-04-14 at 14:56 +0200, Pierre LEBRECH wrote:
> What would you suggest: DRBD or Slony1 for PostgreSQL replication?

It depends on your needs.

If you want to be able to use the slave PostgreSQL instance (reporting, non-replicated namespaces, materialized views, etc.): Slony or Mammoth Replicator.

If you want to also replicate users/groups and GRANT/REVOKE: Mammoth Replicator.

If you just want a hot backup: DRBD.

Joshua D. Drake
-- 
The PostgreSQL Company: Command Prompt, Inc. -- http://www.commandprompt.com/
Joshua D. Drake wrote:
> It depends on your needs.
> [...]
> If you just want a hot backup: DRBD.

The second location is to be used in case of emergency: if my first machine/system becomes unreachable for whatever reason, I want to be able to switch very quickly to the other machine. Of course, the goal is to have no loss of data. That is the context.

Furthermore, I have experience with DRBD (though not with databases), and I do not know whether DRBD would be the best way to solve this replication problem.

Thanks for any suggestions and explanations.

PS: my database is currently in production in a critical environment.
In the last exciting episode, pierre.lebrech@laposte.net (Pierre LEBRECH) wrote:
> Thanks for any suggestions and explanations.

A third possibility would be PITR, new in version 8, if the point is to have recovery from a big failure. You'd periodically copy the whole DB and continually copy WAL files across the wire... See the PG docs; there's a whole chapter on it...

-- 
cbbrowne@gmail.com
http://linuxdatabases.info/info/spreadsheets.html
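For concreteness, the continuous WAL copying described above is driven by archive_command on the primary and restore_command on the standby. A minimal sketch, assuming hypothetical hosts and paths (any reliable copy command will do):

    # postgresql.conf on the primary (8.x) -- host and path are illustrative
    archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'

    # recovery.conf on the standby when it is brought up
    restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'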
On Fri, Apr 14, 2006 at 07:42:29PM +0200, Pierre LEBRECH wrote:
> The second location is to be used in case of emergency. [...] Of course, the goal is to have no loss of data.

I believe that Continuent currently has the only no-loss (i.e. synchronous) replication solution. DRBD might allow for this as well, if it can be set up to not return from fsync until the data has been replicated.

-- 
Jim C. Nasby, Sr. Engineering Consultant -- jnasby@pervasive.com
Pervasive Software -- http://pervasive.com
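To make the fsync point concrete: with DRBD, synchronous behavior comes from running the resource with protocol C, under which a write is not acknowledged until the peer has it on disk. A minimal drbd.conf sketch, with invented hostnames, devices, and addresses:

    resource r0 {
      protocol C;              # write completes only after the peer has the data on disk
      on db-primary {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on db-standby {
        device    /dev/drbd0;
        disk      /dev/sda7;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }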
Thread: Howto: Using PITR recovery for standby replication
I am running PostgreSQL 8.1.3 on Windows.
The project itself is a real-time data acquisition system, so it cannot be taken offline for backups.
I have tried using pg_dump, but discovered that the backup was not a consistent backup. The application currently inserts about 1 million rows per day (this will ramp to about 5 million when in full production). All of the insertion of data is controlled by a master stored procedure which inserts rows into a raw log and dynamically aggregates data into ancillary tables, which enable us to see statistical data of the systems being monitored without having to mine the raw data.
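For illustration only, the insert-and-aggregate pattern described above might look roughly like this in PL/pgSQL; the table, column, and function names are invented, as the actual procedure is not shown in the thread:

    -- hypothetical sketch of a master logging procedure (8.1-era PL/pgSQL)
    CREATE OR REPLACE FUNCTION log_sample(p_host integer, p_value numeric)
    RETURNS void AS $$
    BEGIN
        -- raw log of every sample
        INSERT INTO raw_log (host_id, value, ts) VALUES (p_host, p_value, now());
        -- dynamic aggregation into an ancillary statistics table
        UPDATE daily_stats
           SET n_samples = n_samples + 1, total = total + p_value
         WHERE host_id = p_host AND day = current_date;
        IF NOT FOUND THEN
            INSERT INTO daily_stats (host_id, day, n_samples, total)
            VALUES (p_host, current_date, 1, p_value);
        END IF;
    END;
    $$ LANGUAGE plpgsql;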
The original prototype of this system was running under MS SQL Server 2000, but once PostgreSQL 8.1 was released I decided to port it. The biggest challenge which I have right now is to ensure that we can have data recovery in case of a catastrophic failure in the primary system, with the ability to load a "cold spare".
Back to the problem I faced when testing backups with pg_dump: it appears that the backup was not a consistent backup of the data. For example, sequences which are used by some tables no longer held the correct values (the tables now held higher values), and this would indicate to me that the backup of an aggregate table may not match the underlying raw data which created it.
As such, my only option is to create a "hot backup" using PITR. I would like to know if the following scenario would work:
The same version of PostgreSQL is loaded on a secondary server, with the PostgreSQL service on the second box not running. I would issue a pg_start_backup, copy the database directory to the second box, and issue a pg_stop_backup. I would then delete the WAL logs from the secondary box's pg_xlog and copy the archived WALs, as well as the current WAL, to the secondary pg_xlog location.
I could then back up the snapshot from the secondary box to lesser media for archival purposes, and in the event of a problem, I would simply start the service on the secondary box.
Is this a workable solution? Or, better yet, could the secondary be live and, after the initial backup and restore from the main box, could replication be accomplished by somehow moving the new archived logs to the secondary box, thereby creating a timed replication (for example, every hour we could create another backup and just move the WAL files over, since the state of the secondary database should reflect the state of the previous backup)?
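Under the assumptions above, the base-backup step might look roughly like this from the primary's command line (the Windows share name, target path, and backup label are invented):

    psql -U postgres -d events -c "SELECT pg_start_backup('standby_seed');"
    xcopy "C:\Program Files\PostgreSQL\8.1\data" "\\standby\pgdata" /E /I
    psql -U postgres -d events -c "SELECT pg_stop_backup();"

A copy taken between pg_start_backup and pg_stop_backup is only usable once the WAL segments generated during (and after) the copy have been replayed into it, which is why the archived WALs must be carried over as well.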
While I absolutely love PostgreSQL, and together with some of the add-ons (pgAdmin, pgAgent, the add-ons from EMS) there is almost nothing missing, the relative difficulty of backing up / restoring vis-a-vis the commercial solutions is frustrating. It is not that this is a PostgreSQL problem so much as a learning curve, but until I get this working satisfactorily I am a bit worried.
As always, any insight and assistance will be deeply appreciated.
Regards,
Benjamin
"Benjamin Krajmalnik" <kraj@illumen.com> writes: > I have tried using pg_dump, but discovered that the backup was not a = > consistent backup. Really? > Back to the problem I faced when testing backups with pg_dump, it = > appears that the backup was not a consistent backup of the data. For = > example, sequences which are used by some tables bo longer held the = > correct values (the tables now held higher values), Sequences are non-transactional, so pg_dump might well capture a higher value of the sequence counter than is reflected in any table row, but there are numerous other ways by which a gap can appear in the set of sequence values. That's not a bug. If you've got real discrepancies in pg_dump's output, a lot of us would like to know about 'em. regards, tom lane
Benjamin Krajmalnik wrote:
> The particular table which was problematic (and for which I posted
> another message due to the unique constraint violation which I am
> seeing intermittently) is the one with the high insertion rate. The
> sequence is currently being used to facilitate purging of old records.

How are you creating the dumps of the sequence and the table? If you do both separately (as in two pg_dump invocations with a -t switch each), that could explain your problem. This shouldn't really happen, however, because the sequence dump should be emitted in a dump of the table, if the field is really of SERIAL or BIGSERIAL type. Beyond that, I don't see any other way which would make the sequence go out of sync.

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
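To illustrate the difference (table names invented): each pg_dump invocation takes its own snapshot, so two separate runs can disagree with each other, while a single run is internally consistent:

    # two snapshots at two different moments -- the files can disagree:
    pg_dump -U postgres -t raw_log events > raw_log.sql
    pg_dump -U postgres -t daily_stats events > daily_stats.sql

    # one snapshot of the whole database -- internally consistent:
    pg_dump -U postgres -F c -f events.backup events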
Benjamin Krajmalnik wrote:
> I am a newbie, so I essentially invoked pg_dump from within pgAdmin3,
> with the defaults (including large objects). This is the command
> being issued:
>
> C:\Program Files\PostgreSQL\8.1\bin\pg_dump.exe -i -h 172.20.0.32 -p 5432 -U postgres -F c -b -v -f "C:\Documents and Settings\administrator.MS\testbk.backup" events
>
> What I assumed was happening (and I may have very well been wrong) was
> that I was getting a consistent backup of the object at the time that
> it was processed, but not the database as a whole.

This command should produce a consistent dump of all the objects in the database. (Not a consistent view of each object in isolation, which is AFAIU what you are saying.)

Next question is: how are you restoring this dump?

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
From: Benjamin Krajmalnik

pg_restore: connecting to database for restore
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
pg_restore: creating TABLE appointments
pg_restore: executing SEQUENCE SET appointments_id_seq
pg_restore: restoring data for table "appointments"
pg_restore: setting owner and privileges for SCHEMA public
pg_restore: setting owner and privileges for COMMENT SCHEMA public
pg_restore: setting owner and privileges for ACL public
pg_restore: setting owner and privileges for PROCEDURAL LANGUAGE plpgsql
pg_restore: setting owner and privileges for TABLE appointments
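For reference, verbose output like the above would come from a pg_restore invocation along these lines (the target database name is illustrative):

    pg_restore -U postgres -v -d events_restored "C:\Documents and Settings\administrator.MS\testbk.backup"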