Thread: Backing up a Postgres Database
Will the postgres community ever consider creating a decent backup capability? Currently the way to create backups is through a cron job. For Postgres to ever be considered a true production system, some sort of transactional tracing has to be done. Otherwise you risk the potential of losing quite a bit of data.

Dave Johnson
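For context, the cron-based approach being discussed usually amounts to a crontab entry like the following (the database name, path, and schedule here are illustrative assumptions, not from the thread):

```shell
# Hypothetical nightly dump at 02:30; note that % must be escaped in crontab.
# m  h  dom mon dow  command
  30 2  *   *   *    pg_dump mydb | gzip > /var/backups/mydb-$(date +\%Y\%m\%d).sql.gz
```

This captures a consistent logical snapshot once a day, which is exactly the limitation the original question is about: anything committed after the last dump is lost on failure.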
D Johnson wrote:
> Will the postgres community ever consider creating a decent backup
> capability. [...]

Who says that fancy user interfaces to backup services and schedule managers do anything other than set up a big, big crontab file?? This is just what ADABAS or SAPDB do (and we should consider R/3-enabled DBs production systems!?).

Hans

---------------------------------------------------------------------
PD Dr. Hans C Waldmann
Methodology & Applied Statistics in Psychology & the Health Sciences
ZFRF / University of Bremen / Grazer Str 6 / 28359 Bremen / Germany
waldmann@samson.fire.uni-bremen.de / http://samson.fire.uni-bremen.de
friend of: AIX PERL POSTGRES ADABAS SAS TEX
---------------------------------------------------------------------
On Wed, Jan 10, 2001 at 05:57:21AM -0600, D Johnson wrote:
> Will the postgres community ever consider creating a decent backup
> capability. [...] Otherwise you risk the potential of losing quite
> a bit of data.

You can take a snapshot of the database device (while the database is down), and back up from the snapshot to avoid this problem.

You need a volume manager that supports snapshots, but AFAIK most do.

--
Ragnar Kjørstad
BigStorage
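With a volume manager such as LVM, the snapshot approach Ragnar describes might look roughly like this (volume-group names, data directory, and mount points are all assumptions; treat this as a sketch, not a tested procedure):

```shell
# Stop the database so the on-disk files are in a consistent state.
pg_ctl stop -D /var/lib/pgsql/data

# Create a copy-on-write snapshot of the logical volume holding the data.
lvcreate --snapshot --size 1G --name pgsnap /dev/vg0/pgdata

# Bring the database back up; downtime is only the few seconds above.
pg_ctl start -D /var/lib/pgsql/data

# Back up at leisure from the frozen snapshot, then drop it.
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap
tar czf /backup/pgdata-snapshot.tar.gz -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap
```

The point of the snapshot is that the long-running tar happens against an unchanging image, so the database itself is only down for the instant the snapshot is taken.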
Does this provide true "point of failure" recovery? This sounds like no more than a cold backup, which does not provide "point of failure" recovery. I think the original question is very valid. Postgres does not, to my knowledge, support transaction logging, which is necessary for this style of recovery.

In Oracle, you restore the data files from a previous backup and then re-apply the transaction (archive) logs, a process called "rolling forward"; then you can open the database for use, and it is in the state just prior to the failure. I've seen some creative dialogue on this list about writing to multiple database instances to have a live backup, and some regarding logging each SQL statement, but the introduction of a transaction archiver into the engine itself would make this process much easier and make Postgres more attractive to sites currently using the major commercial database packages, IMHO.

Let me know if any of this is blatantly incorrect.

Tim White

Ragnar Kjørstad wrote:
> You can take a snapshot of the database-device (while the database is
> down), and backup from the snapshot to avoid this problem. [...]
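The roll-forward idea can be illustrated with a toy file-level simulation (a hypothetical sketch of the principle only; this is not how Oracle or Postgres actually store their logs): the "database" is a file, the base backup is a copy of it, and every committed change is appended to a log that can be re-applied after a crash.

```shell
#!/bin/sh
set -e
db=/tmp/toydb.txt
base=/tmp/toydb.base
log=/tmp/toydb.log

# Initial state, a base backup of it, and an empty change log.
printf 'balance=100\n' > "$db"
cp "$db" "$base"
: > "$log"

# Each "committed transaction" goes to the log before the data file.
commit() {
    echo "$1" >> "$log"     # write-ahead: log first...
    echo "$1" >> "$db"      # ...then apply to the data file
}
commit "balance=150"
commit "balance=90"

# Simulate a crash that destroys the data file (but spares the log).
rm "$db"

# Recovery: restore the base backup, then roll forward from the log.
cp "$base" "$db"
while IFS= read -r change; do
    echo "$change" >> "$db"
done < "$log"

tail -n 1 "$db"   # → balance=90  (last committed state survives)
```

Even though the base backup only knew about `balance=100`, replaying the log recovers every transaction committed after it was taken, which is exactly what a nightly pg_dump alone cannot do.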
Yes, the first method will not be valid because you cannot just take down your system anytime you need to do a backup; plus it's risky anytime you stop and start a large server application that manages data. The pg_dump method would be better.

I have worked extensively with Oracle database apps that collect data 24/7. This test data is highly critical, and you need to have a transactional logging system with recovery and roll-forward capability, as Tim indicated. We have saved our rear ends a number of times thanks to Oracle's archiving.

I was wondering about pg_dump and transactional recovery, and making it similar to Oracle's RMAN recovery app and recovery database. This would probably make Postgres robust in that it would provide good integrity and allow you to develop drivers for various backup devices, e.g. tape drives.

Dave Johnson

Tim White wrote:
> In Oracle, you restore the data files from a previous backup and then
> re-apply the transaction (archive) logs, a process called "rolling
> forward"; then you can open the database for use, and it is in the
> state just prior to the failure. [...]
On Thu, Jan 11, 2001 at 08:57:26AM -0600, Tim White wrote:
> In Oracle, you restore the data files from a previous backup and then
> re-apply the transaction (archive) logs, a process called "rolling
> forward", then you can open the database for use, and it is in the
> state just prior to the failure.

And where do you get this transaction log?

hmmmm
let me guess:

From a backup?

mrc
--
Mike Castle       Life is like a clock: You can work constantly
dalgoda@ix.netcom.com     and be right all the time, or not work at all
www.netcom.com/~dalgoda/  and be right at least twice a day.  -- mrc
We are all of us living in the shadow of Manhattan.  -- Watchmen
On Thu, Jan 11, 2001 at 08:57:26AM -0600, Tim White wrote:
> Does this provide true "point of failure" recovery? This sounds like no
> more than a cold backup, which does not provide "point of failure"
> recovery.

Yes, this is only for regular backup (but it doesn't require a long downtime for your DB server).

--
Ragnar Kjørstad
BigStorage
On Mon, Jan 15, 2001 at 07:06:41AM -0600, Tim White wrote:
> Yes, you do restore the archive logs from some backup medium. But I don't
> understand your point; the discussion relates to transaction logging,
> meaning that as DML statements are executed and committed they are
> logged, thus making it possible to re-apply them.

Then perhaps it's a lack of understanding on my part. Define "point of failure recovery."

I'm imagining something like a system crash of some sort (maybe an internal database error, and the database is corrupted somehow). So you reinstall the latest backup. Then you apply the transaction log. Is that what you're saying?

Well, if the system crashed, how can you guarantee the integrity of the transaction log in order to re-apply it? If the database system crashed due to an internal error and corrupted the database files, how do you know the transaction log wasn't corrupted? If the OS crashed and corrupted the database, how do you know the transaction log wasn't corrupted?

The only time I could see how you could guarantee that a transaction log was not corrupt is if it's on separate physical media and you had a media failure. Which can largely be overcome by using something like RAID anyway.

Can you give me a scenario where a transaction log is actually useful and you can guarantee it is not corrupt?

mrc
Yes, you do restore the archive logs from some backup medium. But I don't understand your point; the discussion relates to transaction logging, meaning that as DML statements are executed and committed they are logged, thus making it possible to re-apply them. To my knowledge Postgres does not do transaction logging; this makes it nearly impossible to do point-of-failure recovery without doing a pg_dump after every transaction, which is rather impractical.

Tim White

Mike Castle wrote:
> And where do you get this transaction log?
>
> hmmmm
> let me guess:
>
> From a backup?