Re: Pg_dumpall - Mailing list pgsql-general

From Andrew Gould
Subject Re: Pg_dumpall
Date
Msg-id 20030611211839.45075.qmail@web13407.mail.yahoo.com
In response to Re: Pg_dumpall  (<btober@seaworthysys.com>)
List pgsql-general
--- btober@seaworthysys.com wrote:
>
> > I have cron execute a Python script as the database administrator to
> > vacuum and back up all databases.  Rather than dump all databases at
> > once, however, the script performs a 'psql -l' to get a current list
> > of databases.  Each database is dumped and piped into gzip for
> > compression into its own backup file.
> >
> > I should also mention that the script renames all previous backup
> > files, all ending in *.gz, to *.gz.old, so that they survive the
> > current pg_dump.  Of course, you could change the script to put the
> > date in the file name so as to keep unlimited backup versions.
>
> FWIW, another good way to handle the last paragraph would be to use
> logrotate.  It would handle renaming files as *.1, *.2, ..., you could
> specify the number of days you wanted it to retain, and you wouldn't
> have to go in periodically and delete ancient backups so that your
> drive doesn't fill up.
>

Thanks, I think I'll modify the script to manage a declared number of
backups, as described above; a rough sketch of that change is below.
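
Something along these lines, where the backup path, the KEEP count, and
the exact psql call are placeholders rather than what the real script
does:

#!/usr/bin/env python
# Sketch only: dump each database to a dated, gzipped file and keep the
# most recent KEEP dumps per database.  Paths and KEEP are made up here.
import glob
import os
import subprocess
import time

BACKUP_DIR = "/usr/local/pgsql/backups"   # illustrative location
KEEP = 7                                   # declared number of backups to retain

# Ask the server for the current database list (template0 can't be connected to).
names = subprocess.check_output(
    ["psql", "-At", "-c",
     "SELECT datname FROM pg_database WHERE datname <> 'template0';"])
databases = names.decode().split()

stamp = time.strftime("%Y%m%d")
for db in databases:
    # Vacuum first, then dump through gzip into a dated file for this database.
    subprocess.check_call(["vacuumdb", db])
    target = os.path.join(BACKUP_DIR, "%s-%s.gz" % (db, stamp))
    with open(target, "wb") as out:
        dump = subprocess.Popen(["pg_dump", db], stdout=subprocess.PIPE)
        subprocess.check_call(["gzip", "-c"], stdin=dump.stdout, stdout=out)
        dump.wait()

    # Keep only the KEEP newest dumps for this database; remove the rest.
    dumps = sorted(glob.glob(os.path.join(BACKUP_DIR, "%s-*.gz" % db)),
                   key=os.path.getmtime, reverse=True)
    for stale in dumps[KEEP:]:
        os.remove(stale)

The date stamp in the file name replaces the *.gz.old renaming, and the
pruning loop is what keeps the number of retained dumps bounded.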

Logrotate sounds like FreeBSD's newsyslog.  The reason I don't use it is
that I would have to configure each database's backup file individually,
whereas the Python script adds new databases and their backup files to
the process automatically.  This is one of those "if I get hit by a bus"
features.  As my databases do not have IS support, my boss insists on
contingency planning.
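
That said, for anyone who does want the logrotate route, a minimal entry
might look roughly like this (the path and retention count are only
illustrative):

/usr/local/pgsql/backups/*.gz {
    daily
    rotate 7
    missingok
    nocompress
}

With something like that in place, logrotate keeps the seven most recent
copies as *.gz.1 through *.gz.7 and ages out older ones on its own, so
nothing has to be deleted by hand.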

Best regards,

Andrew Gould
