Thread: pg_dumpall
I'm looking for an unattended way (a cron job) to dump all the databases. We have both 7.3.3 and 7.0.2 (Cobalt RaQ) to do this for. I cannot grant a certain user permission to dump the databases in the 7.0.2 pg_hba.conf file, and if I do this for our 7.3.3, does it mean anyone at the command prompt can access the databases using the '-U' option without being asked for a password? What are most on this list doing for automated dumps? -- Robert
On Sun, Jun 08, 2003 at 22:04:26 -0400, Robert Fitzpatrick <robert@webtent.com> wrote:
> I'm looking for an unattended way (a cron job) to dump all the databases.
> We have both 7.3.3 and 7.0.2 (Cobalt RaQ) to do this for. [...] What are
> most on this list doing for automated dumps?

In 7.3.3 you can use ident authentication for local (domain socket) connections on some OSes. This is a good way to do this kind of thing.
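For the 7.3.3 server, the relevant pg_hba.conf entry might look roughly like this (a sketch of the 7.3-era format; verify the column layout against the documentation shipped with your installation):

```
# Trust the operating-system user name on the Unix-domain socket.
# TYPE   DATABASE   USER   METHOD   [OPTION]
local    all        all    ident    sameuser
```

With 'sameuser', the OS account running the cron job can connect only as the database user of the same name, so pg_dump can run unattended without a stored password, while other accounts still cannot impersonate it via '-U'.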
I'm the only one with access to our database server, so I do a full trust.

Jon

On Tue, 10 Jun 2003, Bruno Wolff III wrote:
> In 7.3.3 you can use ident authentication for local (domain socket)
> connections on some OSes. This is a good way to do this kind of thing.
I have cron execute a Python script as the database administrator to vacuum and back up all databases. Rather than dump all databases at once, however, the script runs 'psql -l' to get a current list of databases. Each database is dumped and piped into gzip for compression into its own backup file.

I should also mention that the script renames all previous backup files, all ending in *.gz, to *.gz.old, so that they survive the current pg_dump. Of course, you could change the script to put the date in the file name so as to keep unlimited backup versions.

If you'd like the script, let me know.

Best of Luck,

Andrew L. Gould

--- Robert Fitzpatrick <robert@webtent.com> wrote:
> I'm looking for an unattended way (a cron job) to dump all the databases.
> We have both 7.3.3 and 7.0.2 (Cobalt RaQ) to do this for. [...] What are
> most on this list doing for automated dumps?
> I have cron execute a Python script as the database administrator to
> vacuum and backup all databases. [...] Each database is dumped and piped
> into gzip for compression into its own backup file.
>
> I should also mention that the script renames all previous backup files,
> all ending in *.gz, to *.gz.old; so that they survive the current
> pg_dump. Of course, you could change the script to put the date in the
> file name as to keep unlimited backup versions.

FWIW, another good way to handle the last paragraph would be to use logrotate. It would handle renaming the files as *.1, *.2, ..., you can specify how many generations you want it to retain, and you don't have to go in periodically and delete ancient backups so that your drive doesn't fill up. (US$0.02)

~Berend Tober
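A logrotate entry for such backup files might look like this (the directory and retention count are illustrative assumptions):

```
/var/backups/pgsql/*.gz {
        weekly
        rotate 4
        nocompress
        missingok
        notifempty
}
```

'nocompress' because the dumps are already gzipped; 'rotate 4' keeps four generations per file before the oldest is deleted automatically.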
--- btober@seaworthysys.com wrote:
> FWIW, another good way to handle the last paragraph would be to use
> logrotate. It would handle renaming files as *.1, *.2, ... and you could
> specify how many generations you wanted it to retain, and you don't have
> to go in periodically and delete ancient backups so that your drive
> doesn't fill up.

Thanks, I think I'll modify the script to manage a declared number of backups as described above. Logrotate sounds like FreeBSD's newsyslog.conf. The reason I don't use that is that I would have to configure each database's backup file; the Python script adds new databases and backup files to the process automatically. This is one of those "if I get hit by a bus" features. As my databases do not have IS support, my boss insists on contingency planning.

Best regards,

Andrew Gould
I have the same thing in PHP if someone needs it. I wrote it originally to get around the limits of 7.3's pg_dumpall when run against our 7.2 production database, and then got carried away and did the same thing as you: made it dump them into separate files. For folks with lots of databases (we create a different database for each discrete application) it's nice to have individual database dumps instead of the one huge file that pg_dumpall makes. Now that it's getting rewritten in C, it might be a good time to add the ability to create a file per database...

On Tue, 10 Jun 2003, Andrew Gould wrote:
> I have cron execute a Python script as the database administrator to
> vacuum and backup all databases. [...] Each database is dumped and piped
> into gzip for compression into its own backup file. [...]
On Fri, Jun 13, 2003 at 02:20:45PM -0600, scott.marlowe wrote:

hi,

> I have the same thing in PHP if someone needs it. I wrote it originally
> to get around the limits of 7.3's pg_dumpall to hit against our 7.2
> production database, and then got carried away and did the same thing as
> you, made it dump them into seperate files.

I'd be more interested in a schema builder in PHP, which generates all CREATE statements from a hash array structure, checks the results, can cope with schema changes (i.e. a field renamed or fields added), and reports problems. I'm currently using such array structures for my database abstraction layer, which handles whole objects (instead of relations), allows syncing, ...

    $class_desc[CLSID_PERSON] = array (
        'type'        => 'inode',
        'table'       => 'person',
        'properties'  => array (
            'firstname' => array ( 'type' => 'text', 'not_null' => 1 ),
            'lastname'  => array ( 'type' => 'text', 'not_null' => 1 ),
            'gender'    => array ( 'type' => 'char' ),
            'birth'     => array ( 'type' => 'date' ),
            'death'     => array ( 'type' => 'date' ),
            'score'     => array ( 'type' => 'integer' )
        ),
        'unique_keys' => array (
            array ( 'firstname' => 1, 'lastname' => 1 )
        )
    );

The generated schema could now look like this:

    CREATE TABLE person (
        "firstname" text,
        "lastname"  text,
        "gender"    character,
        "birth"     date,
        "death"     date,
        "score"     integer
    ) INHERITS ( "_inode" );   -- this comes from type = 'inode'

    CREATE UNIQUE INDEX person_ui0 ON person ( inode_id );
    CREATE UNIQUE INDEX person_ui1 ON person ( firstname, lastname );

...

cu
--
---------------------------------------------------------------------
Enrico Weigelt == metux ITS
Webhosting from 5 EUR/month. UUCP, raw IP, and much more.
phone: +49 36207 519931     www:     http://www.metux.de/
fax:   +49 36207 519932     email:   contact@metux.de
cellphone: +49 174 7066481  smsgate: sms.weigelt@metux.de
---------------------------------------------------------------------
This mail was sent via UUCP. http://www.metux.de/uucp/
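The same idea, translated into a minimal Python sketch (the dict structure and type names mirror the example above; this is illustrative, not Enrico's actual code, and it only covers the CREATE TABLE step, not change detection):

```python
# Class description mirroring the $class_desc example from the thread.
PERSON = {
    "type": "inode",
    "table": "person",
    "properties": {
        "firstname": {"type": "text", "not_null": 1},
        "lastname":  {"type": "text", "not_null": 1},
        "gender":    {"type": "char"},
        "birth":     {"type": "date"},
        "death":     {"type": "date"},
        "score":     {"type": "integer"},
    },
    "unique_keys": [{"firstname": 1, "lastname": 1}],
}

# Map the abstract type names to SQL type names.
SQL_TYPES = {"text": "text", "char": "character",
             "date": "date", "integer": "integer"}

def create_table(desc: dict) -> str:
    """Build a CREATE TABLE statement from a class description."""
    cols = []
    for name, prop in desc["properties"].items():
        col = '"%s" %s' % (name, SQL_TYPES[prop["type"]])
        if prop.get("not_null"):
            col += " NOT NULL"
        cols.append(col)
    stmt = "CREATE TABLE %s ( %s )" % (desc["table"], ", ".join(cols))
    if desc.get("type") == "inode":
        stmt += ' INHERITS ( "_inode" )'
    return stmt + ";"
```

Coping with renames or added fields would mean diffing this description against the live catalog (pg_attribute / information tables) and emitting ALTER statements, which is where the real work lies.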