Thread: database backup

database backup

From: Sanjeev Rathore
Hi,

I am looking into how I should back up my PostgreSQL
database.  Is there a way to do an incremental backup?  I
think an incremental backup would be a better solution
in the long run, because it is more scalable.  Does
anybody have any suggestions as to how I should go
about backing up my database?

thanks,
Sanjeev




Re: database backup

From: Darren Ferguson
If you are using a Unix environment, you can use cron together with
pg_dump to produce a complete backup of the database.

See pg_dump --help for a complete list of options.

The shell script run by the cron job can then call pg_dump with the
relevant options.

HTH

On Sat, 6 Jul 2002, Sanjeev Rathore wrote:

> Hi,
>
> I am looking into how I should back up my PostgreSQL
> database.  Is there a way to do an incremental backup?  I
> think an incremental backup would be a better solution
> in the long run, because it is more scalable.  Does
> anybody have any suggestions as to how I should go
> about backing up my database?
>
> thanks,
> Sanjeev
>

--
Darren Ferguson




Re: database backup

From: Josh Jore
Well... the standard tools don't provide for that. Perhaps someone else
has written other tools that implement incrementals, but I doubt it.
Perhaps some of the changes that are due for 7.3, in point-in-time recovery,
will provide for that. In the meantime you're going to take full
snapshots and like it.

Here's a copy of my own automated backup script. You'll note that it
does each database separately: first the schema, then the data + blobs,
and lastly the cluster's globals.

#!/bin/sh
rm -rf /tmp/mkiso-data.*
TMPDIR=`mktemp -d /tmp/mkiso-data.XXX`

# Create the cd staging subdirectory inside the temp directory
mkdir $TMPDIR/cd

cd /
tar cpvXf - \
    / \
    /home/[d-kq-z]* \
    /var/[a-lnq-z]* \
    /var/mail \
    /var/msgs \
    /var/preserve \
    | gzip -c9 > $TMPDIR/cd/data.tgz

# Backup PostgreSQL separately

# dump each database schema/data separately
su -l postgresql -c "psql -At -F ' ' -U postgresql -d template1 <<__END__
SELECT datname FROM pg_database WHERE datallowconn;
__END__
" | while read DB; do
    echo "PostgreSQL db $DB"
    mkdir -p $TMPDIR/cd/postgres/$DB

    # schema
    su -l postgresql -c "pg_dump -Cs -F c -Z 9 -S postgresql $DB" \
         > $TMPDIR/cd/postgres/$DB/schema.pg

    # data
    su -l postgresql -c "pg_dump -bd -F c -Z 9 -S postgresql $DB" \
         > $TMPDIR/cd/postgres/$DB/data.pg
done
# dump all globals (users/groups)
su -l postgresql -c "pg_dumpall -g" \
 > $TMPDIR/cd/postgres/globals.sql


# Backup MySQL separately


cd $TMPDIR/cd
mkisofs -o $TMPDIR/image.iso -v . 1> /dev/null
cd $TMPDIR
rm -rf $TMPDIR/cd

/root/bin/burnimage $TMPDIR/image.iso

echo Bootable system cd is at $TMPDIR/image.iso

Joshua b. Jore ; http://www.greentechnologist.org

On Sat, 6 Jul 2002, Sanjeev Rathore wrote:

> Hi,
>
> I am looking into how I should back up my PostgreSQL
> database.  Is there a way to do an incremental backup?  I
> think an incremental backup would be a better solution
> in the long run, because it is more scalable.  Does
> anybody have any suggestions as to how I should go
> about backing up my database?
>
> thanks,
> Sanjeev
>




Re: database backup

From: Doug Fields
At 10:42 PM 7/6/2002, Darren Ferguson wrote:
>If you are using a Unix environment then you should use cron and then
>pg_dump in order to produce a complete backup of the database.
>
>See the pg_dump --help for complete list of commands
>
>Then the shell script in the cron job can call pg_dump with the relevant
>options

If you want to "incrementalize" it, you could always keep a base, and diff
the new dump against it, and store just the diff.

Be sure to run the output through bzip2 (or gzip -9) to save space.
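
A rough sketch of that approach (the paths and names here are made up for
illustration, and it assumes plain-text dumps so diff has something
line-oriented to work with):

#!/bin/sh
# diff_backup.sh -- keep a plain-text base dump, store only the daily diff
BASE=/var/backups/pgsql/base.sql
TODAY=/var/backups/pgsql/today.sql
DB=mydb

pg_dump -U postgres "$DB" > "$TODAY"

if [ ! -f "$BASE" ]; then
    # first run: today's dump becomes the base
    cp "$TODAY" "$BASE"
else
    # store only the difference against the base, compressed
    diff -u "$BASE" "$TODAY" | bzip2 -9 > "/var/backups/pgsql/diff-`date +%Y%m%d`.bz2"
fi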

Cheers,

Doug




Re: database backup

From: Lamar Owen
On Saturday 06 July 2002 10:59 pm, Doug Fields wrote:
> If you want to "incrementalize" it, you could always keep a base, and diff
> the new dump against it, and store just the diff.

> Be sure to run the output through bzip2 (or gzip -9) to save space.

This doesn't work as well in practice as it would seem.  Due to funkiness, the
output of pg_dump isn't (or wasn't the last time I tried diffing dumps)
necessarily always in the same order.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



Re: database backup

From: Martijn van Oosterhout
On Sat, Jul 06, 2002 at 11:03:12PM -0400, Lamar Owen wrote:
> On Saturday 06 July 2002 10:59 pm, Doug Fields wrote:
> > If you want to "incrementalize" it, you could always keep a base, and diff
> > the new dump against it, and store just the diff.
>
> > Be sure to run the output through bzip2 (or gzip -9) to save space.
>
> This doesn't work as well in practice as it would seem.  Due to funkiness, the
> output of pg_dump isn't (or wasn't the last time I tried diffing dumps)
> necessarily always in the same order.

Not just that: diff wants to be able to read the whole file into memory. The
last time I tried to diff two 500MB files on a 256MB machine it was not pretty.

--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.



Re: database backup

From: Josh Jore
Hey - you *could* consider logging your commands elsewhere and using *that*
for incrementals. It wouldn't have all the nice features of transactions,
so it might take some manual effort to recover, but you may not need
anything beyond a naive implementation.

So alter your application to save all SQL that can modify the database and
use that to augment your normal scheduled backups.
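
Recovery under that scheme might look something like this (the file names
are hypothetical; the application would have to append every data-modifying
statement it issues to statements.sql):

# restore the last full dump, then replay the logged SQL on top of it
createdb mydb
psql -d mydb -f /var/backups/pgsql/base.sql
psql -d mydb -f /var/backups/pgsql/statements.sql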

Joshua b. Jore ; http://www.greentechnologist.org

On Sun, 7 Jul 2002, Martijn van Oosterhout wrote:

> On Sat, Jul 06, 2002 at 11:03:12PM -0400, Lamar Owen wrote:
> > On Saturday 06 July 2002 10:59 pm, Doug Fields wrote:
> > > If you want to "incrementalize" it, you could always keep a base, and diff
> > > the new dump against it, and store just the diff.
> >
> > > Be sure to run the output through bzip2 (or gzip -9) to save space.
> >
> > This doesn't work as well in practice as it would seem.  Due to funkiness, the
> > output of pg_dump isn't (or wasn't the last time I tried diffing dumps)
> > necessarily always in the same order.
>
> Not just that, diff wants to be able to read the whole file in. Last I tried
> to diff two 500MB files on a 256MB machine it was not pretty.
>
> --
> Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> > There are 10 kinds of people in the world, those that can do binary
> > arithmetic and those that can't.
>
>
>




Re: database backup

From: tony
I have a small database: the dump file is about 1.6 MB.

I want to do an offsite backup every weekday. I thought of rsync but the
client runs only Mac desktops.

I would like to bzip the dump file and send it by mail using cron. Is
that easy to do?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html




Re: database backup

From: Josh Jore
Oh heck yeah. So here's a script for you (I wrote it mostly in the mail
client so you have to do your own testing). Now schedule that in cron via
a line like:

@daily /....command



#!/bin/sh
rm -rf /tmp/pg_data.*
TMPDIR=`mktemp -d /tmp/pg_data.XXX`
SENDTO=someone@somewhere

# Create the postgres subdirectory inside the temp directory
mkdir $TMPDIR/postgres

# dump each database schema/data separately
su -l postgresql -c "psql -At -F ' ' -U postgresql -d template1 <<__END__
SELECT datname FROM pg_database WHERE datallowconn;
__END__
" | while read DB; do
    echo "PostgreSQL db $DB"
    mkdir -p $TMPDIR/postgres/$DB

    # schema
    su -l postgresql -c "pg_dump -Cs -F c -Z 9 -S postgresql $DB" \
         > $TMPDIR/postgres/$DB/schema.pg

    # data
    su -l postgresql -c "pg_dump -bd -F c -Z 9 -S postgresql $DB" \
         > $TMPDIR/postgres/$DB/data.pg
done
# dump all globals (users/groups)
su -l postgresql -c "pg_dumpall -g" \
 > $TMPDIR/postgres/globals.sql

# Create an archive, bzip2 it and mail it
tar cf - $TMPDIR/postgres | \
bzip2 -9 | \
perl -Mstrict -Mwarnings -MMIME::Base64 -e \
    'my $buf; binmode STDIN; while (read(STDIN, $buf, 60*57)) { print encode_base64($buf) }' | \
mail -s "Pg backup" $SENDTO

Joshua b. Jore ; http://www.greentechnologist.org

On 7 Jul 2002, tony wrote:

> I have a small database: dump file of about 1.6 Mb
>
> I want to do an offsite backup every weekday. I thought of rsync but the
> client runs only Mac desktops.
>
> I would like to bzip the dump file and send it by mail using cron. Is
> that easy to do?
>
> Cheers
>
> Tony Grant
>
> --
> RedHat Linux on Sony Vaio C1XD/S
> http://www.animaproductions.com/linux2.html
> Macromedia UltraDev with PostgreSQL
> http://www.animaproductions.com/ultra.html
>
>
>
>




Re: database backup

From: Curt Sampson
On Sat, 6 Jul 2002, Doug Fields wrote:

> >Then the shell script in the cron job can call pg_dump with the relevant
> >options
>
> If you want to "incrementalize" it, you could always keep a base, and diff
> the new dump against it, and store just the diff.

Actually, I was wondering just yesterday, is there any reason that
pg_dump should not be able to do an incremental dump? You could just
give it a transaction ID and say, "dump all changes from this ID."

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC




Date Data Type

From: "Jillian Carroll"
I have a database on which I am trying to run the following query:

select newsitem, category, link, datebox
from news_tbl
order by entered desc

It is very straightforward.  Newsitem, Category and Link are all text data
types and datebox is date data type.

The query only works when datebox is not included.

I am running Postgres 7.2 on Red Hat 7.2.

Is there a problem with the date data type?

--
Jillian




Re: database backup

From: Bruce Momjian
Curt Sampson wrote:
> On Sat, 6 Jul 2002, Doug Fields wrote:
>
> > >Then the shell script in the cron job can call pg_dump with the relevant
> > >options
> >
> > If you want to "incrementalize" it, you could always keep a base, and diff
> > the new dump against it, and store just the diff.
>
> Actually, I was wondering just yesterday, is there any reason that
> pg_dump should not be able to do an incremental dump? You could just
> give it a transaction ID and say, "dump all changes from this ID."

I think the problem is matching up rows in the old dump with the
incremental rows.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026



Re: database backup

From: tony
On Sun, 2002-07-07 at 12:39, Josh Jore wrote:
> Oh heck yeah. So here's a script for you (I wrote it mostly in the mail
> client so you have to do your own testing). Now schedule that in cron via
> a line like:
Thanks Josh.

Taking inspiration from that, I split the task into three cron jobs and am
using mutt rather than perl.

#!/bin/sh

SENDTO=tony@animaproductions.com
SUBJECT="Base de données - sauvegarde"
ATTFILE=f3c.out.bz2
MESSAGE="daily backup of your database"

# bzip it and mail it
cd /my/directory/path
bzip2 -9 f3c.out
echo "$MESSAGE" | mutt -a "$ATTFILE" -s "$SUBJECT" "$SENDTO"

I dump the database to f3c.out every day just after the end of the
working day, mail it later, and then delete the dump the next day before
dumping again. This way I can also fetch the dump file myself by scp if I
need to (though I really don't need a copy myself).
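
For example, the three crontab entries might look something like this (the
times, paths and database name are assumptions for illustration, not taken
from the message above):

# remove yesterday's dump and its compressed copy before dumping again
0 18 * * 1-5  rm -f /my/directory/path/f3c.out /my/directory/path/f3c.out.bz2
# dump just after the end of the working day
30 18 * * 1-5 pg_dump -U postgres f3c > /my/directory/path/f3c.out
# compress and mail it later in the evening
0 21 * * 1-5  /my/directory/path/mailbackup.sh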

The versioning of the database backup and getting it on a CD from time
to time is left to the client.

http://www3.primushost.com/~kylet/unix-att.html

Has all sorts of scripts for attaching stuff to e-mail

Thanks again

Cheers

Tony
--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html




Re: Date Data Type

From: Al Arduengo
Does the output not even include the column header? If it does, it
would seem your datebox column has nothing but NULLs in it. Can you
ever get it to provide values from the datebox column?  If it does
*not* show, it would probably be necessary to have a copy of at least
part of the table in question.
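
A quick way to check for that (using the table and column names from the
original question):

SELECT count(*) AS total_rows, count(datebox) AS non_null_dates
FROM news_tbl;

count(datebox) only counts rows where datebox is not NULL, so if the second
number is zero the column is entirely NULL.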

Rgds,
-Al Arduengo

jillian@koskie.com ("Jillian Carroll") writes:

> I have a database upon which I am trying to do the following query:
>
> select newsitem, category, link, datebox
> from news_tbl
> order by entered desc
>
> It is very straightforward.  Newsitem, Category and Link are all text data
> types and datebox is date data type.
>
> The query only works when datebox is not included.
>
> I am running Postgres 7.2 on Red Hat 7.2.
>
> Is there a problem with the date data type?
>
> --
> Jillian
>
>
>
>

--
"Place your hand on a hot stove for a  minute and it seems like
an hour. Sit with a pretty girl for an hour and it seems like a
minute. That's relativity."  -Albert Einstein

Re: Date Data Type

From: Thomas Lockhart
> Is there a problem with the date data type?

No. What do you mean by "only works when datebox is not included"?

                - Thomas