Thread: backup/restore - another area.

backup/restore - another area.

From
Jeff
Date:
Boy, I must be getting annoying by now, huh?

Anyway, after the joys of Solaris being fast, I'm moving on to another area
- backup & restore.  I've been checking the archives and haven't seen any
"good" tips for backing up big databases (and, more importantly, for
restoring them).

I've noticed while doing a backup (with both -Fc and the regular recipe)
that my IO is nowhere near being stressed.  According to vmstat, it sits
around reading about 512kB/sec (with occasional spikes), and every 5-6
seconds it writes out a 3MB hunk.

So as a test I decided to cp a 1GB file and got a constant read speed of
20MB/sec, while the writes, well, were more sporadic (buffering, most
likely) - it would write out 60MB every 3 seconds.
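(Roughly like so, with vmstat running in another terminal - the file names
are made up:)

# crude sequential-IO check; watch `vmstat 1` in another window meanwhile
time cp /tmp/1gb.testfile /tmp/1gb.copy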

And then, on the restore, I noticed similar things - IO hardly being
stressed at all... reading in at ~512kB/sec and every now and then writing
out a few MB.


So, I've been thinking of various backup/restore strategies... some I'm
sure some people already do, some need code written and may be
controversial...

Idea #1:
Use an LVM and take a snapshot - archive that.
From the way I see it, the downside is that the LVM will use a lot of space
until the snapshot is removed.  Also, PG may be in a slightly inconsistent
state - but this should "appear" to PG the same as if the power went out.

For restore, simply unarchive this snapshot and point postgres at it. Let
it recover and you are good to go.

Little overhead from what I see...
I'm leaning towards this method the more I think of it.
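In shell terms it would look something like this (just a sketch - the
volume names, sizes, and paths are all made up for illustration):

# backup: snapshot the LV holding $PGDATA, archive the snapshot, drop it
lvcreate -s -L4G -n pg_snap /dev/myvg/pgdata
mount /dev/myvg/pg_snap /mnt/pg_snap
tar cf /backups/pgdata-snap.tar -C /mnt/pg_snap .
umount /mnt/pg_snap
lvremove -f /dev/myvg/pg_snap

# restore: unpack into a fresh data directory and start PG; it replays
# the WAL on startup just as it would after a power failure
tar xf /backups/pgdata-snap.tar -C /var/lib/pgsql/data
pg_ctl -D /var/lib/pgsql/data start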

Idea #2:

A new program/internal "system".  Let's call it pg_backup.  It would
generate a very fast backup (that restores very fast) at the expense of
disk space.  Pretty much what we would do is write out new copies of all
the pages in the db - both indexes and tables.

The pros of this are that it does not depend on an LVM and is therefore
accessible on all platforms.  It also has the other benefits mentioned
above, except speed.

For a restore, PG would need something like a 'restore mode' where we can
just have it pump pages into it somehow.  It would not have to rebuild
indexes, check constraints, and all that, because by definition the backup
would contain valid data.

The downside of both of these is that the backup is only good for that
version of PG on that architecture.  Speaking of the Informix world, this
is how it works there - it has a fast backup & fast restore that does
essentially #2, and then it has export/import options (which work like our
current pg_dump and restore).

And oh yeah - I've tried disabling fsync during the load, and while it did
go faster, it was only 2 minutes faster (9m vs. 11m).
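(For reference, the knob is in postgresql.conf; just remember to turn it
back on once the load is done:)

# postgresql.conf - skip fsync during the bulk load, re-enable afterwards
fsync = false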

Any thoughts on this?  What do you other folks with big DBs do?

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/



Re: backup/restore - another area.

From
Greg Stark
Date:

Jeff <threshar@torgo.978.org> writes:

> Idea #1:
> Use an LVM and take a snapshot - archive that.
> From the way I see it, the downside is that the LVM will use a lot of space
> until the snapshot is removed.  Also, PG may be in a slightly inconsistent
> state - but this should "appear" to PG the same as if the power went out.
>
> For restore, simply unarchive this snapshot and point postgres at it. Let
> it recover and you are good to go.
>
> Little overhead from what I see...
> I'm leaning towards this method the more I think of it.

I don't quite follow your #2 so I can only comment on the above idea of using
an LVM snapshot. If you have the hardware and the LVM-fu to be able to do this
properly I would recommend it.

We actually used to do this with Veritas, even on Oracle - which has full
online backup support - simply because it was much, much faster and the
snapshot could be backed up during peak times without any significant
performance impact.  That's partly because Veritas and Hitachi storage
systems kick butt, though.  Depending on the systems you're considering,
you may or may not have nearly the same success.

Note, you should *test* this backup. You're depending on some subtle semantics
with this. If you do it slightly wrong or the LVM does something slightly
wrong and you end up with an inconsistent snapshot or missing some critical
file the whole backup could be useless.
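
Even a crude check helps - restore the archive into a scratch directory
and see whether postgres actually comes up and recovers.  Something along
these lines (the paths and table name here are invented):

# hypothetical smoke test for a snapshot archive
mkdir -p /scratch/pgdata
tar xf /backups/pgdata-snap.tar -C /scratch/pgdata
rm -f /scratch/pgdata/postmaster.pid   # stale pid file from the live image
pg_ctl -D /scratch/pgdata start        # should replay WAL and come up
psql -c 'SELECT count(*) FROM some_big_table'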

Also, I wouldn't consider this a replacement for having a pg_dump export. In a
crisis when you want to restore everything *exactly* the way things were you
want the complete filesystem snapshot. But if you just want to load a table
the way it was the day before to compare, or if you want to load a test box to
do some performance testing, or whatever, you'll need the logical export.

--
greg

Re: backup/restore - another area.

From
Jeff
Date:
On 9 Oct 2003, Greg Stark wrote:

> I don't quite follow your #2 so I can only comment on the above idea of using
> an LVM snapshot. If you have the hardware and the LVM-fu to be able to do this
> properly I would recommend it.
>
Just to be a bit clearer, in case it was my wording:

Method #2 is nearly identical to method #1, except that no logical volume
manager is needed.  We cannot just cp $PGDATA because it is (or could be)
changing, and we need to take the data from a consistent point.  So what we
do is write code that understands xids and all that and simply "dumps" out
the pages of data in a raw form that can be quickly reloaded.  The key is
that the data be in a somewhat consistent state.  Method #2 requires a ton
more work, but it would be able to run on platforms without an LVM (or
without requiring the use of one).  Does that make more sense?

The idea here is to back up & restore as fast as possible, throwing away
some things like inter-version compatibility and whatnot.  Being able to
add "fast backup / restore" to the list of enterprise features is a good
thing.

> Also, I wouldn't consider this a replacement for having a pg_dump export. In a
> crisis when you want to restore everything *exactly* the way things were you
> want the complete filesystem snapshot. But if you just want to load a table
> the way it was the day before to compare, or if you want to load a test box to
> do some performance testing, or whatever, you'll need the logical export.
>

Yeah, a pg_dump now and then would be useful (and safe).
If you wanted to get fancy-schmancy, you could take the snapshot, archive
it, transfer it, and unarchive it on machine B.  (We actually used to do
that here, until machine B no longer had the capacity to hold all our data
:)
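
Roughly like so (the host name and remote path are invented):

# stream the archived snapshot straight to machine B over ssh
tar cf - /pg_backup | gzip -1 | ssh machineB 'cat > /backup/pgdata.tar.gz'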

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/



Re: backup/restore - another area.

From
markw@osdl.org
Date:
Jeff,

I'm curious what kind of testing you've done with LVM.  I'm not
currently trying any backup/restore stuff, but I'm running our DBT-2
workload using LVM.  I've started collecting vmstat, iostat, and
readprofile data, initially running disktest to gauge the performance.

For anyone curious, I have some data on a 14-disk volume here:
    http://developer.osdl.org/markw/lvm/results.4/log/

and a 52-disk volume here:
    http://developer.osdl.org/markw/lvm/results.5/data/

Mark

> Jeff <threshar@torgo.978.org> writes:
>
> Idea #1:
> Use an LVM and take a snapshot - archive that.
> From the way I see it, the downside is that the LVM will use a lot of space
> until the snapshot is removed.  Also, PG may be in a slightly inconsistent
> state - but this should "appear" to PG the same as if the power went out.
>
> For restore, simply unarchive this snapshot and point postgres at it. Let
> it recover and you are good to go.
>
> Little overhead from what I see...
> I'm leaning towards this method the more I think of it.

Re: backup/restore - another area.

From
Jeff
Date:
On Tue, 14 Oct 2003, markw@osdl.org wrote:

> I'm curious what kind of testing you've done with LVM.  I'm not
> currently trying any backup/restore stuff, but I'm running our DBT-2
> workload using LVM.  I've started collecting vmstat, iostat, and
> readprofile data, initially running disktest to gauge the performance.
>
[added -admin to this, since this is very relevant there]

I was going to post this data yesterday, but I had severe inet issues.

So, I tried this out with lvm2 on 2.4.21 on a 2xp2-450 with 2 disks.
(I just looked at your 14 and 52 disk data. drool.)

So I have a db which is about 3.2GB on disk.
All backups were done to an nfs mount, but I ran a network monitor to
check bandwidth usage.  I note below where things were IO bound.

backing up:

pg_dump: 18m             [cpu bound]
pg_dump | gzip -1: 18m   [cpu bound]

snapshot, then tar: 4m         [io bound]
snapshot, then tar | gzip: 21m [cpu bound]

The times for a compressed backup are a bit slower for snapshots, but this
is where the snapshot method wins tacos - restore.

restore:

psql:      158m
snapshot:    8m

Yes folks, 8m.
When I started PG back up it checked the WAL and got itself back online.
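
What the restore amounted to was roughly this (the paths and the mv are
schematic - PG does the real work by replaying the WAL at startup):

# unpack the snapshot archive, put the files back as $PGDATA, start PG
gzip -dc /squeegit/mb.backup | tar xf -
mv pg_backup/* $PGDATA/
pg_ctl -D $PGDATA start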

The benefits of the pg_dump backup afaict are that the data is in a format
readable to anything and is [mostly] cross-pg compatible. The downside is
it seems to be quite slow and restoring it can be long and tiresome.

The benefits of the snapshot are that backups are very, very quick and
restore is very, very quick (it won't need to re-enable foreign keys, no
need to rebuild indexes, no need to re-vacuum analyze).  The downside is
this method will only work on that specific version of PG, and it isn't the
"cleanest" thing in the world, since you are essentially simulating a power
failure to PG.  Luckily, the WAL works like a champ.  Also, these backups
can be much larger, since they have to include the indexes as well, but
this is a price you have to pay.

I did have some initial problems with snapshots & corruption but it turned
out to be user-error on my part.

COOL HUH?

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/



Re: backup/restore - another area.

From
Josh Berkus
Date:
Jeff,

> The downside is
> this method will only work on that specific version of PG, and it isn't the
> "cleanest" thing in the world, since you are essentially simulating a power
> failure to PG.  Luckily, the WAL works like a champ.  Also, these backups
> can be much larger, since they have to include the indexes as well, but
> this is a price you have to pay.

The other downside is, of course, that the database needs to be shut down.

> COOL HUH?

Certainly very useful in the DBA's arsenal of backup tools.

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: backup/restore - another area.

From
Jeff
Date:
On Thu, 16 Oct 2003 09:49:59 -0700
Josh Berkus <josh@agliodbs.com> wrote:

> Jeff,
>
> > The downside is
> > this method will only work on that specific version of PG, and it
> > isn't the "cleanest" thing in the world, since you are essentially
> > simulating a power failure to PG.  Luckily, the WAL works like a
> > champ.  Also, these backups can be much larger, since they have to
> > include the indexes as well, but this is a price you have to pay.
>
> The other downside is, of course, that the database needs to be shut
> down.
>

I left the DB up while doing this.

Even had a program sitting around committing data to try and corrupt
things. (Which is how I discovered I was doing the snapshot wrong)

You could do pg_ctl stop; snapshot; pg_ctl start for a "clean" image.
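
i.e. something like this, reusing the snapshot commands from my other
post:

# "clean" image: PG is down only for as long as the snapshot takes
pg_ctl stop
lvcreate -L4000M -s -n pg_backup /dev/postgres/pgdata
pg_ctl start
# ...then mount, tar, umount, and lvremove the snapshot as before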

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/

Re: backup/restore - another area.

From
Josh Berkus
Date:
Jeff,

> I left the DB up while doing this.
>
> Even had a program sitting around committing data to try and corrupt
> things. (Which is how I discovered I was doing the snapshot wrong)

Really?   I'm unclear on the method you're using to take the snapshot, then; I
seem to have missed a couple posts on this thread.   Want to refresh me?

--
Josh Berkus
Aglio Database Solutions
San Francisco

Re: backup/restore - another area.

From
Jeff
Date:
On Thu, 16 Oct 2003 10:09:27 -0700
Josh Berkus <josh@agliodbs.com> wrote:

> Jeff,
>
> > I left the DB up while doing this.
> >
> > Even had a program sitting around committing data to try and corrupt
> > things. (Which is how I discovered I was doing the snapshot wrong)
>
> Really?   I'm unclear on the method you're using to take the snapshot,
> then; I seem to have missed a couple posts on this thread.   Want to
> refresh me?
>

I have a 2-disk striped LVM volume at /dev/postgres/pgdata

# create a snapshot volume of the pgdata LV (4000MB reserved for changes)
lvcreate -L4000M -s -n pg_backup /dev/postgres/pgdata
# mount the snapshot and archive its contents
mount /dev/postgres/pg_backup /pg_backup
tar cf - /pg_backup | gzip -1 > /squeegit/mb.backup
# clean up: unmount and destroy the snapshot
umount /pg_backup
lvremove -f /dev/postgres/pg_backup

In a nutshell, an LVM snapshot is an atomic operation that takes, well, a
snapshot of the FS as it was at that instant.  It does not make a 2nd
copy of the data.  This way you can simply tar up the pgdata directory
and be happy, as the snapshot will not be changing due to db activity.

--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/