Re: Linux ready for high-volume databases? - Mailing list pgsql-general

From Greg Stark
Subject Re: Linux ready for high-volume databases?
Msg-id 87smnnn2em.fsf@stark.dyndns.tv
In response to Re: Linux ready for high-volume databases?  (Dennis Gearon <gearond@fireserve.net>)
Responses Re: Linux ready for high-volume databases?  (Ron Johnson <ron.l.johnson@cox.net>)
Re: Linux ready for high-volume databases?  (Andrew Sullivan <andrew@libertyrms.info>)
List pgsql-general
Dennis Gearon <gearond@fireserve.net> writes:

> With the low cost of disks, it might be a good idea to just copy to disks, that
> one can put back in.

Uh, sure, using hardware RAID 1 and breaking one set of drives out of the
mirror to perform the backup is an old trick. And for small databases, backups
are easy: just store a few dozen copies of the pg_dump output on your live
disks for local backups and burn CD-Rs for offsite backups.
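For that case something as simple as the following is all I mean (the database
name, paths, and CD device here are just placeholders):

    # dump the database to a dated file kept on the live disks
    pg_dump -Fc mydb > /var/backups/pgsql/mydb-`date +%Y%m%d`.dump

    # master an ISO image from the dump directory and burn it for offsite storage
    mkisofs -R -o /tmp/pgbackup.iso /var/backups/pgsql
    cdrecord -v dev=0,0,0 /tmp/pgbackup.iso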

But when you have hundreds of gigabytes of data and you want to be able to
keep multiple snapshots of your database both on-site and off-site... No, you
can't just buy another hard drive and call it a business continuity plan.

As it turns out, my current project will be quite small, so I may well be adopting
the first approach. I'm thinking of taking a pg_dump regularly (nightly, if I can
get away with doing it that infrequently), keeping the past n dumps, and
burning a CD with those dumps.
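Roughly, as a nightly cron job, something along these lines (the directory,
database name, and two-week retention are just placeholder choices):

    #!/bin/sh
    # nightly dump, keeping the past n nights' worth on disk
    BACKUPDIR=/var/backups/pgsql
    DB=mydb

    # dump in custom format so pg_restore can pick out individual tables later
    pg_dump -Fc -f $BACKUPDIR/$DB-`date +%Y%m%d`.dump $DB

    # expire dumps older than two weeks so only the last n stick around
    find $BACKUPDIR -name "$DB-*.dump" -mtime +14 -exec rm {} \;

Burning the contents of that directory to CD would then be the same
mkisofs/cdrecord step as above.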

This doesn't provide what online backups do, namely recovery up to the minute of
the crash. And I get nervous having only the logical pg_dump output and no backups
of the actual blocks on disk. But is that what everybody does?

--
greg
