Re: Linux ready for high-volume databases? - Mailing list pgsql-general

From Vivek Khera
Subject Re: Linux ready for high-volume databases?
Date
Msg-id x7n0dvoygi.fsf@yertle.int.kciLink.com
Whole thread Raw
In response to  ("Gregory S. Williamson" <gsw@globexplorer.com>)
List pgsql-general
>>>>> "GS" == Greg Stark <gsstark@mit.edu> writes:

GS> the first approach. I'm thinking taking a pg_dump regularly
GS> (nightly if I can get away with doing it that infrequently)
GS> keeping the past n dumps, and burning a CD with those dumps.

Basically what I do.  I burn a set of CDs from one of my dumps once a
week, and keep the rest online for a few days.   I'm really getting
close to splurging for a DVD writer since my dumps are way too big for
a single CD.
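
The keep-the-past-n-dumps rotation mentioned above can be sketched as a small nightly script. This is only a sketch, not anyone's actual setup: the database name, backup directory, and retention count are all made up, and for illustration the pg_dump call is shown as a comment while placeholder files stand in for real dumps.

```shell
#!/bin/sh
set -eu
DIR=$(mktemp -d)   # stand-in for the real backup directory
DB=mydb            # hypothetical database name
KEEP=3             # how many past dumps to keep online

# The real nightly job would be something like:
#   pg_dump "$DB" | gzip > "$DIR/$DB-$(date +%Y%m%d).sql.gz"
# For illustration, fake five nightly dumps instead:
for day in 01 02 03 04 05; do
    : > "$DIR/$DB-202401$day.sql.gz"
done

# Keep only the newest $KEEP dumps.  Date-stamped names sort
# chronologically, so reverse lexicographic order is newest-first.
ls -1r "$DIR/$DB"-*.sql.gz | tail -n +$((KEEP + 1)) | xargs rm -f
```

Run from cron each night, this leaves the most recent $KEEP dumps on disk; burning one of them to CD once a week then matches the scheme described above.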

GS> This doesn't provide what online backups do, of recovery to the
GS> minute of the crash. And I get nervous having only logical pg_dump
GS> output, no backups of the actual blocks on disk. But is that what
GS> everybody does?

Well, if you want backups of the actual blocks on disk, then you need
to shut down the postmaster first so that you get a consistent copy.
You can't usefully copy the table files while the postmaster is
running.
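
A cold (file-level) backup along those lines looks roughly like the sketch below. The pg_ctl stop/start lines are the real steps but are commented out here, since no live cluster is assumed; the tar step is demonstrated on a stand-in data directory with a placeholder file.

```shell
#!/bin/sh
set -eu
PGDATA=$(mktemp -d)         # stand-in for the real data directory
: > "$PGDATA/pg_control"    # placeholder for a real cluster file

# pg_ctl -D "$PGDATA" stop -m smart   # real step: shut down cleanly first
ARCHIVE=/tmp/pgdata-backup.tar.gz
tar -C "$(dirname "$PGDATA")" -czf "$ARCHIVE" "$(basename "$PGDATA")"
# pg_ctl -D "$PGDATA" start           # real step: bring the postmaster back up
```

The point is the ordering: stop, copy, start. Skip the shutdown and the copied files may not represent any consistent state of the database.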

So, yes, a pg_dump is pretty much your safest bet for a consistent
backup.  Using a replicated slave with, e.g., eRServer is another
option, but that requires more hardware.



--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
