High Reliability without High Availability?

From: Al Cohen
We've been using PostgreSQL for some time, and it's been very, very
reliable.  However, we're starting to think about preparing for
something bad happening - dead drives, fires, locusts, and whatnot.

In our particular situation, being down for two hours or so is OK.
What's really bad is losing data.

The PostgreSQL replication solutions that we're seeing are very clever,
but seem to require significant effort to set up and keep going.  Since
we don't care whether a slave DB is ready to take over at a moment's notice,
I'm wondering if there is some way to generate data, in real time, that
would allow an offline rebuild in the event of catastrophe.  We could
copy this data across the 'net as it's available, so we'd be OK even
if the place burned down.

Is there a log file that does or could do this?  Or some internal system
table that we could use to generate something?
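
PostgreSQL's write-ahead log is such a log: releases with point-in-time
recovery (8.0 and later) can archive it continuously for exactly this
kind of offline rebuild.  A minimal sketch of the postgresql.conf
settings, with an illustrative off-site path (archive_mode itself
appeared in 8.3):

    # postgresql.conf -- continuous WAL archiving (sketch)
    archive_mode = on                              # 8.3 and later
    archive_command = 'cp %p /mnt/offsite/wal/%f'  # %p = WAL file path,
                                                   # %f = file name only

Rebuilding then means replaying the archived segments on top of a
periodic base backup of the data directory.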

Thanks!

Al Cohen

Re: High Reliability without High Availability?

From: merlyn@stonehenge.com (Randal L. Schwartz)
>>>>> "Al" == Al Cohen <amc79@no.junk.please.cornell.edu> writes:

Al> Is there a log file that does or could do this?  Or some internal
Al> system table that we could use to generate something?

I may be just mis-remembering, but wasn't there an embedded-Perl
solution that hooked in as triggers on all your changing tables to
either write a log, or use DBI to actually update the second database?
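
A minimal sketch of that trigger-plus-log idea in PL/pgSQL (the
embedded-Perl variant would do the same from PL/Perl, with DBI pushing
rows at the second database); the change_log table and log_change
function names are illustrative, and the syntax assumes a reasonably
modern PostgreSQL:

    -- Append-only change log; ship its contents off-site as they appear.
    CREATE TABLE change_log (
        logged_at   timestamptz DEFAULT now(),
        table_name  text,
        operation   text,   -- INSERT, UPDATE, or DELETE
        row_data    text
    );

    CREATE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO change_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP,
                CASE WHEN TG_OP = 'DELETE' THEN OLD::text
                     ELSE NEW::text END);
        RETURN NULL;  -- AFTER-trigger return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    -- One trigger per table whose changes you want captured:
    CREATE TRIGGER orders_log
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH ROW EXECUTE PROCEDURE log_change();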

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<merlyn@stonehenge.com> <URL:http://www.stonehenge.com/merlyn/>
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!