Not that my DB is that big, but if it were, and it contained any sort of
financial data (something you might need to dispute two years down the
road), then I would have multiple replicated systems (which I do have,
though they are MSSQL), and I would also back the data up to offsite
storage, either via tape or another box with enough disk. Your best bet
is geographical redundancy.
Travis
-----Original Message-----
From: Lincoln Yeoh [mailto:lyeoh@pop.jaring.my]
Sent: Sunday, September 14, 2003 10:20 AM
To: Lamar Owen
Cc: PgSQL General ML
Subject: Re: need for in-place upgrades (was Re: [GENERAL] State of
>At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>'migration' server. And I really don't want to think about dump/restore
>of 100TB (if PostgreSQL actually stores the image files, which it might).
Hmm. Just curious: do people generally back up 100TB of data, or once
they reach this point do most have to hope that it's just hardware
failures they'll deal with, and not software or other issues?
100TB sounds like a lot of backup media and time... not to mention
ensuring that the backups will work with available and functioning
backup hardware.
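Back of the envelope, assuming a sustained ~30 MB/s per drive (roughly
what current tape hardware manages): 100 TB is about 10^14 bytes, so a
full pass takes around 3.3 million seconds, close to 40 days on a single
drive, or some twenty drives streaming in parallel just to finish inside
a weekend.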
Head hurts just to think about it,
Link.