Re: need for in-place upgrades (was Re: State of Beta 2) - Mailing list pgsql-general

From: Christopher Browne
Subject: Re: need for in-place upgrades (was Re: State of Beta 2)
Msg-id: m38yorxjq5.fsf@wolfe.cbbrowne.com
In response to: Re: need for in-place upgrades (was Re: State of Beta 2)  ("Marc G. Fournier" <scrappy@postgresql.org>)
Responses: Re: need for in-place upgrades (was Re: State of Beta 2)
List: pgsql-general

After a long battle with technology, martin@bugs.unl.edu.ar (Martin Marques), an earthling, wrote:
> On Sun, 14 Sep 2003 12:20, Lincoln Yeoh wrote:
>> >At 07:16 PM 9/13/2003 -0400, Lamar Owen wrote:
>> >'migration' server.  And I really don't want to think about dump/restore
>> >of 100TB (if PostgreSQL actually stores the image files, which it might).
>>
>> Hmm. Just curious: do people generally back up 100TB of data, or once
>> they reach this point do they have to hope that it's just hardware
>> failures they'll be dealing with, and not software/other issues?
>
> Normally you would have a RAID with mirroring and CRC, so that if one
> of the disks in the array fails, the system keeps working. You can even
> have hot-pluggable disks, so you can change the disk that is broken
> without rebooting.
>
> You can also have a hot backup using eRServ (Replicate your DB server on a
> backup server, just in case).

In a High Availability situation, there is little choice but to create
some form of "hot backup."  And if you can't afford that, then the
reality is that you can't afford to pretend to have "High Availability."

>> 100TB sounds like a lot of backup media and time... Not to mention
>> ensuring that the backups will work with available and functioning
>> backup hardware.
>
> I don't know, but there may be backup systems for that amount of
> space. We have just got some 200 GB tape devices, and they are about
> 2 years old. With a 5-tape robot, you have 1 TB of backup.

Certainly there are backup systems designed to cope with those sorts
of quantities of data.  With 8 tape drives, and a rack system that
holds 200 cartridges, you not only can store a HUGE pile of data, but
you can push it onto tape about as quickly as you can generate it.
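To put rough numbers on that (the cartridge capacity and per-drive
streaming rate below are my assumptions about that class of hardware,
not anyone's published specs), the arithmetic works out about like
this:

    # Back-of-the-envelope arithmetic; cartridge capacity and per-drive
    # streaming rate are assumed values, not vendor figures.
    cartridge_gb = 200          # assumed native capacity per cartridge
    slots = 200                 # cartridges held by the rack
    drives = 8                  # drives streaming in parallel
    drive_mb_per_s = 30         # assumed rate per drive, with compression

    library_tb = slots * cartridge_gb / 1000.0            # ~40 TB on the shelf
    aggregate_mb_per_s = drives * drive_mb_per_s          # ~240 MB/s to tape
    hours_per_tb = (1000000.0 / aggregate_mb_per_s) / 3600.0   # ~1.2 hours/TB

    print("library capacity: ~%.0f TB" % library_tb)
    print("aggregate rate:   ~%d MB/s" % aggregate_mb_per_s)
    print("time per TB:      ~%.1f hours" % hours_per_tb)

Which is in the same ballpark as the vendors' "terabytes per hour"
pitch, give or take the drive generation and the compression ratio.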

<http://spectralogic.com> discusses how to use their hardware and
software products to do terabytes of backups in an hour.  They sell a
software product called "Alexandria" that knows how to (at least
somewhat) intelligently back up SAP R/3, Oracle, Informix, and Sybase
systems.  (When I was at American Airlines, that was the software in
use.)

Generally, this involves having a bunch of tape drives that are
simultaneously streaming different parts of the backup.

When it's Oracle that's in use, a common strategy involves
periodically doing a "hot" backup (so you can quickly get back to a
known database state), and then having a robot tape drive assigned to
regularly push archive logs to tape as they are produced.
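In pseudo-schedule form, the archive-log half of that cadence looks
something like the sketch below.  This is generic, not anything
Oracle-specific; the paths, interval, and the send_to_tape() helper
are all made-up placeholders.

    # Rough sketch: a loop that sweeps freshly produced archive logs
    # off to the tape robot as they appear.  The periodic "hot" base
    # backup would run separately, on its own schedule.
    import glob, os, time

    ARCHIVE_LOG_DIR = "/u01/arch"    # assumed spot where archive logs land
    SWEEP_INTERVAL = 300             # look for new logs every 5 minutes

    def send_to_tape(path):
        # Hand the file to the tape library's software; stubbed out here.
        print("spooling to tape:", path)

    def sweep_archive_logs(already_sent):
        for log in sorted(glob.glob(os.path.join(ARCHIVE_LOG_DIR, "*.arc"))):
            if log not in already_sent:
                send_to_tape(log)
                already_sent.add(log)

    if __name__ == "__main__":
        sent = set()
        while True:                      # the "regularly push archive logs
            sweep_archive_logs(sent)     #  to tape" part of the strategy
            time.sleep(SWEEP_INTERVAL)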

That would more or less resemble taking a "consistent filesystem
backup" of a PG database, and then saving the sequence of WAL files.
(The disanalogies are considerable; that should improve at least a
_little_ once PITR comes along for PostgreSQL...)
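Once PITR does show up, the WAL-saving half of that could be as simple
as a little per-segment archiver hook that the server invokes for each
completed segment.  The destination path and the two command-line
arguments below are assumptions about how such a hook might be wired
up, not anything you can do with 7.x today:

    # Hypothetical per-segment WAL archiver: copy one completed WAL
    # file into an archive area (which the tape software could then
    # sweep onto cartridges).  Only report success once the copy is
    # really on disk, since the server would be free to recycle the
    # segment afterwards.
    import os, shutil, sys

    ARCHIVE_DIR = "/backup/wal_archive"     # assumed destination

    def archive_segment(src_path, filename):
        dest = os.path.join(ARCHIVE_DIR, filename)
        if os.path.exists(dest):
            return 1                        # never overwrite an archived segment
        tmp = dest + ".tmp"
        shutil.copy2(src_path, tmp)         # copy to a temporary name first...
        os.rename(tmp, dest)                # ...then rename into place
        return 0

    if __name__ == "__main__":
        # usage: archive_wal.py <path-to-segment> <segment-name>
        sys.exit(archive_segment(sys.argv[1], sys.argv[2]))

The "consistent filesystem backup" half is the periodic base copy of
$PGDATA; replaying the saved segments on top of that copy is what PITR
is supposed to buy us.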

None of this is particularly cheap or easy; need I remind gentle
readers that if you can't afford that, then you essentially can't
afford to claim "High Availability"?
--
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://www.ntlug.org/~cbbrowne/nonrdbms.html
Who's afraid of ARPA?
