Not really answering the question, but I thought I would post this
anyway as it may be of interest.
If you want to have some fun (depending on how production-level the
system needs to be) you can build this level of storage using Linux
clusters and cheap IDE drives. No April Fool's joke! I have built
servers in TB blocks using cheap IDE drives in RAID 5 configs. You just
whack in one of those 4-way or 8-way IDE controller cards and the new
high-capacity drives (300GB most likely at the moment, although 250GB
are massively cheaper). That works out to 1TB-2TB per controller card,
times six PCI slots, plus the two IDE channels on the motherboard. So
you are talking roughly 12TB per server. Rework a 2U chassis and it's
rack-'em-up time!
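
Just to make the cheap-software-RAID part concrete, here is a minimal
sketch using Linux software RAID (mdadm). The device names (/dev/hde
through /dev/hdh, /dev/md0, /data0) are only placeholders for whatever
your controller actually presents:

    # Build a RAID 5 array from four IDE drives on one controller
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/hde /dev/hdf /dev/hdg /dev/hdh

    # Put a filesystem on the array and mount it
    mkfs -t ext3 /dev/md0
    mkdir -p /data0
    mount /dev/md0 /data0

Hardware RAID on the controller card is an option too; this is just the
cheap-and-cheerful software route.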
There are extender cards, of course, that will let you put in more
drives, and with SCSI the game changes completely because you can just
chain the drives together on a single bus.
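
If you do end up with several arrays in one box, one way to present
them as a single lump of space is LVM. Again, only a sketch, and the
array and volume names here (/dev/md0, /dev/md1, bigvg, bigvol) are
made up:

    # Pool two RAID arrays into one big logical volume
    pvcreate /dev/md0 /dev/md1
    vgcreate bigvg /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n bigvol bigvg

    # Filesystem on top, same as before
    mkfs -t ext3 /dev/bigvg/bigvol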
Okay, there are seriously better options than this, of course, and you
have probably used one of them, but this is still fun!
I think as far as PG storage goes you're really on a losing streak
here, because PG clustering isn't going to support this across multiple
servers. We're not even close to the mark as far as clustered servers
and replication management go, let alone the per-table storage limits.
So sadly, PG would have to bow out of this IMHO, unless someone else
nukes me on this!
Brad
Tony Reina wrote:
>I have a database that will hold massive amounts of scientific data.
>Potentially, some estimates are that we could get into needing
>Petabytes (1,000 Terabytes) of storage.
>
>1. Do off-the-shelf servers exist that will do Petabyte storage?
>
>2. Is it possible for PostgreSQL to segment a database between
>multiple servers? (I was looking at a commercial vendor who had a
>product that took rarely used data in Oracle databases and migrated
>them to another server to keep frequently accessed data more readily
>available.)
>
>Thanks.
>-Tony
>