On Mon, 2003-09-15 at 14:40, Lamar Owen wrote:
> Joshua D. Drake wrote:
> > It is a lot, but it is not a lot for something like an insurance
> > company or a bank. Also, 100 TB is probably non-compressed, although
> > 30 TB is still large.
>
> Our requirements are such that this figure is our best guess after
> compression. The amount of data prior to compression is much larger,
> and consists of highly compressible astronomical observations in FITS
> format.
Just MHO, but I'd think about keeping the images outside of the
database (or in a separate database), since pg_dump is single-
threaded, so one CPU will be hammered trying to compress the
FITS files while the other CPU(s) sit idle.
Of course, you could compress the images on the front end, saving
disk space, and do uncompressed pg_dumps. The pg_dump would then be
I/O bound. But I'm sure you've thought of that already...
The images would have to be decompressed at view time, but that
could happen on the client, saving bandwidth and spreading the
CPU load around.
http://h18006.www1.hp.com/products/storageworks/esl9000/index.html
This box is pretty spiffy: "up to 119 TB of native capacity",
"Multi-unit scalability supporting up to 64 drives and 2278
cartridges".
Too bad it doesn't mention Linux.
--
-----------------------------------------------------------------
Ron Johnson, Jr. ron.l.johnson@cox.net
Jefferson, LA USA
Great Inventors of our time:
Al Gore -> Internet
Sun Microsystems -> Clusters