Thanks, but it's already on a RAID array with a battery-backed controller
and a journaled FS. The deal is that I don't really want to spend the money
on expanding that storage for data that isn't very critical at all. I want
to stick these blobs on a bunch of cheap ATA disks, basically, as comparing
the price of a terabyte of ATA mirrored disks with the same TB on SCSI
hardware RAID is enlightening.
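
FWIW, the kind of kludge I have in mind (see my question quoted below)
would look roughly like this. Rough sketch only - the paths and the
relfilenode are illustrative, and it assumes 7.2's on-disk layout:

# Move pg_largeobject's segment files onto the cheap ATA mirror and
# symlink them back. Stop the postmaster first! The relfilenode comes
# from: SELECT relfilenode FROM pg_class WHERE relname = 'pg_largeobject';
import os
import shutil

DB_DIR = "/var/lib/pgsql/data/base/16556"    # illustrative database dir
CHEAP_FS = "/mnt/ata-mirror/pg_largeobject"  # the cheap ATA mirror
RELFILENODE = "16639"                        # illustrative value

os.makedirs(CHEAP_FS, exist_ok=True)

# Tables over 1GB are split into segments named <node>, <node>.1,
# <node>.2, ... so every existing segment gets moved and symlinked.
# New segments will still appear in DB_DIR - that's the unsolved bit,
# hence the idea of pre-creating and symlinking them in advance.
for name in os.listdir(DB_DIR):
    if name == RELFILENODE or name.startswith(RELFILENODE + "."):
        old_path = os.path.join(DB_DIR, name)
        new_path = os.path.join(CHEAP_FS, name)
        shutil.move(old_path, new_path)
        os.symlink(new_path, old_path)   # leave a link in the db dir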
M
-----Original Message-----
From: Bradley Kieser [mailto:brad@kieser.net]
Sent: 06 May 2004 11:03
To: Matt Clark
Cc: pgsql-admin@postgresql.org; emberson@phc.net
Subject: Re: [ADMIN] Postgres & large objects
Matt,
Not really the answer that you are looking for, and you may already do
this, but if it's a disk space or performance issue then I would suggest
moving the PGDATA dir (or the location, if you are using locations) onto
a RAID5 disk array - that means you can ramp up the space, and you get the
striped-read performance of RAID5, not to mention the safety of an array
that survives a disk failure!
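
If you do move it, relocating the whole data dir and leaving a symlink
behind keeps the init scripts happy. A rough sketch, assuming the Red
Hat RPM's default PGDATA path (adjust to taste), to be run as the
postgres user with the postmaster stopped:

import os
import shutil

PGDATA = "/var/lib/pgsql/data"   # assumed RPM default
RAID5 = "/raid5/pgsql/data"      # illustrative target on the array

# shutil.move copies across filesystems and then removes the source,
# so this works even though the array is a different mount point.
shutil.move(PGDATA, RAID5)
os.symlink(RAID5, PGDATA)        # old path now points at the array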
Brad
Matt Clark wrote:
> Hello all,
>
> It seems I'm trying to solve the same problem as Richard Emberson had
> a while ago (thread here:
> http://archives.postgresql.org/pgsql-general/2002-03/msg01199.php).
>
> Essentially I am storing a large number of large objects in the DB
> (potentially tens or hundreds of gigs), and would like the
> pg_largeobject table to be stored on a separate FS. But of course
> it's not just one file to symlink and then forget about: the table is
> split into 1GB segment files, and new segments get created as it grows.
>
> So, has anyone come up with a way to get the files for a table created
> in a particular place? I know that tablespaces aren't done yet, but
> a kludge will do (or a patch, come to that - we're running Red Hat's
> 7.2.3 RPMs, but could switch if necessary). I had thought that if the
> filenames were predictable it might be possible to pre-create a bunch
> of zero-length files and symlink them in advance...
>
> Cheers
>
> Matt