vacuuming very large table problem - Mailing list pgsql-admin

From: if
Subject: vacuuming very large table problem
Date:
Msg-id: f37dba8b0802150256x4bfe638duc921ef686b623e04@mail.gmail.com
Responses: Re: vacuuming very large table problem (Decibel! <decibel@decibel.org>)
List: pgsql-admin
Hello list!

We use PostgreSQL as a backend to our email gateway and keep all
emails in the database. We're running PostgreSQL version 7.4.8 (yes, I know it's
old) with a rather specific table schema (the application was designed
that way): all emails are split into 2kB parts and fed into
pg_largeobject. So, long story short, I now have a catch-22 situation:
the database is using about 0.7TB and we are running out of space ;-)
I can delete some old stuff, but I cannot run VACUUM FULL to reclaim
disk space (it takes way more than a full weekend), and I also cannot
dump/restore, as there's no free space (that needs roughly 2x the database size).
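
For context, here's roughly how I've been checking where the space goes;
a minimal sketch, going by pg_class.relpages (a count of 8kB pages,
refreshed by VACUUM/ANALYZE, so only approximate):

  -- Rough space accounting: relpages is a count of 8kB pages,
  -- updated by VACUUM/ANALYZE, so it's approximate but close enough.
  SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
    FROM pg_class
   ORDER BY relpages DESC
   LIMIT 10;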

So, with these restrictions applied, I figured out that I could somehow
zero out all the old entries in pg_largeobject, or even physically delete
those files, and rebuild all the necessary indexes.
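
To be concrete, what I'm picturing is something like the following; just a
sketch, and the loid cutoff is made up for illustration (I'd have to work
out the real boundary between old and new messages first):

  -- Hypothetical cutoff: everything below this loid is "old".
  -- DELETE only marks the rows dead; the space on disk isn't
  -- released until some form of vacuum processes pg_largeobject.
  DELETE FROM pg_largeobject WHERE loid < 1000000;

  -- Rebuild pg_largeobject's index afterwards.
  REINDEX TABLE pg_largeobject;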

What is the best way to do this?
IMO, dd'ing /dev/zero over those files will cause Postgres to
reinitialize the now-empty blocks, and after that I will still need to
run VACUUM FULL over 0.7TB, am I right?
And if I delete them and then start the postmaster, there will be a lot of
complaining, but will the latest data survive?

How can I delete, for instance, the first 70% of the data reasonably fast?
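
To give an idea of the direction I'm thinking in: deleting in loid ranges
and running a plain (non-FULL) VACUUM between batches, which as I understand
it only marks the dead space reusable rather than shrinking the files; the
ranges below are invented for illustration:

  -- Delete in ranges so each transaction stays manageable;
  -- the boundaries are invented for illustration.
  DELETE FROM pg_largeobject WHERE loid < 100000;
  VACUUM pg_largeobject;   -- marks dead space reusable, doesn't shrink files
  DELETE FROM pg_largeobject WHERE loid >= 100000 AND loid < 200000;
  VACUUM pg_largeobject;
  -- ...and so on; only VACUUM FULL (or a dump/restore) actually
  -- returns the space to the operating system.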

P.S.  Please cc me, as I'm not subscribed yet.
Thanks in advance!

regards,
if
