Is PostgreSQL the only thing using that disk space/partition?
Have you considered running a cron job that parses df output and
triggers a delete when disk usage reaches a set threshold? That would
also account for any unexpected non-PostgreSQL disk usage.
You would also want to consider the size of the old stored data when
deciding how many records to delete.
> To give you an idea of the figures we are talking about: Say we have
> a 250 GB disk. Normally we would use about 4-8 GB of database.
Given that you normally have 4-8GB of data and only have trouble when a
fault/error causes an excess of 200GB, I would also think about
stopping the recording under those conditions. If it takes 200GB of
data before a purge is triggered, then purging *all* old records will
only buy you a short period of extra space unless you also start
purging the beginning of the current, erroneous recording.
I am thinking that a cron job that emails/pages/SMSes you when disk
usage hits 50% would be a better solution; it would simply give you a
heads-up to find and fix the fault causing the excess usage.
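The same df parsing can drive the alert instead of a purge. A
hypothetical crontab entry (the schedule, mount point, and address are
placeholders, and "mail" stands in for whatever paging you use):

```shell
# m h dom mon dow  command -- check every 10 minutes, warn at 50% full
*/10 * * * * u=$(df -P /var/lib/pgsql | awk 'NR==2 {sub(/%/,"",$5); print $5}'); [ "$u" -ge 50 ] && echo "disk at ${u}%" | mail -s "pg disk warning" you@example.com
```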
--
Shane Ambler
pgSQL (at) Sheeky (dot) Biz
Get Sheeky @ http://Sheeky.Biz