Re: Table and Index compression - Mailing list pgsql-hackers

From Greg Stark
Subject Re: Table and Index compression
Date
Msg-id 407d949e0908070233r51cf352dxb28245262d329685@mail.gmail.com
In response to Re: Table and Index compression  (Pierre Frédéric Caillaud<lists@peufeu.com>)
Responses Re: Table and Index compression  (Sam Mason <sam@samason.me.uk>)
List pgsql-hackers
2009/8/7 Pierre Frédéric Caillaud <lists@peufeu.com>:
>
> Also, about compressed NTFS : it can give you disk-full errors on read().

I suspect it's unavoidable for similar reasons to the problems
Postgres faces. When you issue a read() you have to find space in the
filesystem cache to hold the data. Some other data has to be evicted.
If that data doesn't compress as well as it did previously it could
take more space and cause the disk to become full.

This also implies that fsync() could generate that error...

> Back to the point of how to handle disk full errors :
> - we could write a file the size of shared_buffers at startup
> - if a write() reports disk full, delete the file above
> - we now have enough space to flush all of shared_buffers
> - flush and exit gracefully

Unfortunately that doesn't really help. It only addresses the issue
for a single backend (or for however many are running when the error
first occurs). The next connection could read in new data that
expands, and now you have no slop space left.

Put another way, we don't want to exit at all, gracefully or not. We
want to throw an error, abort the transaction (or subtransaction), and
keep going.


--
greg
http://mit.edu/~gsstark/resume.pdf

