On Fri, Aug 7, 2009 at 11:29 AM, Sam Mason<sam@samason.me.uk> wrote:
> When you choose a compression algorithm you know how much space a worst
> case compression will take (i.e. lzo takes up to 8% more for a 4kB block
> size). This space should be reserved in case of situations like the
> above and the filesystem shouldn't over-commit on this.
>
> Never had to think about this before though so I'm probably missing
> something obvious.
Well, most users want compression for the space savings. Running out
of space sooner than without compression, when most of the space is
actually unused, would disappoint them.
Also, I'm puzzled why the space increase would be proportional to the
amount of data and be more than 300 bytes. There's no reason it
wouldn't be a small fixed amount. The ideal is that you set aside one
bit -- if the bit is set the rest is compressed and has to save at
least one bit, and if the bit is not set then the rest is stored
uncompressed. Maximum bloat is one bit. In real systems it's more
likely to be a byte or a word.
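
To illustrate, here is a minimal sketch of that "one flag byte" idea.
zlib stands in for whatever codec the filesystem actually uses, and
store_block and the flag constants are made up for illustration; the
point is only that the fallback-to-raw path bounds the worst case at
the size of the flag.

    /*
     * Try to compress a block; if the result is not strictly smaller,
     * store the data raw.  Worst-case growth is the single flag byte,
     * not a percentage of the block.
     */
    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    #define FLAG_RAW        0
    #define FLAG_COMPRESSED 1

    /*
     * Writes [flag byte][payload] into 'out', which must hold at least
     * 1 + in_len bytes for the raw fallback.  Returns the total number
     * of bytes written.
     */
    size_t store_block(const uint8_t *in, size_t in_len,
                       uint8_t *out, size_t out_cap)
    {
        uLongf clen = out_cap > 1 ? (uLongf)(out_cap - 1) : 0;

        if (clen > 0 &&
            compress2(out + 1, &clen, in, (uLong)in_len,
                      Z_DEFAULT_COMPRESSION) == Z_OK &&
            clen < in_len)
        {
            out[0] = FLAG_COMPRESSED;   /* compressed copy saved space */
            return 1 + (size_t)clen;
        }

        out[0] = FLAG_RAW;              /* fall back to the raw bytes */
        memcpy(out + 1, in, in_len);
        return 1 + in_len;
    }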
--
greg
http://mit.edu/~gsstark/resume.pdf