On 06/07/2013 05:38 PM, Andres Freund wrote:
> On 2013-06-07 17:27:28 +0200, Hannu Krosing wrote:
>> On 06/07/2013 04:54 PM, Andres Freund wrote:
>>> I mean, we don't necessarily need to make it configurable if we just add
>>> one canonical new "better" compression format. I am not sure that's
>>> sufficient since I can see usecases for 'very fast but not too well
>>> compressed' and 'very well compressed but slow', but I am personally not
>>> really interested in the second case, so ...
>> As decompression is often still fast for slow-but-good compression,
>> the obvious use case for the 2nd is read-mostly data
> Well. Those algorithms are still up to a magnitude or so slower
> decompressing than something like snappy, lz4 or even pglz while the
> compression ratio usually is only like 50-80% improved... So you really
> need a good bit of compressible data (so the amount of storage actually
> hurts) that you don't read all that often (since you would then
> bottleneck on decompression too often).
> That's just not something I run across too regularly.
While compression speed differs quite a bit between algorithms, the
slower compression may be more than offset by the better ratio once
real I/O is involved, as exemplified here:
http://www.citusdata.com/blog/64-zfs-compression
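
For illustration, a rough way to measure that trade-off on one's own
data (a minimal sketch in Python, with zlib level 1 standing in for the
fast codecs and lzma for a slow-but-good one; "sample.dat" is just a
placeholder input file):

    import time
    import zlib
    import lzma

    def bench(label, compress, decompress, data):
        # time one compress/decompress round trip and report the ratio
        t0 = time.perf_counter()
        comp = compress(data)
        t1 = time.perf_counter()
        decompress(comp)
        t2 = time.perf_counter()
        print("%s: ratio %.2fx, compress %.3fs, decompress %.3fs"
              % (label, len(data) / len(comp), t1 - t0, t2 - t1))

    data = open("sample.dat", "rb").read()   # placeholder test data
    bench("fast (zlib -1)", lambda d: zlib.compress(d, 1),
          zlib.decompress, data)
    bench("good (lzma)", lzma.compress, lzma.decompress, data)

On read-mostly data it is the ratio and the decompression column that
matter, since the saved I/O is paid back on every read while the
compression cost is paid only once.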
--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ