Merlin Moncure <mmoncure@gmail.com> writes:
> On Mon, Jan 7, 2013 at 10:16 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Takeshi Yamamuro <yamamuro.takeshi@lab.ntt.co.jp> writes:
>>> The attached is a patch that improves compression speed, at the cost
>>> of compression ratio, in backend/utils/adt/pg_lzcompress.c.
>> Why would that be a good tradeoff to make? Larger stored values require
>> more I/O, which is likely to swamp any CPU savings in the compression
>> step. Not to mention that a value once written may be read many times,
>> so the extra I/O cost could be multiplied many times over later on.
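To put a number on it, the ratio shows up one-for-one in the bytes that
every later read has to fetch. A quick psql sketch of the comparison
(table name, column names, and the test value are invented for
illustration; pg_column_size() and SET STORAGE are standard Postgres):

    -- Two copies of the same compressible value: column c uses the
    -- default EXTENDED strategy (compressed), column u uses EXTERNAL
    -- (out-of-line but uncompressed).  pg_column_size() reports the
    -- bytes actually stored, i.e. what each read must fetch.
    CREATE TABLE comp_demo (c text, u text);
    ALTER TABLE comp_demo ALTER COLUMN u SET STORAGE EXTERNAL;
    INSERT INTO comp_demo
        SELECT v, v FROM (SELECT repeat('the quick brown fox ', 2000) AS v) s;
    SELECT pg_column_size(c) AS stored_compressed,
           pg_column_size(u) AS stored_uncompressed
      FROM comp_demo;

The gap between those two numbers is the I/O saved on every subsequent
fetch of the value; a weaker compressor narrows that gap.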
> I disagree. pg compression is so awful it's almost never a net win.
> I turn it off.
One report doesn't prove it's useless, but even if that's so on your
data, why would making it even less effective be a win?
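(For reference, "turning it off" means forcing uncompressed storage on
a per-column basis; a minimal sketch, assuming a hypothetical my_table
with a large payload column:

    -- EXTERNAL keeps big values out-of-line in the TOAST table but
    -- skips compression entirely.
    ALTER TABLE my_table ALTER COLUMN payload SET STORAGE EXTERNAL;

    -- Verify: pg_attribute.attstorage is 'x' for EXTENDED (the
    -- compressible varlena default) and 'e' for EXTERNAL.
    SELECT attname, attstorage
      FROM pg_attribute
     WHERE attrelid = 'my_table'::regclass AND attnum > 0;

Only newly stored values are affected; existing rows keep whatever
storage form they were written with.)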
>> Another thing to keep in mind is that the compression area in general
>> is a minefield of patents. We're fairly confident that pg_lzcompress
>> as-is doesn't fall foul of any, but any significant change there would
>> probably require more research.
> A minefield of *expired* patents. Fast LZ-based compression is used
> all over the place -- for example by Lucene.
The patents that had to be dodged for original LZ compression are gone,
true, but what's your evidence for saying that newer versions don't have
newer patents?
regards, tom lane