Re: alternative compression algorithms? - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: alternative compression algorithms?
Date
Msg-id 5541614A.5030208@2ndquadrant.com
In response to Re: alternative compression algorithms?  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: alternative compression algorithms?
List pgsql-hackers
Hi,

On 04/29/15 23:54, Robert Haas wrote:
> On Mon, Apr 20, 2015 at 9:03 AM, Tomas Vondra
> <tomas.vondra@2ndquadrant.com> wrote:
>> Sure, it's not an ultimate solution, but it might help a bit. I do have
>> other ideas how to optimize this, but in the planner every millisecond
>> counts. Looking at 'perf top', pglz_decompress() is in the top 3.
>
> I suggested years ago that we should not compress data in
> pg_statistic.  Tom shot that down, but I don't understand why.  It
> seems to me that when we know data is extremely frequently accessed,
> storing it uncompressed makes sense.

I'm not convinced that skipping compression is a good idea - I suspect 
it would only move the time to TOAST and increase memory pressure (both 
in general and in shared buffers). But I think that using a more 
efficient compression algorithm would help a lot.

For example, when profiling the multivariate stats patch (with multiple 
quite large histograms), pglz_decompress is #1 in the profile, taking 
more than 30% of the time. After replacing it with lz4, the data are a 
bit larger, but decompression drops to ~0.25% of the profile and 
planning time drops proportionally.

It's not a silver bullet, but it would help a lot in those cases.
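The trade-off described above (slightly larger compressed data in exchange for much cheaper decompression) can be illustrated with a small micro-benchmark sketch. Since neither pglz nor lz4 is available in Python's standard library, zlib at two different compression levels stands in here purely to show the shape of the measurement; the payload and repetition counts are arbitrary choices, not from the original thread.

```python
# Hypothetical sketch: zlib at two levels stands in for "heavier" vs
# "lighter" compression, to illustrate the size-vs-decompression-time
# trade-off discussed in the thread (pglz vs lz4 are not available here).
import time
import zlib

# Fake, repetitive "histogram" payload, loosely mimicking the kind of
# array data stored in pg_statistic.
payload = b"histogram-bucket-0123456789;" * 4000

for level in (9, 1):  # level 9 = smaller output, level 1 = faster codec
    compressed = zlib.compress(payload, level)
    start = time.perf_counter()
    for _ in range(200):
        assert zlib.decompress(compressed) == payload
    elapsed = time.perf_counter() - start
    print(f"level={level} size={len(compressed)} "
          f"decompress_200x={elapsed:.4f}s")
```

In the same spirit as the numbers in the mail, the point is not the absolute timings but that a codec optimized for decompression speed can dominate a codec optimized for ratio when the data is read far more often than it is written - exactly the access pattern of planner statistics.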


--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


