Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Heikki Linnakangas
Subject Re: Compression of full-page-writes
Date
Msg-id 54860B1B.2030401@vmware.com
In response to Re: Compression of full-page-writes  (Andres Freund <andres@2ndquadrant.com>)
Responses Re: Compression of full-page-writes  (Michael Paquier <michael.paquier@gmail.com>)
List pgsql-hackers
On 12/08/2014 09:21 PM, Andres Freund wrote:
> I still think that just compressing the whole record if it's above a
> certain size is going to be better than compressing individual
> parts. Michael argued that that'd be complicated because of the varying
> size of the required 'scratch space'. I don't buy that argument
> though. It's easy enough to simply compress all the data in some fixed
> chunk size. I.e. always compress 64kb in one go. If there's more,
> compress that independently.

Doing it in fixed-size chunks doesn't help - you have to hold onto the 
compressed data until it's written to the WAL buffers.

But you could just allocate a "large enough" scratch buffer, and give up 
if the compressed data doesn't fit. If it doesn't fit in e.g. 3 * 8kb, 
it didn't compress very well, so there's probably no point in 
compressing it anyway. Now, an exception to that might be a record that 
contains something other than page data, like a commit record with 
millions of subxids, but I think we could live with not compressing 
those, even though it would be beneficial to do so.

- Heikki



