Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Compression of full-page-writes
Date
Msg-id 20141208192152.GB24437@alap3.anarazel.de
In response to Re: Compression of full-page-writes  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Compression of full-page-writes  (Robert Haas <robertmhaas@gmail.com>)
Re: Compression of full-page-writes  (Heikki Linnakangas <hlinnakangas@vmware.com>)
Re: Compression of full-page-writes  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
On 2014-12-08 14:09:19 -0500, Robert Haas wrote:
> > records, just fpis. There is no evidence that we even want to compress
> > other record types, nor that our compression mechanism is effective at
> > doing so. Simple => keep name as compress_full_page_writes
> 
> Quite right.

I don't really agree with this. There's lots of records which can be
quite big where compression could help a fair bit. Most prominently
HEAP2_MULTI_INSERT + INIT_PAGE. During initial COPY that's the biggest
chunk of WAL. And these are big and repetitive enough that compression
is very likely to be beneficial.

I still think that just compressing the whole record if it's above a
certain size is going to be better than compressing individual
parts. Michael argued that that'd be complicated because of the varying
size of the required 'scratch space'. I don't buy that argument
though. It's easy enough to simply compress all the data in some fixed
chunk size. I.e. always compress 64kB in one go. If there's more,
compress that independently.
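To make the fixed-chunk idea concrete, here is a minimal sketch in Python (not patch code; PostgreSQL would use pglz and C buffers, and the function names here are hypothetical). Each 64kB chunk is compressed independently, so the scratch buffer never needs to grow with the record size:

```python
import zlib

CHUNK_SIZE = 64 * 1024  # compress at most 64 kB in one go

def compress_record(data: bytes) -> list[bytes]:
    # Split the record into fixed-size chunks and compress each one
    # independently; the required scratch space is bounded by CHUNK_SIZE
    # regardless of how large the record is.
    return [zlib.compress(data[i:i + CHUNK_SIZE])
            for i in range(0, len(data), CHUNK_SIZE)]

def decompress_record(chunks: list[bytes]) -> bytes:
    # Chunks are independent, so they can be decompressed one at a time
    # into a bounded buffer and concatenated.
    return b"".join(zlib.decompress(c) for c in chunks)
```

Repetitive payloads like multi-insert pages would be exactly the case where independent chunks still compress well, since the redundancy is local to each chunk.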

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


