Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Compression of full-page-writes
Date
Msg-id 20141230122744.GC27028@alap3.anarazel.de
In response to Re: Compression of full-page-writes  (Michael Paquier <michael.paquier@gmail.com>)
Responses Re: Compression of full-page-writes  (Bruce Momjian <bruce@momjian.us>)
List pgsql-hackers
On 2014-12-30 21:23:38 +0900, Michael Paquier wrote:
> On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis <pgsql@j-davis.com> wrote:
> > On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
> >> Speeding up the CRC calculation obviously won't help with the WAL volume
> >> per se, ie. you still generate the same amount of WAL that needs to be
> >> shipped in replication. But then again, if all you want to do is to
> >> reduce the volume, you could just compress the whole WAL stream.
> >
> > Was this point addressed?
> Compressing the whole record is interesting for multi-insert records,
> but as we need to keep the compressed data in a pre-allocated buffer
> until the WAL is written, we can only compress things within a given
> size range. The point is, even if we define a lower bound, compression
> is going to perform badly with an application that generates, for
> example, many small records that are just above the lower bound...
> Unsurprisingly, for small records this was bad:

So why are you bringing it up? That's not an argument for anything,
except not doing it in such a simplistic way.
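[For illustration, the small-record problem Michael describes can be seen with any general-purpose compressor: the format's fixed framing overhead dominates tiny inputs, so a small record can come out *larger* after compression, while a full-page-sized record shrinks substantially. This is a sketch using Python's zlib as a stand-in; it is not PostgreSQL's actual WAL or full-page-write compression code.]

```python
# Sketch: why compressing very small records can backfire.
# zlib here stands in for a generic compressor; illustrative only.
import os
import zlib

small = os.urandom(64)    # small, incompressible record payload
large = b"\x00" * 8192    # large, highly compressible page image

small_z = zlib.compress(small)
large_z = zlib.compress(large)

# Framing overhead (header, checksum, block headers) makes the
# incompressible small record grow past its original size...
assert len(small_z) > len(small)
# ...while the page-sized record shrinks to a small fraction.
assert len(large_z) < len(large) // 10
```

[Hence the appeal of a lower bound on record size before attempting compression, and also why a workload full of records just above that bound is the worst case: they pay the compression cost but see little or no size benefit.]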

Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


