On 30 August 2013 04:55, Fujii Masao <masao.fujii@gmail.com> wrote:
> My idea is very simple: just compress FPWs, because FPWs are
> a big part of WAL. I used pglz_compress() as the compression method,
> but you might think that another method is better. We can add
> something like an FPW-compression hook for that later. The patch
> adds a new GUC parameter, but I'm thinking of merging it into the
> full_page_writes parameter to avoid increasing the number of GUCs.
> That is, I'm thinking of changing full_page_writes so that it can
> accept the new value 'compress'.
> * Result
> [tps]
> 1386.8 (compress_backup_block = off)
> 1627.7 (compress_backup_block = on)
>
> [the amount of WAL generated during running pgbench]
> 4302 MB (compress_backup_block = off)
> 1521 MB (compress_backup_block = on)
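Just so we're picturing the same thing, here's roughly what I imagine
the per-block step looks like, using the pglz API as it stands in 9.3
(src/include/utils/pg_lzcompress.h). This is my sketch only, not code
from your patch; compress_backup_block() and the scratch buffer are
names I've invented:

#include "postgres.h"
#include "utils/pg_lzcompress.h"

/* scratch space sized for the pglz worst case; union keeps it aligned */
static union
{
    PGLZ_Header hdr;
    char        buf[PGLZ_MAX_OUTPUT(BLCKSZ)];
} scratch;

/*
 * Try to compress a BLCKSZ page image.  Returns a pointer to the bytes
 * to append to the WAL record and sets *len; falls back to the raw page
 * when pglz declines (incompressible input), so redo can reconstruct
 * the page either way.
 */
static char *
compress_backup_block(char *page, uint32 *len, bool *compressed)
{
    if (pglz_compress(page, BLCKSZ, &scratch.hdr, PGLZ_strategy_default))
    {
        *len = VARSIZE(&scratch.hdr);   /* total bytes incl. header */
        *compressed = true;
        return scratch.buf;
    }

    /* not worth compressing: store the raw page, as today */
    *len = BLCKSZ;
    *compressed = false;
    return page;
}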
Compressing FPWs definitely makes sense for bulk actions.
I'm worried that any performance loss will show up as greatly
elongated transaction response times immediately after a checkpoint,
which were already a problem. I'd be interested to see the response
time curves there.
Maybe it makes sense to compress FPWs only if we do, say, > N FPW
writes in a transaction. Just ideas.
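Something like the following, where FPW_COMPRESS_THRESHOLD,
xact_fpw_count and should_compress_fpw() are all invented names,
purely to illustrate the heuristic:

#include "postgres.h"

#define FPW_COMPRESS_THRESHOLD  8       /* the "N"; would need tuning */

static int  xact_fpw_count = 0;         /* reset at transaction start */

static bool
should_compress_fpw(void)
{
    /*
     * Only pay the compression CPU cost once a transaction is clearly
     * doing bulk work; short OLTP transactions keep writing raw pages,
     * so their post-checkpoint response times stay flat.
     */
    return ++xact_fpw_count > FPW_COMPRESS_THRESHOLD;
}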
I was also thinking about this in connection with our previous
discussion of double buffering. FPWs are made in the foreground, so
they will always slow down transaction rates. If we could move to
double buffering we could avoid FPWs altogether. Thoughts?
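To make that concrete, the scheme I have in mind looks roughly like
this; all names are hypothetical and error handling is omitted:

#include "postgres.h"
#include <unistd.h>

/*
 * Sketch of double buffering: every dirty page is written twice, first
 * sequentially into a staging (double-write) file, then in place.  A
 * torn in-place write can be repaired from the staged copy at recovery,
 * so no full-page image ever has to go into WAL, and the foreground
 * FPW cost disappears.
 */
static void
write_buffer_doubly(int dw_fd, int rel_fd, const char *page,
                    off_t dw_offset, off_t rel_offset)
{
    /* 1. stage the page; if we tear here, the in-place copy is intact */
    pwrite(dw_fd, page, BLCKSZ, dw_offset);
    fsync(dw_fd);               /* staged copy is now durable */

    /* 2. write in place; if we tear here, recovery restores the page
     *    from the staged copy */
    pwrite(rel_fd, page, BLCKSZ, rel_offset);
}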
--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services