Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Satoshi Nagayasu
Subject Re: Compression of full-page-writes
Msg-id 52200C81.4000108@uptime.jp
In response to Compression of full-page-writes  (Fujii Masao <masao.fujii@gmail.com>)
Responses Re: Compression of full-page-writes  (Satoshi Nagayasu <snaga@uptime.jp>)
List pgsql-hackers

(2013/08/30 11:55), Fujii Masao wrote:
> Hi,
>
> Attached patch adds new GUC parameter 'compress_backup_block'.
> When this parameter is enabled, the server simply compresses FPWs
> (full-page writes) in WAL using pglz_compress() before inserting them
> into the WAL buffers. The compressed FPWs are then decompressed
> during recovery. This is a very simple patch.
>
> The purpose of this patch is the reduction of WAL size.
> Under heavy write load, the server needs to write a large amount of
> WAL, and this is likely to be a bottleneck. What's worse, in
> replication, a large amount of WAL has a harmful effect not only on
> WAL writing in the master, but also on WAL streaming and WAL writing
> in the standby. We would also need to spend more money on storage to
> hold such a large volume of data.
> I'd like to alleviate these problems by reducing the WAL size.
>
> My idea is very simple: just compress FPWs, because FPWs are a big
> part of WAL. I used pglz_compress() as the compression method, but
> you might think another method is better. We can add something like
> an FPW-compression hook for that later. The patch adds a new GUC
> parameter, but I'm thinking of merging it into the full_page_writes
> parameter to avoid increasing the number of GUCs. That is, I'm
> thinking of changing full_page_writes so that it can accept the new
> value 'compress'.
>
> I measured how much WAL this patch can reduce, by using pgbench.
>
> * Server spec
>    CPU: 8core, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
>    Mem: 16GB
>    Disk: 500GB SSD Samsung 840
>
> * Benchmark
>    pgbench -c 32 -j 4 -T 900 -M prepared
>    scaling factor: 100
>
>    checkpoint_segments = 1024
>    checkpoint_timeout = 5min
>    (every checkpoint during the benchmark was triggered by checkpoint_timeout)
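
For reference, the round trip described above (compress the page image
before it enters the WAL buffers, decompress it at recovery) can be
sketched roughly as follows. This is an illustrative sketch only: zlib
stands in for PostgreSQL's internal pglz compressor, and the function
names are hypothetical, not the patch's actual code.

```python
# Illustrative sketch only: zlib stands in for pglz_compress(),
# PostgreSQL's internal LZ-family compressor. Function names are
# hypothetical, not the patch's actual symbols.
import zlib

BLCKSZ = 8192  # PostgreSQL's default page size

def compress_backup_block(page: bytes) -> bytes:
    """Compress the full-page image before it enters the WAL buffers."""
    compressed = zlib.compress(page)
    # Give up and store the raw page if compression does not help
    # (pglz similarly abandons incompressible input).
    return compressed if len(compressed) < len(page) else page

def restore_backup_block(data: bytes) -> bytes:
    """Decompress the image when recovery replays the record."""
    try:
        return zlib.decompress(data)
    except zlib.error:
        return data  # was stored uncompressed

# A freshly initialized heap page is mostly zero bytes,
# so it compresses extremely well.
page = bytes(BLCKSZ)
wal_image = compress_backup_block(page)
assert restore_backup_block(wal_image) == page
print(len(page), "->", len(wal_image))
```

In the real patch the compressed length would be recorded in the WAL
record header, so recovery knows whether to decompress rather than
probing as this sketch does.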

I believe the amount of backup blocks in the WAL files is affected by
how often checkpoints occur, particularly under such an
update-intensive workload.

Under your configuration, checkpoints would occur quite frequently.
So, you need to increase checkpoint_timeout in order to determine
whether the patch's benefit holds under realistic conditions.
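
To put a number on that: an FPW is taken only on the first
modification of each page after a checkpoint, so more frequent
checkpoints mean more FPWs for the same update stream. A toy
simulation (hypothetical parameters, not pgbench itself) illustrates
the effect:

```python
# Toy model of FPW volume vs. checkpoint frequency (not pgbench):
# a full-page write is emitted only on the first touch of a page
# after each checkpoint, so more checkpoints mean more FPWs for
# the same number of updates.
import random

def count_fpws(n_updates: int, n_pages: int, n_checkpoints: int,
               seed: int = 42) -> int:
    rng = random.Random(seed)
    updates_per_interval = n_updates // n_checkpoints
    fpws = 0
    for _ in range(n_checkpoints):
        touched = set()  # pages already FPW'd since the last checkpoint
        for _ in range(updates_per_interval):
            page = rng.randrange(n_pages)
            if page not in touched:
                touched.add(page)
                fpws += 1
    return fpws

few = count_fpws(100_000, 10_000, n_checkpoints=3)
many = count_fpws(100_000, 10_000, n_checkpoints=30)
print(few, many)  # more frequent checkpoints produce more FPWs
```

With a long checkpoint interval the FPW fraction of WAL shrinks, so a
benchmark with 5-minute checkpoints will overstate how much an
FPW-only compression scheme saves in steady state.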

Regards,

>
> * Result
>    [tps]
>    1386.8 (compress_backup_block = off)
>    1627.7 (compress_backup_block = on)
>
>    [the amount of WAL generated during running pgbench]
>    4302 MB (compress_backup_block = off)
>    1521 MB (compress_backup_block = on)
>
> At least in my test, the patch could reduce the WAL size to one-third!
>
> The patch is WIP yet. But I'd like to hear the opinions about this idea
> before completing it, and then add the patch to next CF if okay.
>
> Regards,
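
If the parameter is eventually folded into full_page_writes as Fujii-san
suggests above, the configuration might look like this (a sketch of the
proposed interface, not committed syntax):

```
# postgresql.conf -- sketch of the proposed interface, not committed syntax
full_page_writes = 'compress'    # proposed third value alongside on/off

# or, with the WIP patch as posted:
compress_backup_block = on
```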

-- 
Satoshi Nagayasu <snaga@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp


