Re: Compression of full-page-writes - Mailing list pgsql-hackers

From Fujii Masao
Subject Re: Compression of full-page-writes
Msg-id CAHGQGwFuWdA-Y7OWScBPKM17xDVDtdjBYRqxANLMVNhbPj5g4Q@mail.gmail.com
In response to Re: Compression of full-page-writes  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Fri, Aug 30, 2013 at 1:43 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Aug 30, 2013 at 8:25 AM, Fujii Masao <masao.fujii@gmail.com> wrote:
>> Hi,
>>
>> The attached patch adds a new GUC parameter, 'compress_backup_block'.
>> When this parameter is enabled, the server compresses each FPW
>> (full-page write) with pglz_compress() before inserting it into the
>> WAL buffers. The compressed FPW is then decompressed during recovery.
>> This is a very simple patch.
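>>
>> For reference, the call shape is roughly as below. This is only an
>> illustrative sketch against the existing pglz API, not the exact
>> patch code; 'page' stands for the block image being logged:
>>
>>     PGLZ_Header *dest = palloc(PGLZ_MAX_OUTPUT(BLCKSZ));
>>
>>     /* on insert: compress the page image before it enters WAL */
>>     if (pglz_compress(page, BLCKSZ, dest, PGLZ_strategy_default))
>>     {
>>         /* success: store VARSIZE(dest) bytes as the compressed FPW */
>>     }
>>     else
>>     {
>>         /* incompressible data: fall back to storing the raw page */
>>     }
>>
>>     /* in recovery: restore the original page image */
>>     pglz_decompress(dest, page);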
>>
>> The purpose of this patch is to reduce WAL size.
>> Under a heavy write load, the server needs to write a large amount
>> of WAL, and this is likely to become a bottleneck. What's worse, in
>> replication, a large amount of WAL harms not only WAL writing on the
>> master but also WAL streaming and WAL writing on the standby. We
>> would also need to spend more money on storage to hold so much data.
>> I'd like to alleviate these problems by reducing WAL size.
>>
>> My idea is very simple: just compress FPWs, because they make up a
>> big part of WAL. I used pglz_compress() as the compression method,
>> but other methods might be better; we could add something like an
>> FPW-compression hook for that later. The patch adds a new GUC
>> parameter, but I'm thinking of merging it into the full_page_writes
>> parameter to avoid increasing the number of GUCs. That is, I'm
>> thinking of changing full_page_writes so that it accepts the new
>> value 'compress'.
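>>
>> In postgresql.conf terms: instead of the patch's current
>>
>>     compress_backup_block = on
>>
>> one would simply write
>>
>>     full_page_writes = compress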
>>
>> I measured how much WAL this patch can save, using pgbench.
>>
>> * Server spec
>>   CPU: 8core, Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz
>>   Mem: 16GB
>>   Disk: 500GB SSD Samsung 840
>>
>> * Benchmark
>>   pgbench -c 32 -j 4 -T 900 -M prepared
>>   scaling factor: 100
>>
>>   checkpoint_segments = 1024
>>   checkpoint_timeout = 5min
>>   (every checkpoint during the benchmark was triggered by checkpoint_timeout)
>>
>> * Result
>>   [tps]
>>   1386.8 (compress_backup_block = off)
>>   1627.7 (compress_backup_block = on)
>>
>>   [the amount of WAL generated during running pgbench]
>>   4302 MB (compress_backup_block = off)
>>   1521 MB (compress_backup_block = on)
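>>
>>   (If you want to reproduce the WAL figures: one way to measure the
>>   WAL volume of a run is to sample pg_current_xlog_location() before
>>   and after the run and take the difference, e.g.
>>
>>     SELECT pg_xlog_location_diff('after_lsn', 'before_lsn');
>>
>>   where 'after_lsn' and 'before_lsn' stand for the two sampled
>>   locations.)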
>
> This is really nice data.
>
> If you want, you could also try one of the tests Heikki posted for
> another of my patches, here:
> http://www.postgresql.org/message-id/51366323.8070606@vmware.com
>
> Also, if possible, test with fewer clients (1, 2, 4) and maybe with
> more frequent checkpoints.
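>
> For illustration only (my sketch, not a prescribed setup), that
> could mean something like
>
>   pgbench -c 1 -j 1 -T 900 -M prepared
>
> with checkpoint_timeout lowered, say to 1min, so that checkpoints
> fire more often.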
>
> This is just to show the benefits of this idea with other kinds of
> workloads.

Yep, I will do more tests.

> I think we can do these tests later as well. I mentioned them
> because some time back (probably 6 months ago) one of my colleagues
> tried exactly the same idea of compressing FPWs with LZ and a few
> other methods, but it turned out that even though the WAL size was
> reduced, performance went down, which is not the case in the data
> you have shown, even though you used an SSD. He may have made some
> mistake, as he was not very experienced, but I still think it's good
> to check various workloads.

I'd appreciate it if you could test the patch with an HDD; right now I have no machine with one.

Regards,

-- 
Fujii Masao


