Re: Page Checksums + Double Writes - Mailing list pgsql-hackers

From: Jesper Krogh
Subject: Re: Page Checksums + Double Writes
Date:
Msg-id: 4EF2F1C2.6080107@krogh.cc
In response to: Re: Page Checksums + Double Writes (Florian Weimer <fweimer@bfk.de>)
List: pgsql-hackers
On 2011-12-22 09:42, Florian Weimer wrote:
> * David Fetter:
>
>> The issue is that double writes needs a checksum to work by itself,
>> and page checksums more broadly work better when there are double
>> writes, obviating the need to have full_page_writes on.
> How desirable is it to disable full_page_writes?  Doesn't it cut down
> recovery time significantly because it avoids read-modify-write cycles
> with a cold cache?
What are the downsides of having full_page_writes enabled, apart from the
log volume? The manual mentions something about speed, but it is a bit
unclear where that cost would come from, since the full pages must be in
memory while they are being worked on anyway.

Anyway, I have an archive_command that looks like:
archive_command = 'test ! -f /data/wal/%f.gz && gzip --fast < %p > /data/wal/%f.gz'

It brings somewhere between a 50 and 75% reduction in log volume, at "no
cost" on the production system (since gzip just occupies one of the many
cores on the system), and it can easily keep up even during quite heavy
writes.
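
The obvious counterpart on the restore side would be along these lines
(sketch only, assuming the same /data/wal layout as above, not quoted from
a running setup):

restore_command = 'gunzip -c /data/wal/%f.gz > %p'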

Recovery is a bit more tricky, because hooking gunzip into the
restore_command makes the system run a serial replay log, gunzip, read
data, replay log cycle per segment, where the gunzip of the other log
files could easily be done while replay is being done on one.

So a "straightforward" recovery will cost in recovery time, but that can 
be dealt with.
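
Just as a sketch (the script name restore_wal.sh, the /data/wal_cache
directory and the prefetch depth of 4 are made up for illustration, and it
is untested): a restore_command wrapper that serves the requested segment
and, in the background, pre-decompresses the next few archived segments so
the gunzip overlaps with replay of the current one.

#!/bin/bash
# restore_wal.sh -- sketch only, names made up for illustration.
# Used as: restore_command = '/usr/local/bin/restore_wal.sh %f %p'
#
# Serves the requested WAL segment and pre-decompresses the next few
# archived segments in the background, so gunzip overlaps with replay.

ARCHIVE=/data/wal        # compressed archive written by archive_command above
CACHE=/data/wal_cache    # already-decompressed segments waiting for replay
PREFETCH=4               # how many segments to decompress ahead

f="$1"   # %f: segment file name requested by the server
p="$2"   # %p: path the server wants the segment copied to

mkdir -p "$CACHE"

# Serve from the cache if an earlier prefetch already decompressed it,
# otherwise decompress it right now.
if [ -f "$CACHE/$f" ]; then
    mv "$CACHE/$f" "$p" || exit 1
else
    [ -f "$ARCHIVE/$f.gz" ] || exit 1
    gunzip -c "$ARCHIVE/$f.gz" > "$p" || exit 1
fi

# Decompress the next few segments in the background while this one replays.
# (Cleanup of leftover cache files after recovery is left out of the sketch.)
(
    ls "$ARCHIVE" | grep '\.gz$' | sort |
        sed -n "/^$f\.gz\$/,\$p" | tail -n +2 | head -n "$PREFETCH" |
        while read -r g; do
            seg="${g%.gz}"
            if [ ! -f "$CACHE/$seg" ]; then
                gunzip -c "$ARCHIVE/$g" > "$CACHE/$seg.tmp" &&
                    mv "$CACHE/$seg.tmp" "$CACHE/$seg"
            fi
        done
) &

exit 0

With something along those lines the decompression of the upcoming
segments runs while the current one replays, so most of the serialization
should go away.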

Jesper
-- 
Jesper

