Re: Curiosity: what is PostgreSQL doing with data when "nothing" is happening? - Mailing list pgsql-novice

From Gavan Schneider
Subject Re: Curiosity: what is PostgreSQL doing with data when "nothing" is happening?
Msg-id 22003-1353974245-469259@sneakemail.com
In response to Re: Curiosity: what is PostgreSQL doing with data when "nothing" is happening?  ("Kevin Grittner" <kgrittn@mail.com>)
List pgsql-novice
On Monday, November 26, 2012 at 00:15, Kevin Grittner wrote:

>Impressive that bzip2 does two orders of magnitude better than gzip
>with this. ...

bzip2 has a few bytes of overhead for each additional large
block of the original file, so some or all of the difference may
only reflect my 8 MB vs the usual 16 MB WAL files. And this is
not a real saving until the results are put into a tarball, since
each tiny (or not-so-tiny) file still consumes an inode and a disk
allocation unit.

The differences that matter are the partly filled WAL files with
real data... I didn't test; rather, I'm going on my experience that
bzip2 distribution archives are always smaller than their
gzip alternatives. And bzip2 was available.
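The effect under discussion is easy to reproduce. A minimal sketch, using a buffer of zeros to stand in for the zero-filled tail of an empty WAL segment (sizes here are illustrative, not measurements of real WAL files):

```python
# Compare gzip/Deflate vs bzip2 on 16 MB of zeros, roughly the
# padding in an empty 16 MB WAL segment.
import bz2
import gzip

padding = b"\x00" * (16 * 1024 * 1024)

gz_size = len(gzip.compress(padding))
bz_size = len(bz2.compress(padding))

print(f"gzip:  {gz_size} bytes")
print(f"bzip2: {bz_size} bytes")
# bzip2's run-length front end collapses long zero runs far more
# aggressively than gzip's 32 KB Deflate window can, so bz_size
# comes out well below gz_size.
```

Both shrink the padding enormously, but bzip2's output is smaller by a large factor, consistent with what Kevin observed.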

>Gavan Schneider:
>>Would the universe as we know it be upset if there was a
>>postgresql.conf option such as:
>>
>> archive_zero_fill_on_segment_switch = on|off # default off
>>
>>This would achieve the idle compression result much more
>>elegantly (I know, it's still a hack) for those who have the
>>need, without, as far as I can tell, breaking anything else.
>
>The problem is that this would involve 16MB of writes to the OS for
>every WAL-file switch, which might even result in that much actual
>disk activity in many environments. The filter I wrote doesn't add
>any disk I/O ...
>
Point taken.


More musings...

Maybe an optimisation that could work for many would be for the
initial block of the WAL file to carry information about how much
of the file holds useful data, with short (i.e., valid-data-only)
WAL files being acceptable to postgres on restore/replication.
(Obviously this is not for the cluster's pool of
working/overwritten WAL files.)

During normal operations, where the WAL file is being overwritten,
the proposed flag in the initial file segment would be set to
zero to indicate that all page segments have to be checked on
replay (i.e., existing behaviour in crash recovery). At WAL
switchover, though, the first file segment would be updated with
the proposed flag set to the extent of valid data that follows,
and only the indicated data would be restored/replicated when the
file is read by postgres. Tools such as pg_clearxlogtail would
then only need to inspect the first part of the WAL file,
calculate the end_of_valid_data offset, and copy exactly that
much to output. This would save reading/checking the rest of the
WAL file and outputting the padding zeros.
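To make the idea concrete, here is a sketch of what such an archive-side filter might do. The on-disk layout (an 8-byte big-endian end-of-valid-data offset at a fixed position in the first block) is entirely hypothetical — real WAL page headers look nothing like this — but it shows the copy-only-the-valid-prefix behaviour being proposed:

```python
# Hypothetical filter: copy only the valid prefix of a WAL segment,
# as indicated by a proposed end_of_valid_data field in the header.
import struct

HEADER_OFFSET = 16              # hypothetical location of the proposed flag
SEGMENT_SIZE = 16 * 1024 * 1024

def copy_valid_prefix(src, dst):
    """Read the proposed flag and copy only that many bytes of src to dst."""
    header = src.read(HEADER_OFFSET + 8)
    (end_of_valid_data,) = struct.unpack_from(">Q", header, HEADER_OFFSET)
    if end_of_valid_data == 0:
        # Flag unset: fall back to existing behaviour and copy everything.
        end_of_valid_data = SEGMENT_SIZE
    dst.write(header[:min(len(header), end_of_valid_data)])
    remaining = end_of_valid_data - len(header)
    while remaining > 0:
        chunk = src.read(min(65536, remaining))
        if not chunk:           # hit EOF early; stop cleanly
            break
        dst.write(chunk)
        remaining -= len(chunk)
```

The zero-flag fallback is the point: a filter like this degrades gracefully to today's behaviour whenever the flag hasn't been written.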

The big world people could benefit since smaller update files
(esp. if compressed) can move around a network a lot faster in
replication environments.

AFAIK the downside would be one extra disk write per WAL
changeover. The worst case is simply the status quo,
i.e., the whole WAL file has to be processed because it has been
filled with data. Note that in this case you may as well leave the
proposed flag in the first file segment at zero (no extra write
needed), since this correctly indicates the whole file has to be processed.

The upside, at least for those who need to roll out incomplete
WAL files to satisfy timing needs, is that they could work with
files that are only as big as they need to be.

And a postgresql.conf switch could isolate the cost of that
minor extra file update per WAL changeover to those who can
benefit from it.
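In the same style as the earlier suggestion, such a switch might look something like this (the name is purely illustrative, not an actual PostgreSQL setting):

```
archive_mark_valid_extent_on_segment_switch = on|off # default off
```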

Worth any further thoughts?

Regards
Gavan Schneider


