On Thu, Aug 16, 2012 at 9:30 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Jeff Janes <jeff.janes@gmail.com> wrote:
>
>> So a server that is completely free of
>> user activity will still generate an endless stream of WAL files,
>> averaging one file per max(archive_timeout, checkpoint_timeout).
>> That comes out to one 16MB file per hour (since it is not possible
>> to set checkpoint_timeout > 1h), which seems a bit much when
>> absolutely no user-data changes are occurring.
>
...
>
> BTW, that's also why I wrote the pg_clearxlogtail utility (source
> code on pgfoundry). We pipe our archives through that and gzip,
> which changes this to an endless stream of 16KB files. Those three
> orders of magnitude can make all the difference. :-)
Thanks. Do you put the pg_clearxlogtail and gzip steps into the
archive_command itself, or just do a simple copy into the archive and
then have a cron job do the processing on the archived files later?
I'm not really sure what the failure modes are for having a pipeline
built into the archive_command, e.g. something like the sketch below.
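For concreteness, here is roughly what I have in mind (just an
untested sketch; the archive directory path is made up):

    archive_command = 'pg_clearxlogtail < %p | gzip > /var/lib/pgsql/archive/%f.gz'

What worries me is that the shell reports only the exit status of the
last command in a pipeline, so if pg_clearxlogtail failed partway
through, gzip could still exit 0 and the archiver would mark the
segment as safely archived even though the compressed copy is
truncated.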
Thanks,
Jeff