Full_page_compress is not intended to be used with a PITR slave. It is
intended for the case where both an online backup and archive logs are
kept for archive recovery, which is now a very common way to operate
PostgreSQL.
I've just posted my evaluation of the patch as a reply in another
thread on the same proposal (sorry, I created a new thread because the
old one did not seem suitable).
It compares log compression with the gzip case. Our proposal can also
be combined with gzip. Its overall overhead is slightly less than
simply copying WAL with cat, so my proposal does not introduce serious
overhead.
Please refer to the thread "Archive log compression keeping physical log
available in the crash recovery". I would appreciate further
opinions/comments on this, and suggestions as to which evaluations would
be useful.
I've posted two commands (archive and restore) and a small patch.
The two commands can be treated as contrib modules, and the patch
itself works if WAL is simply copied to the archive directory.
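As a rough illustration of how such a command pair plugs into
PostgreSQL's standard continuous-archiving mechanism (the command names
below are hypothetical placeholders, not the ones from the posted patch;
`archive_command` and `restore_command` themselves are the standard
PostgreSQL settings):

```
# postgresql.conf sketch -- the archive-side command would strip full-page
# images before the segment is stored, and the restore-side command would
# reconstruct a segment usable by archive recovery.
archive_command = 'compress_wal %p /mnt/server/archivedir/%f'
restore_command = 'decompress_wal /mnt/server/archivedir/%f %p'
```

Since the stripped output no longer matches the on-disk WAL format, the
restore-side command is what lets the archived segments still be replayed
during archive recovery.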
Regards,
Koichi Suzuki
Tom Lane wrote:
> Koichi Suzuki <suzuki.koichi@oss.ntt.co.jp> writes:
>> Tom Lane wrote:
>>> Doesn't this break crash recovery on PITR slaves?
>
>> Compressed archive log contains the same data as full_page_writes off
>> case. So the influence to PITR slaves is the same as full_page_writes off.
>
> Right. So what is the use-case for running your primary database with
> full_page_writes on and the slaves with it off? It doesn't seem like
> a very sensible combination to me.
>
> Also, it seems to me that some significant performance hit would be
> taken by having to grovel through the log files to remove and re-add the
> full-page data. Plus you are actually writing *more* WAL data out of
> the primary, not less, because you have to save both the full-page
> images and the per-tuple data they normally replace. Do you have
> numbers showing that there's actually any meaningful savings overall?
>
> regards, tom lane
>
--
Koichi Suzuki