On Tue, Oct 23, 2012 at 08:21:54PM -0400, Noah Misch wrote:
> Patch               tps@-c1  tps@-c2  tps@-c8  WAL@-c8
> HEAD,-F80               816     1644     6528  1821 MiB
> xlogscale,-F80          824     1643     6551  1826 MiB
> xlogscale+lz,-F80       717     1466     5924  1137 MiB
> xlogscale+lz,-F100      753     1508     5948  1548 MiB
>
> Those are short runs with no averaging of multiple iterations; don't put too
> much faith in the absolute numbers.

I decided to rerun those measurements with three 15-minute runs. I removed
the -F100 test and added wal_update_changes_v2.patch (delta encoding version)
to the mix. Median results:
Patch             tps@-c1  tps@-c2  tps@-c8  WAL@-c8
HEAD,-F80             832     1679     6797   44 GiB
scale,-F80            830     1679     6798   44 GiB
scale+lz,-F80         736     1498     6169   11 GiB
scale+delta,-F80      841     1713     7056   10 GiB
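
For reference, a harness along these lines would produce the table above; this
is a sketch, not the script actually used. The database name "bench", the
thread count, and the helper names are my own assumptions; -i, -F, -c, -j, and
-T are standard pgbench options.

```shell
# Run one configuration three times for 15 minutes each and report
# the median tps.  Assumes a database named "bench" already exists.
median_of_three() {
    sort -n | sed -n '2p'          # middle value of three numbers
}

init_bench() {
    pgbench -i -F 80 bench         # initialize at fillfactor 80
}

run_config() {                     # usage: run_config 8  (clients)
    for i in 1 2 3; do
        pgbench -c "$1" -j "$1" -T 900 bench |
            awk '/^tps/ { print $3; exit }'
    done | median_of_three
}
```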

The numbers varied little across runs. So we see the same general trends as
with the short runs; overall performance is slightly higher across the board,
and the fraction of WAL avoided is much higher. I suspect the patches shrink
WAL better in these longer runs because the WAL of a short run contains a
higher density of full-page images.
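
To see why, a toy model (my own illustration, not measurements from these
runs): the first change to a page after a checkpoint logs a full 8 KiB page
image, while subsequent changes log only small records, so full-page images
dominate WAL when each page is touched only a few times per checkpoint cycle.

```python
# Toy model of the WAL share taken by full-page images (FPIs).
# Assumed constants: 8 KiB pages and ~100-byte ordinary records.
PAGE = 8192
RECORD = 100

def fpi_fraction(updates_per_page_per_checkpoint):
    """Fraction of WAL bytes that are FPIs when each touched page
    logs one FPI plus (n - 1) ordinary records per checkpoint."""
    n = updates_per_page_per_checkpoint
    return PAGE / (PAGE + (n - 1) * RECORD)

short = fpi_fraction(5)    # short run: few updates amortize each FPI
long_ = fpi_fraction(200)  # long run: FPI cost spread over many updates
```

With these assumed sizes, the short run's WAL is over 95% FPIs, leaving little
for record-level compression to save, while the long run's is under 30%.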

From these results, I think the LZ approach is something we could provide only
as an option; CPU-bound workloads may not be our bread and butter, but we
shouldn't dock them 10% with no way to disable it. Amit's delta encoding
approach seems to be something we could safely enable across the board.
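
For concreteness, here is a minimal prefix/suffix delta encoder in the spirit
of that approach: it stores only the bytes that differ between the old and new
tuple versions. This is my own sketch; the function names, on-disk layout, and
fixed-width header are illustrative assumptions, not the patch's actual format.

```python
import struct

def delta_encode(old: bytes, new: bytes) -> bytes:
    """Record the lengths of the shared prefix and suffix, plus the
    changed middle bytes of the new version."""
    limit = min(len(old), len(new))
    p = 0
    while p < limit and old[p] == new[p]:
        p += 1
    s = 0
    while s < limit - p and old[len(old) - 1 - s] == new[len(new) - 1 - s]:
        s += 1
    middle = new[p:len(new) - s]
    return struct.pack("<III", p, s, len(middle)) + middle

def delta_decode(old: bytes, delta: bytes) -> bytes:
    """Rebuild the new version from the old version and the delta."""
    p, s, mlen = struct.unpack_from("<III", delta)
    middle = delta[12:12 + mlen]
    return old[:p] + middle + old[len(old) - s:] if s else old[:p] + middle
```

For a typical update that changes one field, the delta is a 12-byte header
plus the changed bytes, far smaller than logging the whole new tuple.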

Naturally, there are other compression and delta encoding schemes. Does
anyone feel the need to explore further alternatives?

We might eventually find the need for multiple, user-selectable WAL
compression strategies. I don't recommend taking that step yet.

nm