On Thursday, October 25, 2012 5:43 AM, Noah Misch wrote:
On Tue, Oct 23, 2012 at 08:21:54PM -0400, Noah Misch wrote:
>> -Patch- -tps@-c1- -tps@-c2- -tps@-c8- -WAL@-c8-
>> HEAD,-F80 816 1644 6528 1821 MiB
>> xlogscale,-F80 824 1643 6551 1826 MiB
>> xlogscale+lz,-F80 717 1466 5924 1137 MiB
>> xlogscale+lz,-F100 753 1508 5948 1548 MiB
>
>> Those are short runs with no averaging of multiple iterations; don't put too
>> much faith in the absolute numbers.
> I decided to rerun those measurements with three 15-minute runs. I removed
> the -F100 test and added wal_update_changes_v2.patch (delta encoding version)
> to the mix. Median results:
> -Patch- -tps@-c1- -tps@-c2- -tps@-c8- -WAL@-c8-
> HEAD,-F80 832 1679 6797 44 GiB
> scale,-F80 830 1679 6798 44 GiB
> scale+lz,-F80 736 1498 6169 11 GiB
> scale+delta,-F80 841 1713 7056 10 GiB
> The numbers varied little across runs. So we see the same general trends as
> with the short runs; overall performance is slightly higher across the board,
> and the fraction of WAL avoided is much higher. I'm suspecting the patches
> shrink WAL better in these longer runs because the WAL of a short run contains
> a higher density of full-page images.
I have fixed all the review comments you raised for the delta encoding approach. (For the needs-toast case, I have kept the code as is for now, since it will not go into the encoding patch; however, it can be changed.)
I have also fixed the major comment on this patch from Heikki and Tom [use memcmp to find the modified columns].
The patch with the review comments fixed for the delta encoding method is attached to this mail.
The readings with the modified patch are below; the detailed configuration used is in the attached file:
-Patch- -tps@-c1- -tps@-c2- -tps@-c8-
scale,-F80 834 1451 2701
scale+lz,-F80 659 1276 4650
scale+delta+review_fixed,-F80 873 1704 5509
The results are similar to your findings, except for the 8-client numbers.
One suspicion I have is that the machine I am taking data on has 4 cores, whereas yours has 8.
So tomorrow I shall post the results with 1, 2, 4, and 8 clients as well.
Any further suggestions?
With Regards,
Amit Kapila.