On Tue, Feb 4, 2014 at 11:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Feb 4, 2014 at 12:39 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Now there is approximately 1.4~5% CPU gain for
>> "hundred tiny fields, half nulled" case
> Assuming that the logic isn't buggy, a point in need of further study,
> I'm starting to feel like we want to have this. And I might even be
> tempted to remove the table-level off switch.
I have stress-tested the worst case further, since you are thinking of
removing the table-level switch, and found that even if we increase the
data by approx. 8 times ("ten long fields, all changed", each field
containing 80 bytes of data), the CPU overhead is still < 5%. This
clearly shows that the overhead doesn't grow much even when the length
of the unmatched data is increased by a much larger factor.
So the worst-case data adds more weight to your suggestion to remove the
table-level switch. However, there is no harm in keeping the table-level
option with the default as 'true'; if some users are really sure that
the updates in their system will have nothing in common, they can set
this new option to 'false'.
Below is the data for the new case "ten long fields, all changed" added
in the attached script file:
Unpatched

           testname           | wal_generated |     duration
------------------------------+---------------+------------------
 ten long fields, all changed |    3473999520 | 45.0375978946686
 ten long fields, all changed |    3473999864 | 45.2536928653717
 ten long fields, all changed |    3474006880 | 45.1887288093567

After pgrb_delta_encoding_v8.patch

           testname           | wal_generated |     duration
------------------------------+---------------+------------------
 ten long fields, all changed |    3474006456 | 47.5744359493256
 ten long fields, all changed |    3474000136 | 47.3830440044403
 ten long fields, all changed |    3474002688 | 46.9923310279846
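For reference, a quick sanity check of the "< 5% CPU overhead" claim from
the average of the three durations above (just arithmetic on the numbers
reported in this mail, not part of the benchmark script):

```python
# Compare average run time with and without the patch.
# Durations (seconds) are copied from the three runs reported above.
unpatched = [45.0375978946686, 45.2536928653717, 45.1887288093567]
patched   = [47.5744359493256, 47.3830440044403, 46.9923310279846]

avg_unpatched = sum(unpatched) / len(unpatched)
avg_patched = sum(patched) / len(patched)

# Relative CPU overhead introduced by the patch, in percent.
overhead_pct = (avg_patched - avg_unpatched) / avg_unpatched * 100
print(f"CPU overhead: {overhead_pct:.2f}%")  # roughly 4.78%, i.e. < 5%
```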
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com