>From: Tsunakawa, Takayuki [mailto:tsunakawa.takay@jp.fujitsu.com]
>> [Size=800, iter=1,000,000]
>> Master |15.763
>> Patched|16.262 (+3%)
>>
>> [Size=32768, iter=1,000,000]
>> Master |61.3076
>> Patched|62.9566 (+2%)
>
>What's the unit, second or millisecond?
Millisecond.
>Why are there so many digits to the right of the decimal point?
>
>Is the measurement correct? I'm wondering because the difference is larger in the
>latter case. Isn't the accounting processing almost the same in both cases?
>* former: 16.262 - 15.763 = 0.499
>* latter: 62.956 - 61.307 = 1.649
>I think the overhead is sufficiently small. It may get even smaller with a trivial tweak.
>
>You added the new member usedspace at the end of MemoryContextData. The
>original size of MemoryContextData is 72 bytes, and Intel Xeon's cache line is 64 bytes.
>So, the new member will be on a separate cache line. Try putting usedspace before
>the name member.
OK. I reordered the MemoryContextData members so that usedspace fits in the first cache line.
To verify that the overhead of memory accounting is small enough, I disabled the whole
catcache eviction mechanism in the patched build and compared it with master.
The settings are almost the same as in my last email, but last time the number of trials
was 50, so this time I ran 5000 trials and took the average (rounded to two decimal places).
[Size=800, iter=1,000,000]
Master |15.64 ms
Patched|16.26 ms (+4%)
The difference is 0.62 ms.
[Size=32768, iter=1,000,000]
Master |61.39 ms
Patched|60.99 ms (-1%)
I guess there is around 2% noise, but based on this experiment the overhead seems small.
Some overhead remains, but it is masked by heavier operations such as malloc().
Does this result show that the hard-limit size option with memory accounting
does not harm ordinary users who leave the hard-limit option disabled?
Regards,
Takeshi Ideriha