On Monday, November 19, 2012 6:05 AM Jeff Janes wrote:
On Mon, Oct 22, 2012 at 10:51 AM, Amit kapila <amit.kapila@huawei.com> wrote:
>
>> Today I have again collected the data for the configuration Shared_buffers = 7G, along with vmstat.
>> The data and the vmstat information (bi) are attached with this mail. It is observed from the vmstat
>> info that I/O is happening in both cases; however, after running for a long time, the I/O is also
>> comparatively less with the new patch.
> What I see in the vmstat report is that it takes 5.5 "runs" to get
> really good and warmed up, and so it crawls for the first 5.5
> benchmarks and then flies for the last 0.5 benchmark. The way you
> have your runs ordered, that last 0.5 of a benchmark is for the
> patched version, and this drives up the average tps for the patched
> case.
> Also, there is no theoretical reason to think that your patch would
> decrease the amount of IO needed (in fact, by invalidating buffers
> early, it could be expected to increase the amount of IO). So this
> also argues that the increase in performance is caused by the decrease
> in IO, but the patch isn't causing that decrease, it merely benefits
> from it due to an accident of timing.
Today I ran the benchmarks in the opposite order, and I still see a similar observation for some readings.
I am also not sure about the I/O part; I was only trying to interpret the data that way. It may be that
for some particular scenario the OS buffer management behaves that way. As I am not aware of the OS
buffer management algorithm, it is difficult to say whether such a change would have any impact on OS
buffer management that could yield better performance.
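
To factor out the warm-up effect you described, one option is to interleave the runs rather than
running all of one build first. Below is a minimal sketch of such an interleaved schedule, not what
I actually ran: the ports (5432/5433 for a baseline and a patched server), the run count, and the
pgbench parameters are all placeholder assumptions.

#!/usr/bin/env python3
# Minimal sketch: interleave runs of a baseline and a patched server so
# that OS page-cache warm-up is shared by both builds instead of
# benefiting whichever build happens to run last.  Ports, run count, and
# pgbench parameters below are hypothetical placeholders.

import re
import subprocess

SERVERS = {"baseline": 5432, "patched": 5433}   # hypothetical ports
RUNS_PER_SERVER = 6                             # hypothetical run count
PGBENCH_ARGS = ["-c", "8", "-j", "8", "-T", "300", "-S"]  # select-only, 5 min

def run_pgbench(port):
    """Run one pgbench iteration and return the reported tps."""
    out = subprocess.run(
        ["pgbench", "-p", str(port)] + PGBENCH_ARGS + ["postgres"],
        capture_output=True, text=True, check=True).stdout
    m = re.search(r"tps = ([\d.]+)", out)
    return float(m.group(1)) if m else float("nan")

results = {name: [] for name in SERVERS}
for _ in range(RUNS_PER_SERVER):
    for name, port in SERVERS.items():  # alternate A/B every iteration
        results[name].append(run_pgbench(port))

for name, tps in results.items():
    print(name, ["%.1f" % t for t in tps],
          "avg = %.1f" % (sum(tps) / len(tps)))

With an A/B/A/B ordering, any warm-up trend shows up in both series, so comparing per-iteration
pairs (or discarding the first few iterations) should separate the patch's effect from the cache's.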
With Regards,
Amit Kapila.