Re: [WIP PATCH] for Performance Improvement in Buffer Management - Mailing list pgsql-hackers

From Amit Kapila
Subject Re: [WIP PATCH] for Performance Improvement in Buffer Management
Date
Msg-id 6C0B27F7206C9E4CA54AE035729E9C382853BE5D@szxeml509-mbs
In response to Re: [WIP PATCH] for Performance Improvement in Buffer Management  (Jeff Janes <jeff.janes@gmail.com>)
Responses Re: [WIP PATCH] for Performance Improvement in Buffer Management
List pgsql-hackers
On Friday, October 19, 2012 9:15 PM Jeff Janes wrote:
On Tue, Sep 4, 2012 at 6:25 AM, Amit kapila <amit.kapila@huawei.com> wrote:
> On Tuesday, September 04, 2012 12:42 AM Jeff Janes wrote:
> On Mon, Sep 3, 2012 at 7:15 AM, Amit kapila <amit.kapila@huawei.com> wrote:
>>> This patch is based on the below TODO item:
>>
>>> Consider adding buffers the background writer finds reusable to the free
>>> list
>>
>>
>>
>>> I have tried implementing it and taken the readings for SELECT when all the
>>> data is in either OS buffers or shared buffers.
>>
>>
>>
>> As I understand it from the above, there is a problem in the attached patch: in InvalidateBuffer(), after
>> UnlockBufHdr() and before the partition lock is taken, some backend can start using that buffer and increase its
>> usage_count to 1, yet InvalidateBuffer() will still remove the buffer from the hash table and put it on the freelist.
>> I have modified the code to address this by re-checking refcount & usage_count inside the partition lock and
>> LockBufHdr, and only after that moving the buffer to the freelist, which is similar to what InvalidateBuffer does.
>> In the actual code we can optimize this by passing an extra parameter to InvalidateBuffer.
>>
>> Please let me know if I have understood you correctly, or whether you meant something else in the above comment.
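
(To make the above concrete, the re-check amounts to roughly the sketch below. This is only a simplified illustration
of the lock ordering, not the actual patch code; the names loosely follow the bufmgr.c conventions.)

/*
 * Simplified sketch of the re-check described above (not the actual
 * patch).  The point is only the ordering: re-take the mapping partition
 * lock and the buffer header lock, and move the buffer to the freelist
 * only if nobody has started using it in the meantime.
 */
uint32   oldHash = BufTableHashCode(&buf->tag);
LWLockId oldPartitionLock = BufMappingPartitionLock(oldHash);

LWLockAcquire(oldPartitionLock, LW_EXCLUSIVE);
LockBufHdr(buf);

if (buf->refcount == 0 && buf->usage_count == 0)
{
    /* still unused: safe to drop the hash entry and free the buffer */
    UnlockBufHdr(buf);
    BufTableDelete(&buf->tag, oldHash);
    StrategyFreeBuffer(buf);
}
else
{
    /* some backend pinned or touched it in the meantime; leave it alone */
    UnlockBufHdr(buf);
}

LWLockRelease(oldPartitionLock);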

> Yes, I think that this is part of the risk I was hinting at.  I
> haven't evaluated your fix to it.  But assuming it is now safe, I
> still think it is a bad idea to invalidate a perfectly good buffer.
> Now a process that wants that page will have to read it in again, even
> though it is still sitting there.  This is particularly bad because
> the background writer is coded to always circle the buffer pool every
> 2 minutes, whether that many clean buffers are needed or not.  I think
> that that is a bad idea, but having it invalidate buffers as it goes
> is even worse.

That is true, but isn't that only the low-activity case? In general, the bgwriter takes into account how many buffers
were allocated and how many complete clock-sweep passes have occurred, to make sure it cleans buffers appropriately.
One more doubt I have is whether this behavior (circling the buffer pool every 2 minutes) can't already be controlled
by 'bgwriter_lru_maxpages', since that number dictates how many buffers are cleaned in each cycle.
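
For reference, the per-round pacing in BgBufferSync() is roughly the sketch below (heavily simplified; the real code
also smooths the allocation estimate over time and reconciles its position with the clock-sweep hand):

/*
 * Heavily simplified sketch of one bgwriter round in BgBufferSync()
 * (bufmgr.c).  The cleaning target is driven by recent allocations,
 * scaled by bgwriter_lru_multiplier, and capped per round by
 * bgwriter_lru_maxpages; smoothed_alloc, num_to_scan, next_to_clean and
 * reusable_buffers are maintained across rounds by the real code.
 */
int upcoming_alloc_est = (int) (smoothed_alloc * bgwriter_lru_multiplier);
int num_written = 0;

while (num_to_scan > 0 && reusable_buffers < upcoming_alloc_est)
{
    int buffer_state = SyncOneBuffer(next_to_clean, true);

    if (++next_to_clean >= NBuffers)
        next_to_clean = 0;
    num_to_scan--;

    if (buffer_state & BUF_WRITTEN)
    {
        reusable_buffers++;
        if (++num_written >= bgwriter_lru_maxpages)
            break;              /* per-round cap reached */
    }
    else if (buffer_state & BUF_REUSABLE)
        reusable_buffers++;
}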

> I think the code for the free-list linked list is written so that it
> performs correctly for a valid buffer to be on the freelist, even
> though that does not happen under current implementations.

> If you
> find that a buffer on the freelist has become pinned, used, or dirty
> since it was added (which can only happen if it is still valid), you
> just remove it and try again.

Is it actually possible, in any use case, for the buffer management algorithm to find a buffer on the freelist that
is pinned or dirty?
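
For context, the check Jeff describes is roughly what the freelist path of StrategyGetBuffer() (freelist.c) already
does today: it re-checks the pin and usage counts under the buffer header lock. A simplified sketch:

/*
 * Simplified sketch of the existing freelist path in StrategyGetBuffer();
 * BufFreelistLock is held at this point.
 */
while (StrategyControl->firstFreeBuffer >= 0)
{
    volatile BufferDesc *buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];

    /* Unconditionally remove the buffer from the freelist */
    StrategyControl->firstFreeBuffer = buf->freeNext;
    buf->freeNext = FREENEXT_NOT_IN_LIST;

    /*
     * If somebody pinned or touched the buffer after it was put on the
     * list, it is no longer a usable victim: skip it and try the next one.
     */
    LockBufHdr(buf);
    if (buf->refcount == 0 && buf->usage_count == 0)
        return buf;             /* still free; returned with header lock held */
    UnlockBufHdr(buf);
}
/* freelist exhausted: fall through to the clock sweep */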

>
>> Also, do we want to actually invalidate the buffers?  If someone does
>> happen to want one after it is put on the freelist, making it read it
>> in again into a different buffer doesn't seem like a nice thing to do,
>> rather than just letting it reclaim it.
>
> But even if bgwriter/checkpoint don't do it, a backend needing a new buffer will do the same thing (remove it from
> the hash table) for this buffer, since it is the next victim buffer.

> Right, but only if it is the nextvictim, here we do it if it is
> nextvictim+N, for some largish values of N.  (And due to the 2 minutes
> rule, sometimes for very large values of N)

Can't we control this 2-minute rule with a new or an existing GUC? Is there any harm in that, given that you have
also pointed out earlier in this thread that the rule is not good?
Such a parameter could make the flushing done by the bgwriter more valuable.
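
If the hard-coded two-minute target inside BgBufferSync() were exposed as a setting, the guc.c side might look
roughly like the entry below. The GUC name, wording, default and limits are invented here purely for illustration,
modelled on the existing bgwriter_* entries:

/* Hypothetical addition to ConfigureNamesInt[] in guc.c -- name and limits invented */
{
    {"bgwriter_full_scan_period", PGC_SIGHUP, RESOURCES_BGWRITER,
        gettext_noop("Time over which the background writer aims to cycle "
                     "through the entire buffer pool."),
        NULL,
        GUC_UNIT_S
    },
    &bgwriter_full_scan_period,
    120, 10, 3600,
    NULL, NULL, NULL
},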

> I'm not sure how to devise a test case to prove that this can be important, though.

To start with, can't we do a simple test where all (or most) of the pages are in shared buffers and then run a
pgbench select-only test?
We can run this test with various shared_buffers configurations.

I have already done tests along these lines, and they show a good performance improvement with shared_buffers
configured as 25% of RAM.


> Robert wrote an accounting patch a while ago that tallied how often a
> buffer was cleaned but then reclaimed for the same page before being
> evicted.  But now I can't find it.  If you can find that thread, there
> might be some benchmarks posted to it that would be useful.

In my first-level search I was also not able to find it, so now I am planning to go through all of Robert Haas's
mails on the PostgreSQL site (approximately 13,000 of them).
If you can tell me roughly how long ago it was (last year, 2 years back, ...), or whether such a patch was submitted
to a CommitFest or only discussed on the mailing list, that would make it a little easier for me.


Thank you for doing the initial review of this work.

With Regards,
Amit Kapila.





