Thread: Re: Remove the limit on the number of entries allowed in catcaches
Moving from -committers to -hackers:

On Wed, 2006-06-14 at 23:08 -0300, Tom Lane wrote:
> On small-to-middling databases this wins because maintaining the LRU
> list is a waste of time.

Sounds good. Can we do the same for the file descriptors in fd.c?

Very often the total number of file descriptors is much less than the
maximum, so it would make sense to only maintain the LRU when we are
using more than 50%-75% of the maximum.

--
Simon Riggs
EnterpriseDB    http://www.enterprisedb.com
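A minimal sketch of the proposal, assuming a simplified doubly-linked LRU
ring rather than the real Vfd array in fd.c; every name below (VfdStub,
touch_lru, lruHead, nOpenFiles, maxOpenFiles) is illustrative rather than a
PostgreSQL identifier, and the 50% cutoff is just the lower end of the range
suggested above:

#include <stddef.h>
#include <sys/types.h>

/* Illustrative stand-in for fd.c's virtual file descriptor entries. */
typedef struct VfdStub
{
    int              fd;          /* kernel fd, or -1 while closed */
    char            *fileName;    /* path, kept so the file can be reopened */
    int              fileFlags;   /* open() flags, kept for the same reason */
    off_t            seekPos;     /* file position to restore on reopen */
    struct VfdStub  *moreRecent;  /* neighbour used more recently */
    struct VfdStub  *lessRecent;  /* neighbour used less recently */
} VfdStub;

static VfdStub *lruHead;          /* most recently used open file */
static int      nOpenFiles;       /* kernel fds currently open */
static int      maxOpenFiles;     /* soft limit on simultaneously open fds */

/* Move an already-linked vfd to the front of the LRU ring on each access. */
static void
touch_lru(VfdStub *vfd)
{
    /*
     * The proposed optimization: while we are comfortably below the fd
     * limit no eviction can happen, so skip the bookkeeping entirely.
     */
    if (nOpenFiles < maxOpenFiles / 2)
        return;

    if (vfd == lruHead)
        return;                   /* already the most recently used */

    /* unlink from the current position ... */
    if (vfd->moreRecent)
        vfd->moreRecent->lessRecent = vfd->lessRecent;
    if (vfd->lessRecent)
        vfd->lessRecent->moreRecent = vfd->moreRecent;

    /* ... and relink at the head */
    vfd->moreRecent = NULL;
    vfd->lessRecent = lruHead;
    if (lruHead)
        lruHead->moreRecent = vfd;
    lruHead = vfd;
}

One catch with the skip is that the ring order is stale by the time the
threshold is first crossed, so the first file evicted is not necessarily the
least recently used one; that is the kind of added logical complexity
objected to later in the thread.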
"Simon Riggs" <simon@2ndquadrant.com> wrote > > Can we do the same for the file descriptors in fd.c? > > Very often the total number of file descriptors is much less than the > maximum, so it would make sense to only maintain the LRU when we are > using more than 50%-75% of the maximum. > I am not against doing it but AFAIR the LRU file operations is (1) not frequent; (2) the cost is only several CPU circles if we do not run out of fds; (3) the LRU lseek/close/open big cost is still not avoidable when we really run out of fds. So this optimization may be not needed. Or do you have some numbers to show that's a bottleneck for some kind of applications? Regards, Qingqing
Simon Riggs <simon@2ndquadrant.com> writes:
> Can we do the same for the file descriptors in fd.c?

I haven't seen any indication that fd.c is a performance bottleneck, so
I don't see the point. Also, there is an external constraint: we can't
simply have thousands of open file descriptors; on most platforms that
just Does Not Work.

        regards, tom lane
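The unavoidable cost Qingqing's point (3) refers to, and the reason the
platform limit on open descriptors matters, is roughly the close-and-reopen
dance sketched here. It reuses the illustrative VfdStub from the earlier
sketch; the real logic lives in fd.c, but these are not its routines:

#include <fcntl.h>
#include <unistd.h>

/*
 * Give a kernel fd back by closing the least recently used file,
 * remembering where we were in it.
 */
static void
lru_close(VfdStub *vfd)
{
    vfd->seekPos = lseek(vfd->fd, 0, SEEK_CUR);
    close(vfd->fd);
    vfd->fd = -1;
    nOpenFiles--;
}

/*
 * Reopen a temporarily closed file and restore its position.  This
 * open()/lseek() pair is the cost that cannot be avoided once we are
 * genuinely out of fds, regardless of how the ring is maintained.
 */
static int
lru_reopen(VfdStub *vfd)
{
    vfd->fd = open(vfd->fileName, vfd->fileFlags, 0);
    if (vfd->fd < 0)
        return -1;
    nOpenFiles++;
    if (lseek(vfd->fd, vfd->seekPos, SEEK_SET) < 0)
        return -1;
    return 0;
}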
On Thu, 2006-06-15 at 17:50 +0800, Qingqing Zhou wrote:
> I am not against doing it, but AFAIR the LRU file operations are
> (1) not frequent;

The LRU moves each time we do FileRead or FileWrite, not just on
open/close operations.

> (2) only a few CPU cycles each if we do not run out of fds;

So it's not really likely ever to show up high in oprofile, but if it
is an avoidable operation that isn't always needed, why do it?

> (3) the big LRU lseek/close/open cost is still unavoidable when we
> really do run out of fds.

Agreed, but the limit is reasonably high, so this is nowhere near being
something we always hit; otherwise we would be more worried.

--
Simon Riggs
EnterpriseDB    http://www.enterprisedb.com
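Simon's point about FileRead and FileWrite can be illustrated by sketching
the read path on top of the helpers above; again, these are hypothetical
stand-ins, not the actual fd.c functions:

#include <unistd.h>

/*
 * Sketch of the read path: make sure the file is really open, bump it
 * to the front of the LRU ring, then do the kernel call.  The write
 * path follows the same pattern.
 */
static ssize_t
file_read(VfdStub *vfd, char *buffer, size_t amount)
{
    if (vfd->fd < 0 && lru_reopen(vfd) < 0)
        return -1;

    touch_lru(vfd);               /* the pointer swings under discussion */

    return read(vfd->fd, buffer, amount);
}

The move-to-front therefore runs once per read or write; whether that
per-call bookkeeping is worth making conditional is exactly what is disputed
in the next message.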
Simon Riggs <simon@2ndquadrant.com> writes:
> The LRU moves each time we do FileRead or FileWrite, not just on
> open/close operations.

Sure, but those still require kernel calls, so the cost of a couple of
pointer swings is negligible. There's no way that the logical
complexity of sometimes maintaining the LRU and sometimes not is going
to be repaid with a useful (or even measurable) speedup.

        regards, tom lane