On Mon, 2007-03-12 at 22:16 -0700, Luke Lonergan wrote:
> You may know we've built something similar and have seen similar gains.
Cool
> We're planning a modification that I think you should consider: when there
> is a sequential scan of a table larger than the size of shared_buffers, we
> are allowing the scan to write through the shared_buffers cache.
Write? For which operations?
I was thinking of doing this for bulk writes also, but it would require
changes to bgwriter's cleaning sequence. Are you saying to write, say, ~32
buffers and then fsync them, rather than letting bgwriter do that? And
then allow those buffers to be reused?
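
Roughly the batched write-and-reuse pattern I have in mind (only a
minimal sketch; BULK_RING_SIZE, flush_buffer and the rest are
hypothetical names, not the actual bufmgr/bgwriter API):

    #include <unistd.h>

    #define BULK_RING_SIZE 32

    /* hypothetical: write one buffer's page out to the relation's fd */
    static void flush_buffer(int buf_id, int fd);

    typedef struct BulkWriteRing
    {
        int     slots[BULK_RING_SIZE];  /* buffer ids owned by the bulk writer */
        int     next;                   /* next slot to hand out */
    } BulkWriteRing;

    /*
     * Hand out the next buffer in the ring.  When the ring wraps, write
     * out and fsync the whole batch ourselves, so the same slots can be
     * reused immediately instead of waiting on bgwriter's cleaning scan.
     */
    static int
    bulk_get_buffer(BulkWriteRing *ring, int fd)
    {
        if (ring->next == BULK_RING_SIZE)
        {
            for (int i = 0; i < BULK_RING_SIZE; i++)
                flush_buffer(ring->slots[i], fd);
            fsync(fd);          /* make the batch durable */
            ring->next = 0;     /* slots are now free for reuse */
        }
        return ring->slots[ring->next++];
    }
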
> The hypothesis is that if a relation is of a size equal to or less than the
> size of shared_buffers, it is "cacheable" and should use the standard LRU
> approach to provide for reuse.
Sounds reasonable. Please say more.
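
To make sure I've understood the test you're proposing, something like
this (illustrative names only, not real PostgreSQL symbols):

    typedef enum ScanStrategy
    {
        SCAN_USE_LRU,       /* relation fits: normal shared_buffers LRU reuse */
        SCAN_BYPASS_CACHE   /* relation too big: write through / recycle buffers */
    } ScanStrategy;

    static ScanStrategy
    choose_scan_strategy(long rel_pages, long shared_buffers_pages)
    {
        /* "cacheable" means the whole relation could fit in shared_buffers */
        return (rel_pages <= shared_buffers_pages) ? SCAN_USE_LRU
                                                   : SCAN_BYPASS_CACHE;
    }
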
-- 
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com