On Thu, Aug 24, 2023 at 3:28 PM Stephen Frost <sfrost@snowman.net> wrote:
> Agreed that we'd certainly want to make sure it's all correct and to do
> performance testing but in terms of how many buffers... isn't much of
> the point of this that we have data in memory because it's being used
> and if it's not then we evict it? That is, I wouldn't think we'd have
> set parts of the buffer pool for SLRUs vs. regular data but would let
> the actual demand drive what pages are in the cache and I thought that
> was part of this proposal and part of the reasoning behind making this
> change.
I think that it's not quite that simple. In the regular buffer pool,
access to pages is controlled by buffer pins and buffer content locks,
but these mechanisms don't exist in the same way in the SLRU code. And
it's buffer pins that drive usage counts, which in turn drive eviction
decisions. So if
you move SLRU data into the main buffer pool, you either need to keep
the current locking regime and use some new logic to decide how much
of shared_buffers to bequeath to the SLRU pools, OR you need to make
SLRU access use buffer pins and buffer content locks. If you do the
latter, I think you substantially increase the cost of an uncontended
SLRU buffer access, because you now need to pin the buffer, then take
and release the content lock, and then release the pin; whereas today
you can do all that by just taking and releasing the SLRU's lwlock.
That's more atomic operations, and hence more costly, I
think. But even if not, it could perform terribly if SLRU buffers
become more vulnerable to eviction than they are at present, or
alternatively if they take over too much of the buffer pool and force
other important data out.
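
To make that concrete, here's a rough sketch of the two read paths as
they stand today. This is hand-waving, using names from the current
sources purely for illustration (rel/blkno/pageno/xid are just
placeholders, and the real paths do a lot more bookkeeping):

    /* main buffer pool: pin, content lock, unlock, unpin */
    Buffer  buf = ReadBuffer(rel, blkno);   /* pin; also bumps usage count */
    LockBuffer(buf, BUFFER_LOCK_SHARE);     /* content lock for reading */
    /* ... examine the page ... */
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);    /* drop content lock */
    ReleaseBuffer(buf);                     /* drop the pin */

    /* SLRU, e.g. pg_xact: one lwlock covers the whole access */
    int     slotno = SimpleLruReadPage_ReadOnly(XactCtl, pageno, xid);
    /* ... returns with XactSLRULock held in shared mode;
     * read XactCtl->shared->page_buffer[slotno] ... */
    LWLockRelease(XactSLRULock);

The first pattern touches the buffer header and an lwlock several
times per access; the second is essentially a single lwlock acquire
and release.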
SLRUs are a performance hotspot, so even relatively minor changes to
their performance characteristics can, I believe, have pretty
noticeable effects on performance overall.
--
Robert Haas
EDB: http://www.enterprisedb.com