Re: Protect syscache from bloating with negative cache entries - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: Protect syscache from bloating with negative cache entries
Date
Msg-id 883b2580-027e-4cb4-fb21-679c14fdb50a@2ndquadrant.com
In response to Re: Protect syscache from bloating with negative cache entries  ("MauMau" <maumau307@gmail.com>)
Responses RE: Protect syscache from bloating with negative cache entries
List pgsql-hackers

On 2/8/19 2:27 PM, MauMau wrote:
> From: Tomas Vondra
>> I don't think we need to remove the expired entries right away, if
>> there are only very few of them. The cleanup requires walking the
>> hash table, which means significant fixed cost. So if there are
>> only few expired entries (say, less than 25% of the cache), we can
>> just leave them around and clean them if we happen to stumble on
>> them (although that may not be possible with dynahash, which has no
> concept of expiration) or before enlarging the hash table.
> 
> I agree that we don't need to evict cache entries as long as the
> memory permits (within the control of the DBA).
> 
> But how does the concept of expiration fit the catcache?  How would 
> the user determine the expiration time, i.e. setting of 
> syscache_prune_min_age?  If you set a small value to evict
> unnecessary entries faster, necessary entries will also be evicted.
> Some access counter would keep accessed entries longer, but some idle
> time (e.g. lunch break) can flush entries that you want to access
> after the lunch break.
> 

I'm not sure what you mean by "necessary" and "unnecessary" here. What
matters is how often an entry is accessed - if it's accessed often, it
makes sense to keep it in the cache. Otherwise evict it. Entries not
accessed for 5 minutes are clearly not accessed very often, so getting
rid of them will not hurt the cache hit ratio very much.

So I agree with Robert that a time-based approach should work well
here. It does not have the issues of setting an exact syscache size
limit, it's kinda self-adaptive, etc.

In a way, this is exactly what the 5 minute rule [1] says about caching.

[1] http://www.hpl.hp.com/techreports/tandem/TR-86.1.pdf
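
To make that a bit more concrete, the pruning could look something like
this. It's just a sketch, not the actual patch - the lastaccess field,
the helper name and treating syscache_prune_min_age as seconds are
assumptions on my part. And it'd only be called occasionally, e.g. when
we think more than ~25% of the entries are stale, or right before
enlarging the hash table:

#include "postgres.h"
#include "utils/catcache.h"
#include "utils/timestamp.h"

/* assumed GUC (seconds); the patch would presumably define it properly */
static int syscache_prune_min_age = 300;

/*
 * Sketch of lazy, time-based pruning of a single catcache.  Assumes a
 * "lastaccess" timestamp added to CatCTup, which does not exist in the
 * current code, and relies on CatCacheRemoveCTup() from catcache.c.
 */
static void
CatCachePruneOldEntries(CatCache *cache)
{
    TimestampTz now = GetCurrentTimestamp();
    int         i;

    for (i = 0; i < cache->cc_nbuckets; i++)
    {
        dlist_mutable_iter iter;

        /* walk the bucket, removing entries as we go */
        dlist_foreach_modify(iter, &cache->cc_bucket[i])
        {
            CatCTup    *ct = dlist_container(CatCTup, cache_elem, iter.cur);

            /* never remove pinned entries */
            if (ct->refcount > 0)
                continue;

            /* keep entries accessed within syscache_prune_min_age */
            if (!TimestampDifferenceExceeds(ct->lastaccess, now,
                                            syscache_prune_min_age * 1000))
                continue;

            /* unlink from the bucket and free the entry */
            CatCacheRemoveCTup(cache, ct);
        }
    }
}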


> The idea of expiration applies to the case where we want possibly 
> stale entries to vanish and load newer data upon the next access.
> For example, the TTL (time-to-live) of Memcached, Redis, DNS, ARP.
> Is the catcache based on the same idea with them?  No.
> 

I'm not sure what this has to do with those other systems.

> What we want to do is to evict never or infrequently used cache
> entries.  That's naturally the task of LRU, isn't it?  Even the high
> performance Memcached and Redis use LRU when the cache is full.  As
> Bruce said, we don't have to be worried about the lock contention or
> something, because we're talking about the backend local cache.  Are
> we worried about the overhead of manipulating the LRU chain?  The
> current catcache already does it on every access; it calls
> dlist_move_head() to put the accessed entry to the front of the hash
> bucket.
> 

I'm certainly worried about the performance aspect of it. The syscache
is in plenty of hot paths, so adding overhead may have a significant
impact. But that depends on how complex the eviction criteria will be.

And then there may be workloads conflicting with the criteria, i.e.
running into just-evicted entries much more often. This is the issue
with the initially proposed hard limits on cache sizes, where it'd be
trivial to under-size the cache just a little bit.
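
For the record, the dlist_move_head() MauMau refers to only reorders
entries within one hash bucket - it's not a global LRU list. From
memory, the lookup path in catcache.c does roughly this (paraphrased,
not an exact excerpt):

    /* scan the bucket for a matching, live entry */
    dlist_foreach(iter, bucket)
    {
        ct = dlist_container(CatCTup, cache_elem, iter.cur);

        if (ct->dead)
            continue;           /* ignore dead entries */

        if (ct->hash_value != hashValue)
            continue;           /* quickly skip entries with wrong hash */

        if (!CatalogCacheCompareTuple(cache, nkeys, ct->keys, arguments))
            continue;

        /*
         * Found it.  Move the entry to the front of its bucket, so that
         * frequently accessed entries stay near the head - an LRU order
         * within the bucket only.
         */
        dlist_move_head(bucket, &ct->cache_elem);

        /* ... bump the refcount and return the entry ... */
    }

So the per-bucket reordering is essentially free; the question is how
much extra bookkeeping (timestamps, counters, a global list, ...) the
eviction policy adds on top of that in those hot paths.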

> 
>> So if we want to address this case too (and we probably want), we 
>> may need to discard the old cache memory context somehow (e.g. 
>> rebuild the cache in a new one, and copy the non-expired entries). 
>> Which is a nice opportunity to do the "full" cleanup, of course.
> 
> The straightforward, natural, and familiar way is to limit the cache 
> size, which I mentioned in some previous mail.  We should give the
> DBA the ability to control memory usage, rather than considering what
> to do after leaving the memory area grow unnecessarily too large.
> That's what a typical "cache" is, isn't it?
> 

Not sure which mail you're referring to - this seems to be the first
e-mail from you in this thread (per our archives).

I personally don't find an explicit limit on cache size very
attractive, because it's rather low-level and difficult to tune, and
very easy to get wrong (at which point you fall off a cliff). All the
information is in backend private memory, so how would you even
identify that the syscache is the thing you need to tune, or determine
the correct size?

> https://en.wikipedia.org/wiki/Cache_(computing)
> 
> "To be cost-effective and to enable efficient use of data, caches must
> be relatively small."
> 

Relatively small compared to what? It's also a question of how expensive
cache misses are.

> 
> Another relevant but suboptimal idea would be to provide each
> catcache with a separate memory context, which is a child of
> CacheMemoryContext.  This gives a slight optimization by using the
> slab context (slab.c) for a catcache with fixed-sized tuples.  But
> that'd be a bit complex for PG 12, I'm afraid.
> 

I don't know, but that does not seem very attractive. Each memory
context has some overhead, and it does not solve the issue of never
releasing memory to the OS. So we'd still have to rebuild the contexts
at some point, I'm afraid.
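
FWIW the allocator part already exists - SlabContextCreate() in slab.c -
so creating such a context might look like this. The function name and
the assumption that all entries of a given catcache have the same size
are mine:

#include "postgres.h"
#include "utils/memutils.h"

/*
 * Hypothetical per-catcache context using the slab allocator.  Only
 * works if all entries in the cache really are the same size, which is
 * an assumption here - CatCTup entries vary with the cached tuple.
 */
static MemoryContext
CreateCatCacheContext(const char *name, Size entry_size)
{
    return SlabContextCreate(CacheMemoryContext,
                             name,                      /* for debugging output */
                             SLAB_DEFAULT_BLOCK_SIZE,   /* 8kB blocks */
                             entry_size);               /* fixed chunk size */
}

But releasing memory would still require the rebuild I mentioned
before - create a fresh context, copy the surviving entries over, and
MemoryContextDelete() the old one.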

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

