Re: Protect syscache from bloating with negative cache entries - Mailing list pgsql-hackers

From: Kyotaro HORIGUCHI
Subject: Re: Protect syscache from bloating with negative cache entries
Date: 2017-12-22 13:47:16
Msg-id: 20171222.134716.88479707.horiguchi.kyotaro@lab.ntt.co.jp
In response to: Re: Protect syscache from bloating with negative cache entries (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Protect syscache from bloating with negative cache entries (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
List: pgsql-hackers
At Tue, 19 Dec 2017 13:14:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in <748.1513707249@sss.pgh.pa.us>
> Robert Haas <robertmhaas@gmail.com> writes:
> > On Tue, Dec 19, 2017 at 3:31 AM, Kyotaro HORIGUCHI
> > <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> >> I see three kinds of syscache entries.
> >> 
> >> A. An entry for an actually existing object.
> >> B. An entry for an object which once existed but no longer.
> >> C. An entry for a simply non-existent object.
> 
> > I'm not convinced that it's useful to divide things up this way.
> 
> Actually, I don't believe that case B exists at all; such an entry
> should get blown away by syscache invalidation when we commit the
> DROP command.  If one were to stick around, we'd risk false positive
> lookups later.

As I have shown upthread, accessing a temporary table (*1) leaves
several STATRELATTINH entries behind after the table is dropped,
and they never get a chance to be deleted. Likewise, SELECTing a
nonexistent table in a schema (*2) leaves a RELNAMENSP entry
behind after the schema is dropped. I'm not sure the latter
happens very often, but the former happens rather frequently and
quickly bloats the syscache once it starts (a rough stress sketch
follows the reproduction steps below). No false positives can
result, since such entries cannot be reached without their parent
objects, but by the same token they never get a chance to be
deleted.

*1: begin;
    create temp table t1 (a int, b int, c int, d int, e int,
                          f int, g int, h int, i int, j int) on commit drop;
    insert into t1 values (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
    select * from t1;
    commit;

*2: create schema foo; select * from foo.invalid; drop schema foo;
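
As a rough illustration of how quickly *1-style usage can add up,
something like the following (an untested sketch; the column list
and the loop count are arbitrary) should leave a few negative
STATRELATTINH entries per iteration, since every iteration plans a
SELECT against a fresh relation OID that has no pg_statistic rows:

    do $$
    begin
        for i in 1 .. 10000 loop
            -- each temp table gets a fresh OID, so planning the SELECT
            -- probes pg_statistic for columns that have no stats rows
            execute format('create temp table t%s (a int, b int, c int)', i);
            execute format('select * from t%s', i);
            execute format('drop table t%s', i);
        end loop;
    end;
    $$;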

> > I guess I still don't see why B is different.  If somebody sits there
> > and runs queries against non-existent table names at top speed, maybe
> > they'll query the same non-existent table entries more than once, in
> > which case keeping the negative entries for the non-existent table
> > names around until they stop doing it may improve performance.
> 
> FWIW, my recollection is that the reason for negative cache entries
> is that there are some very common patterns where we probe for object
> names (not just table names, either) that aren't there, typically as
> a side effect of walking through the search_path looking for a match
> to an unqualified object name.  Those cache entries aren't going to
> get any less useful than the positive entry for the ultimately-found
> object.  So from a lifespan point of view I'm not very sure that it's
> worth distinguishing cases A and C.

Agreed.
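
To spell that out with a concrete example (hypothetical schema and
table names, and assuming my reading of the lookup path is right):
after

    create schema s1;
    create schema s2;
    create schema s3;
    create table s3.t (a int);
    set search_path = s1, s2, s3;

each unqualified reference such as

    select * from t;

probes RELNAMENSP for t in s1 and s2 (misses) before finding it in
s3, so the negative entries for (t, s1) and (t, s2) are touched
just as often as the positive entry for (t, s3).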

> It's conceivable that we could rewrite all the lookup algorithms
> so that they didn't require negative cache entries to have good
> performance ... but I doubt that that's easy to do.

To me that sounds equivalent to making a systable scan as fast as
a local hash lookup. Perhaps a lockless systable (index) access
could get there, if such a thing is possible at all?

Anyway, I think we have reached a consensus that time-tick-based
expiration is promising, so I'll work in that direction as the
first step.

Thanks!

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


