Thomas Lockhart writes:
> Presumably there is an upper limit to the physical cache size. Would
> retaining negative entries tend to cause the cache to cycle or to grow
> without bounds if there is no such limit? Or does it seem that things
> would reach a reasonable steady state no matter what the query topology
> tends to be?
I think the key would be that you cache the N-1 failed function lookups
only if you are actually successful in finding a useful function on the
Nth attempt. Then, if your logical cache size is C, you would have quick
access to C/N function resolution paths, whereas right now the cache is
really quite useless for function resolution that requires unsuccessful
lookups along the way. Note that if your queries are "well written", in
that they don't require any unsuccessful lookups, the cache behaviour
wouldn't change. Since N is usually small in reasonable applications,
you could also simply increase your cache size by a factor of N to
compensate for whatever you might be afraid of.
Perhaps Tom Lane was also thinking ahead in terms of schema lookups. I
imagine this negative cache scheme would really be critical there.
However, what you probably wouldn't want to do is cache negative lookups
that don't end up producing a result or that aren't part of a search
chain at all. Those are user errors; they are not likely to be repeated
and do not need to be optimized.
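To illustrate the idea (this is only a sketch, not PostgreSQL's actual
catcache code; the ResolutionCache class and its lookup callback are
hypothetical), the commit-on-success rule could look like this: failed
candidate lookups are remembered tentatively and entered into the cache
as negative entries only when a later candidate in the chain succeeds,
while a chain that fails outright leaves the cache untouched:

```python
class ResolutionCache:
    def __init__(self):
        # key -> resolved value, or None for a committed negative entry
        self.cache = {}

    def resolve(self, candidates, lookup):
        """Try candidate keys in order; `lookup` returns a value or None.

        Failed lookups along the way are cached as negative entries,
        but only if some later candidate succeeds.  A chain that fails
        entirely (a user error) caches nothing.
        """
        misses = []
        for key in candidates:
            if key in self.cache:
                hit = self.cache[key]
                if hit is not None:
                    return hit      # positive cache hit
                continue            # cached negative: skip the real lookup
            value = lookup(key)
            if value is None:
                misses.append(key)  # remember tentatively, don't cache yet
            else:
                # Nth attempt succeeded: commit the N-1 negative entries
                # and the positive one.
                for m in misses:
                    self.cache[m] = None
                self.cache[key] = value
                return value
        return None                 # whole chain failed: cache nothing
```

On the second resolution of the same chain, the N-1 negative entries let
the resolver skip straight past the candidates that are known not to
exist, which is exactly the C/N effect described above.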
--
Peter Eisentraut peter_e@gmx.net