Re: 7.3 schedule - Mailing list pgsql-hackers

From: Christopher Kings-Lynne
Subject: Re: 7.3 schedule
Date:
Msg-id: 002301c1e2b3$804bd000$0200a8c0@SOL
In response to: Re: 7.3 schedule ("Christopher Kings-Lynne" <chriskl@familyhealth.com.au>)
Responses: Re: 7.3 schedule, Re: 7.3 schedule
List: pgsql-hackers
> > thought out way of predicting/limiting their size.  (2) How the heck do
> > you get rid of obsoleted cached plans, if the things stick around in
> > shared memory even after you start a new backend?  (3) A shared cache
> > requires locking; contention among multiple backends to access that
> > shared resource could negate whatever performance benefit you might hope
> > to realize from it.

I don't understand all these locking problems.  Surely the only lock a
transaction would need on a stored query is one that prevents the cache
invalidation mechanism from deleting it out from under it?  That would mean
tonnes of readers on the cache - none of them blocking each other - and only
the odd invalidation event needing an exclusive lock.
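
Just to sketch what I mean (a rough, hypothetical example using a pthread
read/write lock - nothing like the real backend code, and all the names here
are made up):

/*
 * Hypothetical sketch only: a shared plan cache protected by one
 * reader/writer lock.  Lookups take the lock in shared mode, so any number
 * of backends can read cached plans at once; only an invalidation event
 * needs the exclusive lock.
 */
#include <pthread.h>
#include <string.h>

#define MAX_PLANS 128

typedef struct CachedPlan
{
    char  query_text[256];      /* key: the query string */
    void *plan;                 /* opaque pointer to the stored plan */
    int   valid;                /* cleared by invalidation */
} CachedPlan;

static CachedPlan       plan_cache[MAX_PLANS];
static pthread_rwlock_t plan_cache_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Many readers at once: shared lock only, nobody blocks anybody. */
void *
lookup_cached_plan(const char *query_text)
{
    void *result = NULL;
    int   i;

    pthread_rwlock_rdlock(&plan_cache_lock);
    for (i = 0; i < MAX_PLANS; i++)
    {
        if (plan_cache[i].valid &&
            strcmp(plan_cache[i].query_text, query_text) == 0)
        {
            result = plan_cache[i].plan;
            break;
        }
    }
    pthread_rwlock_unlock(&plan_cache_lock);
    return result;
}

/* The odd invalidation event: exclusive lock, readers wait briefly. */
void
invalidate_all_plans(void)
{
    int i;

    pthread_rwlock_wrlock(&plan_cache_lock);
    for (i = 0; i < MAX_PLANS; i++)
        plan_cache[i].valid = 0;
    pthread_rwlock_unlock(&plan_cache_lock);
}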

Also, as for invalidation, there are probably only two reasons to invalidate
a query in the cache: (1) the cache is running out of space, so you use LRU or
something to evict old queries, or (2) someone runs ANALYZE, in which case all
cached queries should just be flushed.  If they specify an actual table to
analyze, then just drop the queries on that table.
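
Again purely as a hypothetical sketch of those two paths (LRU eviction when
the cache is full, and flushing on ANALYZE) - the structures and names are
made up:

#include <string.h>

#define MAX_PLANS 128

typedef struct CachedPlan
{
    char          query_text[256];
    char          table_name[64];   /* table the plan depends on (simplified to one) */
    unsigned long last_used;        /* bumped on every hit, for LRU */
    int           valid;
} CachedPlan;

static CachedPlan    plan_cache[MAX_PLANS];
static unsigned long use_counter = 0;

/* Called on every cache hit so the LRU ordering stays current. */
void
touch_plan(int i)
{
    plan_cache[i].last_used = ++use_counter;
}

/* (1) Cache is running out of space: evict the least recently used entry. */
int
evict_lru_plan(void)
{
    int           victim = -1;
    unsigned long oldest = (unsigned long) -1;
    int           i;

    for (i = 0; i < MAX_PLANS; i++)
    {
        if (plan_cache[i].valid && plan_cache[i].last_used < oldest)
        {
            oldest = plan_cache[i].last_used;
            victim = i;
        }
    }
    if (victim >= 0)
        plan_cache[victim].valid = 0;
    return victim;
}

/* (2) Someone ran ANALYZE: flush everything, or only plans on that table. */
void
flush_on_analyze(const char *analyzed_table)    /* NULL means database-wide */
{
    int i;

    for (i = 0; i < MAX_PLANS; i++)
    {
        if (!plan_cache[i].valid)
            continue;
        if (analyzed_table == NULL ||
            strcmp(plan_cache[i].table_name, analyzed_table) == 0)
            plan_cache[i].valid = 0;
    }
}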

Could this cache mechanism be used to make views fast as well?  You could
cache the queries that back views on first use, and then they can follow the
above rules for flushing...
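
Something like this, purely as a made-up illustration of the first-use idea
(plan_query() is just a stand-in for the real planner, and none of these
names are real):

#include <stdlib.h>
#include <string.h>

#define MAX_VIEWS 64

typedef struct ViewPlanEntry
{
    char  view_name[64];
    void *plan;                 /* "planned" form of the view's defining query */
} ViewPlanEntry;

static ViewPlanEntry view_cache[MAX_VIEWS];
static int           n_view_plans = 0;

/* Stand-in for the real planner: just copies the query text. */
static void *
plan_query(const char *query_text)
{
    char *copy = malloc(strlen(query_text) + 1);

    if (copy != NULL)
        strcpy(copy, query_text);
    return copy;
}

/* Return the cached plan for a view, planning and storing it on first use. */
void *
get_view_plan(const char *view_name, const char *view_query)
{
    int i;

    for (i = 0; i < n_view_plans; i++)
        if (strcmp(view_cache[i].view_name, view_name) == 0)
            return view_cache[i].plan;

    /*
     * First use: plan the defining query and remember it.  From here on it
     * would be flushed by the same LRU/ANALYZE rules as any other entry.
     */
    if (n_view_plans < MAX_VIEWS)
    {
        ViewPlanEntry *e = &view_cache[n_view_plans++];

        strncpy(e->view_name, view_name, sizeof(e->view_name) - 1);
        e->view_name[sizeof(e->view_name) - 1] = '\0';
        e->plan = plan_query(view_query);
        return e->plan;
    }
    return plan_query(view_query);      /* cache full: plan without caching */
}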

Chris



