Re: VACUUM FULL versus system catalog cache invalidation - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: VACUUM FULL versus system catalog cache invalidation
Date:
Msg-id: CA+TgmoZOiO1gzxHA67AH+7T+SUkJt66O9ppSNC6d-uJz5qHBZQ@mail.gmail.com
In response to: VACUUM FULL versus system catalog cache invalidation (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Fri, Aug 12, 2011 at 2:09 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> But in 9.0 and up, we have a problem.  So far I've thought of two possible
> avenues to fix it:
>
> 1. When a SHAREDINVALCATALOG_ID inval message comes in (telling us a VAC
> FULL or CLUSTER just finished on a system catalog), enter that message
> into the transaction's local inval list as well as executing it
> immediately.  On transaction abort, redo the resulting cache flushes a
> second time.  This would ensure we flushed any bad entries even though
> they were logged with obsolete TIDs elsewhere in the inval list.
> (We might need to do this on commit too, though right now I don't think
> so.  Also, a cache reset event would a fortiori have to queue a similar
> entry to blow things away a second time at transaction end.)
>
> 2. Forget about targeting catcache invals by TID, and instead just use the
> key hash value to determine which cache entries to drop.
>
> Approach #2 seems a lot less invasive and more trustworthy, but it has the
> disadvantage that cache invals would become more likely to blow away
> entries unnecessarily (because of chance hashvalue collisions), even
> without any VACUUM FULL being done.  If we could make approach #1 work
> reliably, it would result in more overhead during VACUUM FULL but less at
> other times --- or at least we could hope so.  In an environment where
> lots of sinval overflows and consequent resets happen, we might come out
> behind due to doubling the number of catcache flushes forced by a reset
> event.
>
> Right at the moment I'm leaning to approach #2.  I wonder if anyone
> sees it differently, or has an idea for a third approach?

I don't think it really matters whether we occasionally blow away an
entry unnecessarily due to a hash-value collision.  IIUC, we'd only
need to worry about hash-value collisions between rows in the same
catalog; and the number of entries that we have cached had better be
many orders of magnitude less than 2^32.  If the cache is large enough
that we're having hash-value collisions more than once in a great
while, we probably should have flushed some entries out of it a whole
lot sooner and a whole lot more aggressively, because we're likely
eating memory like crazy.
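
As a rough back-of-the-envelope check (the 10,000-entry figure here is
just an illustration): with 10,000 entries cached from a single catalog,
the birthday-problem estimate for at least one colliding pair of 32-bit
hash values is about C(10000,2)/2^32, i.e. roughly 5*10^7 / 4.3*10^9, or
around 1%.  And even when a collision does occur, the cost is one
unnecessary cache reload, not a correctness problem.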

On the other hand, having to duplicate sinval resets seems like a bad
thing.  So +1 for #2.
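
For concreteness, here is roughly the shape of the change #2 implies, as
I understand it.  This is just an illustrative sketch with made-up type
and function names, not the actual catcache code:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Illustrative sketch only: hypothetical names, not the real catcache
 * structures.  The idea is to match entries by the stored lookup-key hash
 * instead of by heap TID, so an entry whose tuple was moved by VACUUM FULL
 * still gets flushed.
 */
typedef struct SketchCatCTup
{
    uint32_t    hash_value;     /* hash of the cache lookup key */
    int         refcount;       /* pins held by in-progress lookups */
    bool        dead;           /* flushed while pinned; reap later */
    struct SketchCatCTup *next; /* next entry in the same hash bucket */
} SketchCatCTup;

void
sketch_flush_by_hash(SketchCatCTup **buckets, int nbuckets, uint32_t hashValue)
{
    SketchCatCTup **prev = &buckets[hashValue % (uint32_t) nbuckets];
    SketchCatCTup  *ct;

    while ((ct = *prev) != NULL)
    {
        if (ct->hash_value != hashValue)
        {
            prev = &ct->next;   /* not a match; keep it */
            continue;
        }

        /* A hash match is enough; a chance collision just costs a reload. */
        if (ct->refcount > 0)
        {
            ct->dead = true;    /* still pinned; remove once unpinned */
            prev = &ct->next;
        }
        else
        {
            *prev = ct->next;   /* unlink and free the entry */
            free(ct);
        }
    }
}

The point is that a tuple relocated by VACUUM FULL still hashes to the
same key, so we can't miss it; the worst a chance collision costs us is
an extra reload of an entry we didn't actually need to drop.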

> BTW, going forward it might be interesting to think about invalidating
> the catcaches based on xmin/xmax rather than specific TIDs.  That would
> reduce the sinval traffic to one message per transaction that had
> affected the catalogs, instead of one per TID.  But that clearly is not
> going to lead to something we'd dare back-patch.

To be honest, I am not real keen on back-patching either of the
options you list above.  I think we should probably do
something about the problem Dave Gould reported (to wit: sinval resets
need to be smarter about the order they do things in), but I am not
sure it's a good idea to back-patch what seems like a fairly radical
overhaul of the sinval mechanism to fix a problem nobody's actually
complained about hitting in production, especially given there's no
certainty that this is the last bug.  Perhaps we should just fix this
one in master and consider back-patching it if and when we get some
plausibly related bug reports.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

