On 2018-07-25 19:27:47 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
> >> I wonder why we don't just generally trigger invalidations to an
> >> index's "owning" relation in CacheInvalidateHeapTuple()?
>
> > Tom, do you have any comments about the above?
>
> It seems like an ugly and fragile hack, offhand. I can see the point
> about needing to recompute the owning relation's index list, but there's
> no good reason why an update of a pg_index row ought to force that.
> You're using that as a proxy for creation or deletion of an index, but
> (at least in principle) pg_index rows might get updated for existing
> indexes.
Sure, but at least some of those updates would also require recomputing
the list, since they could invalidate things like the determination of
the OID index, the primary-key index, or the like. But yeah, I don't
think it's a *great* idea, just an idea worth considering.
> Also, it's not clear to me why the existing places that force relcache
> inval on the owning table during index create/delete aren't sufficient
> already. I suppose it's probably a timing problem, but it's not clear
> why hacking CacheInvalidateHeapTuple in this fashion fixes that, or why
> we could expect it to stay fixed.
Hm? As I point out in my email, there's simply nothing forcing an
invalidation in the relevant path. We register an invalidation, but only
*after* successfully building the entire index. So if the index build
fails, but we accessed the relcache in the meantime (as e.g. the new
plan_create_index_workers() pretty much always does), the relcache entry
still refers to the failed index build. I think it's basically a failure
to follow proper invalidation procedures: when modifying something that
affects a relcache entry, it's not OK to delay the
CacheInvalidateRelcacheByTuple() until after a point where we can fail
non-FATALly.
Greetings,
Andres Freund