RE: [HACKERS] couldn't rollback cache ? - Mailing list pgsql-hackers
From | Hiroshi Inoue
---|---
Subject | RE: [HACKERS] couldn't rollback cache ?
Date |
Msg-id | 000101bf0417$ea264a00$2801007e@cadzone.tpf.co.jp
In response to | Re: [HACKERS] couldn't rollback cache ? (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: [HACKERS] couldn't rollback cache ?
List | pgsql-hackers
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Monday, September 20, 1999 11:28 PM
> To: Hiroshi Inoue
> Cc: pgsql-hackers
> Subject: Re: [HACKERS] couldn't rollback cache ?
>
> "Hiroshi Inoue" <Inoue@tpf.co.jp> writes:
> > I think it's not done correctly for tuple SI messages either.
> > I didn't use the current cache invalidation mechanism when I made the
> > patch for SearchSysCache() because of the following 2 reasons.
>
> > 1. SI messages are eaten by CommandCounterIncrement(), so they
> > may vanish before the transaction ends/aborts.
>
> I think this is OK.  The sending backend does not send the SI message
> in the first place until it has committed.  Other backends can read

Doesn't the sending backend send the SI message when
CommandCounterIncrement() is executed?

AtCommit_Cache() is called not only from CommitTransaction() but
also from CommandCounterIncrement(). AtCommit_Cache() in
CommandCounterIncrement() eats the local invalidation messages and
registers the SI information (this seems too early for other backends,
though it's not so harmful). Then AtStart_Cache() eats the SI
information and invalidates the related syscache and relcache entries
for the backend (this seems right). At that point the invalidation info
for the backend vanishes.

Isn't that right?

> the messages at CommandCounterIncrement; it doesn't matter whether the
> other backends later commit or abort their own transactions.  I think.
> Do you have a counterexample?
>
> > 2. The tuples which should be invalidated in case of abort are different
> > from the ones in case of commit.
> > In case of commit, the deleted old tuples should be invalidated for all
> > backends.
> > In case of abort, the inserted (updated) new tuples should be invalidated
> > for the inserting (updating) backend.
>
> I wonder whether it wouldn't be cleaner to identify the target tuples
> by OID instead of ItemPointer.  That way would work for both new and
> updated tuples...

This may be a better way, because the cache entries which should be
invalidated are invalidated. However, we may also invalidate still-valid
cache entries by OID (it's not so harmful). Even time qualification is
useless in this case.

> > Currently heap_insert() calls RelationInvalidateHeapTuple() for a
> > newly inserted tuple, but heap_replace() doesn't call
> > RelationInvalidateHeapTuple() for the updated new tuple. I don't
> > understand which is right.
>
> Hmm.  Invalidating the old tuple is the right thing for heap_replace in
> terms of sending a message to other backends at commit; it's the old
> tuple that they might have cached and need to get rid of.  But for
> getting rid of this backend's uncommitted new tuple in case of abort,
> it's not the right thing.  OTOH, your change to add a time qual check
> to SearchSysCache would fix that, wouldn't it?

Probably, because time qualification is applied to uncommitted
tuples.

Regards.

Hiroshi Inoue
Inoue@tpf.co.jp
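A minimal, self-contained sketch of the command-boundary flow described in the mail above. The names CommandCounterIncrement(), AtCommit_Cache() and AtStart_Cache() are taken from the discussion; their bodies, the queue structures, and every other identifier here are simplified assumptions for illustration only, not the actual backend source.

```c
/*
 * Sketch, assuming a trivial in-memory stand-in for the shared SI queue:
 * local invalidation messages are pushed out and consumed at every
 * command boundary, so nothing remains to act on if the transaction
 * later aborts.
 */
#include <stdio.h>

#define MAX_MSGS 32

typedef struct { unsigned int tupleOid; } InvalidationMessage;

static InvalidationMessage localQueue[MAX_MSGS]; /* this backend's pending messages */
static int localCount;
static InvalidationMessage sharedSI[MAX_MSGS];   /* stand-in for the shared SI segment */
static int sharedCount;

static void
AtCommit_Cache(void)
{
    /* Push the local messages into the shared SI area and forget them locally. */
    for (int i = 0; i < localCount; i++)
        sharedSI[sharedCount++] = localQueue[i];
    localCount = 0;             /* the local record is gone after this point */
}

static void
AtStart_Cache(void)
{
    /* Consume pending SI messages: drop the affected cache entries. */
    for (int i = 0; i < sharedCount; i++)
        printf("invalidate cache entries for tuple OID %u\n", sharedSI[i].tupleOid);
    sharedCount = 0;
}

static void
CommandCounterIncrement(void)
{
    /*
     * As noted in the mail, this runs at every command boundary, not only
     * at transaction end, so the invalidation info vanishes well before a
     * later abort could use it.
     */
    AtCommit_Cache();
    AtStart_Cache();
}

int
main(void)
{
    localQueue[localCount++].tupleOid = 1259;   /* pretend some catalog tuple changed */
    CommandCounterIncrement();                  /* the message is consumed here */
    /* ... if the transaction aborts now, no record of the change remains */
    return 0;
}
```

The point the sketch tries to make concrete is the one raised in the reply: because AtCommit_Cache() also runs from CommandCounterIncrement(), the local invalidation record is consumed at every command boundary, leaving nothing to roll back in the cache on abort.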