Re: AW: Coping with huge deferred-trigger lists - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: AW: Coping with huge deferred-trigger lists
Date:
Msg-id: 10585.989695244@sss.pgh.pa.us
In response to: Re: AW: Coping with huge deferred-trigger lists (Hiroshi Inoue <Inoue@tpf.co.jp>)
List: pgsql-hackers

Hiroshi Inoue <Inoue@tpf.co.jp> writes:
>> I thought that this current placing of new rows at the end of the file is
>> subject to change soon (overwrite smgr)?

> Even under current smgr, new rows aren't necessarily at the end.

Hmm ... you're right, heap_update will try to store an updated tuple on
the same page as its original.

That doesn't make my suggestion unworkable, however, since this case is
unlikely to occur except on pages at or near the end of the file.  One
way to deal with it is to keep a list of pages (still not individual
tuples) that contain tuples we need to revisit for deferred triggers.
The list would be of the form "scan these individual pages, plus all
pages from point X to the end of the file", where point X would be at,
or perhaps a little before, the end of the file as it stood at the start
of the transaction.  Usually we would need to store explicit page
numbers for only a few pages.
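
To make the shape of that structure concrete, here is a minimal sketch
(not actual backend code; the names DeferredPageSet, dps_note_page, and
dps_need_scan are hypothetical) of a per-relation page set that combines
an explicit list of interior blocks with a single "tail of file" range:

    /* Hypothetical sketch: pages holding tuples to revisit for
     * deferred triggers, kept as an explicit block list plus a
     * threshold block beyond which every page is scanned.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef uint32_t BlockNumber;

    typedef struct DeferredPageSet
    {
        BlockNumber  scan_from;   /* "point X": scan all blocks >= this */
        BlockNumber *blocks;      /* explicitly listed blocks < scan_from */
        int          nblocks;
        int          maxblocks;
    } DeferredPageSet;

    /* Record that a page contains a tuple we must revisit. */
    void
    dps_note_page(DeferredPageSet *set, BlockNumber blk)
    {
        int i;

        if (blk >= set->scan_from)
            return;               /* covered by the tail-of-file range */

        for (i = 0; i < set->nblocks; i++)
            if (set->blocks[i] == blk)
                return;           /* already listed */

        if (set->nblocks >= set->maxblocks)
        {
            set->maxblocks = set->maxblocks ? set->maxblocks * 2 : 8;
            set->blocks = realloc(set->blocks,
                                  set->maxblocks * sizeof(BlockNumber));
        }
        set->blocks[set->nblocks++] = blk;
    }

    /* Does this page need rescanning at trigger-firing time? */
    bool
    dps_need_scan(const DeferredPageSet *set, BlockNumber blk)
    {
        int i;

        if (blk >= set->scan_from)
            return true;

        for (i = 0; i < set->nblocks; i++)
            if (set->blocks[i] == blk)
                return true;

        return false;
    }

The point is that dps_note_page only records a block explicitly when an
update happens to land on an interior page; everything at or beyond
scan_from, fixed at transaction start, is covered by one range, so the
explicit list stays small.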

BTW, thanks for pointing that out --- it validates my idea in another
thread that we can avoid locking on every single call to
RelationGetBufferForTuple, if it's OK to store newly inserted tuples
on pages that aren't necessarily last in the file.
        regards, tom lane

