Re: Making AFTER triggers act properly in PL functions - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Making AFTER triggers act properly in PL functions
Date:
Msg-id: 24407.1094681038@sss.pgh.pa.us
In response to: Re: Making AFTER triggers act properly in PL functions  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
I wrote:
> Actually, I'd really like to get it back down to the 7.4 size, which was
> already too big :-(.  That might be a vain hope though.

As long as we're talking about hack-slash-and-burn on this data
structure ...

The cases where people get annoyed by the size of the deferred trigger
list are nearly always cases where the exact same trigger is to be fired
on a large number of tuples from the same relation (ie, we're doing a
mass INSERT, mass UPDATE, etc).  Since it's the exact same trigger, all
these events must have identical deferrability properties, and will all
be fired (or not fired) at the same points.

So it seems to me that we could refactor the data structure into some
per-trigger stuff (tgoid, relid, xid, flag bits) associated with an
array of per-event records that hold only the old/new ctid fields, and
get it down to about 12 bytes per tuple instead of forty-some.

However this would lose the current properties concerning event
firing order.  Could we do something where each event stores just
a pointer to some per-trigger data (shared across all like events)
plus the old and new ctid fields?  16 bytes is still way better than
44.

Thoughts?  Am I missing some reason why we could not share status data
across multiple tuples, if their events are otherwise identical?  If
we fail partway through processing the trigger events, I don't see that
we care exactly where.
        regards, tom lane

