Re: Reducing the memory footprint of large sets of pending triggers - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Reducing the memory footprint of large sets of pending triggers
Date:
Msg-id: 27581.1224938919@sss.pgh.pa.us
In response to: Re: Reducing the memory footprint of large sets of pending triggers (Simon Riggs <simon@2ndQuadrant.com>)
List: pgsql-hackers
Simon Riggs <simon@2ndQuadrant.com> writes:
> A much better objective would be to remove duplicate trigger calls, so
> there isn't any build up of trigger data in the first place. That would
> apply only to immutable functions. RI checks certainly fall into that
> category.

They're hardly "duplicates": each event is for a different tuple.

For RI checks, once you get past a certain percentage of the table it'd
be better to throw away all the per-tuple events and do a full-table
verification a la RI_Initial_Check().  I've got no idea about a sane
way to make that happen, though.
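
(For context, the full-table verification RI_Initial_Check performs comes down to an outer-join query of roughly this shape; the table and column names below are illustrative only, and multi-column keys just add more join/null conditions:)

    -- find FK rows with no matching PK row; any row returned is a violation
    SELECT fk.fkcol
      FROM ONLY fktable fk
      LEFT OUTER JOIN ONLY pktable pk ON pk.pkcol = fk.fkcol
     WHERE pk.pkcol IS NULL
       AND fk.fkcol IS NOT NULL;

One such query per constraint could, in principle, stand in for the piled-up per-tuple events once enough of the table has been touched.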
        regards, tom lane

