On Thu, May 22, 2014 at 10:52 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jeff Janes <jeff.janes@gmail.com> writes:
>> On Wed, May 21, 2014 at 7:48 PM, Joe Van Dyk <joe@tanga.com> wrote:
>>> I was expecting that the RI update triggers would have a "when (new.key is
>>> distinct from old.key)" condition on them, which would mean that the number
>>> of referencing tables wouldn't matter.
>
>> But that condition is checked for each constraint individually, not for all
>> constraints simultaneously. A table can be referenced on multiple
>> combinations of columns, so just one check may not suffice. I guess the
>> triggers could be organized into groups of identical firing criteria and
>> then checked only once per group, but that seems like a pretty obscure
>> optimization to make. I don't know how you would reorganize such groupings
>> in a concurrency-safe way when constraints were added or removed.
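(A minimal sketch of the multiple-column-combinations case Jeff describes --
the table names are hypothetical:

CREATE TABLE parent (
    id   int  PRIMARY KEY,
    code text UNIQUE
);

-- Two constraints referencing parent on different columns: a shared
-- "OLD.id IS DISTINCT FROM NEW.id" test could skip the first constraint's
-- triggers, but says nothing about whether code changed for the second.
CREATE TABLE child (
    parent_id   int  REFERENCES parent (id),
    parent_code text REFERENCES parent (code)
);
)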
>
> FWIW, I profiled this example (after cranking it up to 500 target tables
> just because). AFAICT the primary component of the runtime increase is
> query startup overhead associated with the increased number of target
> tables.
I must be missing something; there's only one table being updated, isn't there?
start_time := clock_timestamp();
FOR i IN 1..100000 LOOP
    UPDATE test_fk SET junk = ' '
    WHERE id = i;
END LOOP;
end_time := clock_timestamp();
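(For reference, the same loop as a self-contained DO block -- just a sketch,
assuming the test_fk table from the earlier test case:

DO $$
DECLARE
    start_time timestamptz;
    end_time   timestamptz;
BEGIN
    start_time := clock_timestamp();
    FOR i IN 1..100000 LOOP
        UPDATE test_fk SET junk = ' '
        WHERE id = i;
    END LOOP;
    end_time := clock_timestamp();
    RAISE NOTICE 'elapsed: %', end_time - start_time;
END
$$;
)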
Joe
> If the UPDATE were touching more than one tuple then it might
> get to the point where per-tuple costs dominate, but it's not there in
> this example. If we tried to do something like what Jeff suggests to
> improve the per-tuple costs, it could actually make this example slower
> by adding more startup overhead.
>
> regards, tom lane
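(To make Tom's last point concrete: a hypothetical variant of the same test
that touches all the rows in one statement pays the query startup cost once,
so per-tuple costs would dominate instead:

-- One statement, one round of startup overhead, 100000 modified tuples.
UPDATE test_fk SET junk = ' '
WHERE id BETWEEN 1 AND 100000;
)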