"Eric B.Ridge" <ebr@tcdi.com> writes:
> [ update timestamp via a rule ]
> explain analyze update foo_view set id = 1 where id = 1;
> Average runtime for 10 executions: 0.165ms
> [ update timestamp via a trigger ]
> explain analyze update foo2 set id = 1 where id = 1;
> Average runtime for 10 executions: 0.328ms
This surprises me. There's a moderate amount of overhead involved in
a plpgsql trigger, but I'd not have thought it would swamp the added
inefficiencies involved in a rule. Notice that you're getting a double
indexscan in the rule case --- that takes more time to plan, and more
time to execute (observe the nearly double actual time for the top level
plan node).
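
For reference, the two setups presumably look something like this (the
definitions weren't quoted, so the timestamp column, rule, trigger, and
function names below are guesses on my part):

    -- rule-on-a-view version (assumed definitions, not from the original mail)
    create table foo (id integer primary key, last_updated timestamptz);
    create view foo_view as select id from foo;
    create rule foo_view_upd as on update to foo_view
        do instead
        update foo set id = new.id, last_updated = now()
        where id = old.id;

    -- trigger version (assumed definitions, not from the original mail)
    create table foo2 (id integer primary key, last_updated timestamptz);
    create function foo2_touch() returns trigger as '
    begin
        new.last_updated := now();
        return new;
    end;
    ' language plpgsql;
    create trigger foo2_touch before update on foo2
        for each row execute procedure foo2_touch();

Assuming a rule along those lines, the rewriter expands the UPDATE on
foo_view into an UPDATE on foo joined back against the view's own expansion
of foo --- roughly

    update foo set id = 1, last_updated = now()
    from foo old_foo
    where old_foo.id = 1 and foo.id = old_foo.id;

--- and that self-join is where the second indexscan comes from.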
What were you averaging here --- just the "total runtime" reported by
EXPLAIN ANALYZE? It would be interesting to factor in the planning time
too. Could you retry this and measure the total elapsed time? (psql's
\timing command will help.)
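That is, something like

    \timing
    update foo_view set id = 1 where id = 1;
    update foo2 set id = 1 where id = 1;

The Time: figure psql prints is total elapsed time as seen by the client, so
it picks up the parse/rewrite/plan overhead that EXPLAIN ANALYZE's "total
runtime" number doesn't count.
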
regards, tom lane