Hi,
I've created a trigger that checks the uniqueness of two columns in a
table. Traditionally one would use a unique constraint, but in my case
the unique index would be too large, and some performance loss is
acceptable. However, the trigger's performance is far worse than what
EXPLAIN ANALYZE indicates for the query the trigger runs.
The unique fields are an indexed int8 (volume_id) and a text field
(name); on average, each volume_id has about 10,000 names.
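For reference, the table and index look roughly like this (simplified
to just the relevant columns; the index name matches the plan below):

CREATE TABLE t1 (
    volume_id  int8,
    name       text
    -- other columns omitted
);

CREATE INDEX volume_id_idx ON t1 (volume_id);
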
The query I'm using to check uniqueness in the trigger is:
...
IF (SELECT EXISTS (SELECT 1 FROM t1 WHERE
        volume_id = NEW.volume_id AND name = NEW.name)) THEN
    RAISE EXCEPTION '% already exists on volume', NEW.name;
END IF;
...
This trigger fires only BEFORE INSERT.
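Spelled out in full, the definition is essentially this (the function
and trigger names here are placeholders; the check itself is exactly
the one above):

CREATE OR REPLACE FUNCTION t1_unique_check() RETURNS trigger AS $$
BEGIN
    -- Reject the row if this (volume_id, name) pair already exists.
    IF (SELECT EXISTS (SELECT 1 FROM t1 WHERE
            volume_id = NEW.volume_id AND name = NEW.name)) THEN
        RAISE EXCEPTION '% already exists on volume', NEW.name;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_unique_check_trg
    BEFORE INSERT ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_unique_check();
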
In testing, inserts run at 10,000 rows every 2 seconds without the
trigger, and 10,000 every 40 seconds with it. The EXPLAIN ANALYZE
output suggests the check shouldn't be adding nearly that much time.
EXPLAIN ANALYZE SELECT EXISTS (
SELECT 1 FROM t1 WHERE volume_id = 300 AND name = 'whodat');
 Result  (cost=8.35..8.36 rows=1 width=0) (actual time=0.019..0.020 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Index Scan using volume_id_idx on t1  (cost=0.00..8.35 rows=1 width=0) (actual time=0.016..0.016 rows=0 loops=1)
           Index Cond: (volume_id = 300)
           Filter: (name = 'whodat'::text)
 Total runtime: 0.053 ms
(6 rows)
0.053 ms/record / 1,000 ms/sec * 10,000 records = 0.53 seconds
By that estimate, the lookup should add about 0.5 seconds per 10,000
records, nowhere near the roughly 38 seconds it actually adds. I've
tried varying the query in the trigger to see if I could get different
results, but without much luck. Any idea what might be taking up the
extra time, or how I can troubleshoot it?
Thanks.
-Randall