Thread: Triggers do not fire
Hallo!

I have written a very small test procedure to show a possible error on
PostgreSQL V7.1.1. A trigger does not fire the function on UPDATE, but it
does on INSERT. Has anyone had the same experience?

Here is a short example which does nothing important, it just shows the
situation:

DROP TABLE test;
CREATE TABLE test (
    "sensor_id" int4     NOT NULL,
    "epoch"     datetime NOT NULL,
    "value"     float4   NOT NULL,
    PRIMARY KEY (sensor_id, epoch));

DROP FUNCTION f_test();
CREATE FUNCTION f_test() RETURNS OPAQUE AS '
    BEGIN
        RAISE NOTICE ''Fired %'', TG_OP;
        RETURN NULL;
    END;
' LANGUAGE 'plpgsql';

DROP TRIGGER t_test ON test;
CREATE TRIGGER t_test BEFORE INSERT OR UPDATE ON test FOR EACH ROW
    EXECUTE PROCEDURE f_test();

INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
UPDATE test SET value = 1000.0
 WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;

The result is as follows:

INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
NOTICE:  Fired INSERT
INSERT 0 0
UPDATE test SET value = 1000.0
 WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
UPDATE 0

The INSERT notice is shown, but the UPDATE notice is not. Why?

--
Mit freundlichen Gruessen / With best regards
Reiner Dassing
> The result is as follows:
>
> INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
> NOTICE:  Fired INSERT
> INSERT 0 0
> UPDATE test SET value = 1000.0
>  WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
> UPDATE 0
>
> The INSERT notice is shown, but the UPDATE notice is not. Why?

My guess: because there are no rows for the UPDATE to match. By returning
NULL you are aborting the INSERT (see the "INSERT 0 0"), so there are no
rows left for the UPDATE to act on, and therefore no trigger is run.
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> I have written a very small test procedure to show a possible error
> on PostgreSQL V7.1.1.

The error is yours: you set up the trigger function to return NULL,
which means it's telling the system not to allow the INSERT or UPDATE.

> INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
> NOTICE:  Fired INSERT
> INSERT 0 0

Note the summary line saying that zero rows were inserted.

> UPDATE test SET value = 1000.0
>  WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
> UPDATE 0

Here, zero rows were updated, so of course there was nothing to fire
the trigger on.

			regards, tom lane
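[A minimal corrected version of the trigger function, sketched from the
advice above and not part of the original exchange: a BEFORE ... FOR EACH
ROW function has to return NEW (or OLD, for DELETE) for the row operation
to proceed. RETURNS OPAQUE is the 7.1-era spelling; later releases use
RETURNS trigger.]

CREATE FUNCTION f_test() RETURNS OPAQUE AS '
    BEGIN
        RAISE NOTICE ''Fired %'', TG_OP;
        RETURN NEW;   -- let the INSERT/UPDATE go ahead instead of cancelling it
    END;
' LANGUAGE 'plpgsql';

[Note also that the test UPDATE uses epoch = '2000-10-11 12:10:00' while
the inserted row has epoch '2000-10-11 12:00:00', so the WHERE clause must
reference the stored timestamp before the UPDATE trigger can fire on any
row.]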
Your UPDATE trigger is fired FOR EACH ROW, and no rows are updated, i.e. no
trigger is fired!

On Tue, 16 Oct 2001 14:48:59 +0200, Reiner Dassing
<dassing@wettzell.ifag.de> wrote:
> Hallo!
>
> I have written a very small test procedure to show a possible error
> on PostgreSQL V7.1.1. A trigger does not fire the function on UPDATE,
> but it does on INSERT. Has anyone had the same experience?
>
> Here is a short example which does nothing important, it just shows
> the situation:
>
> DROP TABLE test;
> CREATE TABLE test (
>     "sensor_id" int4     NOT NULL,
>     "epoch"     datetime NOT NULL,
>     "value"     float4   NOT NULL,
>     PRIMARY KEY (sensor_id, epoch));
>
> DROP FUNCTION f_test();
> CREATE FUNCTION f_test() RETURNS OPAQUE AS '
>     BEGIN
>         RAISE NOTICE ''Fired %'', TG_OP;
>         RETURN NULL;
>     END;
> ' LANGUAGE 'plpgsql';
>
> DROP TRIGGER t_test ON test;
> CREATE TRIGGER t_test BEFORE INSERT OR UPDATE ON test FOR EACH ROW
>     EXECUTE PROCEDURE f_test();
>
> INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
> UPDATE test SET value = 1000.0
>  WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
>
> The result is as follows:
>
> INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
> NOTICE:  Fired INSERT
> INSERT 0 0
> UPDATE test SET value = 1000.0
>  WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
> UPDATE 0
>
> The INSERT notice is shown, but the UPDATE notice is not. Why?
>
> --
> Mit freundlichen Gruessen / With best regards
> Reiner Dassing

Aasmund Midttun Godal

aasmund@godal.com - http://www.godal.com/
+47 40 45 20 46
Hello Tom!

Thank you for your help! Your hints solved the problem as asked: the UPDATE
trigger fires if there is a row in table "test" to be updated.

But behind my question there is another design (philosophy) which I am
trying to implement by means of different triggers. This is off topic, but
maybe you or somebody on the list can give a hint on how to solve the
following task:

I have a table which has a lot of entries (some x million) of the kind
(id, timestamp, value), and the accesses (SELECTs) are concentrated on the
most recent few thousand entries.

To accommodate this, I want to set up a "virtual" table - "test" in my
example - which is accessed by the clients, while in reality the entries
are spread over different small tables. These tables are created
dynamically to hold the values, separated by year. By the use of triggers
I can redirect INSERTs on "test" to the small tables, called test_x_y with
x = id and y = year.

To UPDATE an entry, however, the row must exist in the main table, so this
approach does not work. Rules do not work either, since I must build the
table names dynamically, and the execution of plpgsql functions is not
possible in rules. Correct?

Another possible approach would be SELECTs that give back results to be
used as table names, i.e. it would be necessary to have something like

    CREATE TABLE f(NEW.val) AS SELECT ...

where f(...) gives back the name of a table.

Tom Lane wrote:
>
> Reiner Dassing <dassing@wettzell.ifag.de> writes:
> > I have written a very small test procedure to show a possible error
> > on PostgreSQL V7.1.1.
>
> The error is yours: you set up the trigger function to return NULL,
> which means it's telling the system not to allow the INSERT or UPDATE.
>
> > INSERT INTO test VALUES (1, '2000-10-11 12:00:00', -20.2);
> > NOTICE:  Fired INSERT
> > INSERT 0 0
>
> Note the summary line saying that zero rows were inserted.
>
> > UPDATE test SET value = 1000.0
> >  WHERE epoch = '2000-10-11 12:10:00' AND sensor_id = 1;
> > UPDATE 0
>
> Here, zero rows were updated, so of course there was nothing to fire
> the trigger on.
>
> 			regards, tom lane

--
Mit freundlichen Gruessen / With best regards
Reiner Dassing
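[A rough sketch of the redirect-on-INSERT idea described above - an
editorial illustration, not from the original thread. It assumes the
per-year tables test_<id>_<year> already exist, and that PL/pgSQL's
EXECUTE and quote_literal() are available in the version at hand; on 7.1
the literal quoting may have to be built by hand.]

CREATE FUNCTION f_route_insert() RETURNS OPAQUE AS '
    DECLARE
        tbl text;
    BEGIN
        -- Hypothetical naming scheme: test_<sensor_id>_<year>.
        tbl := ''test_'' || NEW.sensor_id || ''_''
                         || to_char(NEW.epoch, ''YYYY'');
        EXECUTE ''INSERT INTO '' || tbl || '' VALUES (''
                || NEW.sensor_id || '', ''
                || quote_literal(NEW.epoch) || '', ''
                || NEW.value || '')'';
        RETURN NULL;   -- intentional here: keep the row out of "test" itself
    END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER t_route BEFORE INSERT ON test FOR EACH ROW
    EXECUTE PROCEDURE f_route_insert();

[This only covers INSERT; as described above, UPDATEs against "test" still
find no rows there, which is exactly the limitation being raised.]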
Reiner Dassing <dassing@wettzell.ifag.de> writes:
> I have a table which has a lot of entries (some x million) of the kind
> (id, timestamp, value), and the accesses (SELECTs) are concentrated on
> the most recent few thousand entries.
> To accommodate this, I want to set up a "virtual" table - "test" in my
> example - which is accessed by the clients, while in reality the entries
> are spread over different small tables. These tables are created
> dynamically to hold the values, separated by year.

Why bother?  Seems like you are just making life complicated for
yourself.  One big table with a suitable index ought to work fine.

			regards, tom lane
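[Concretely, the "suitable index" could be the one the PRIMARY KEY
(sensor_id, epoch) already creates, plus an index on the timestamp column
alone for time-only range queries such as the one shown in the next
message. A sketch against the example table, with a hypothetical index
name:]

CREATE INDEX test_epoch_idx ON test (epoch);

-- Queries that touch only the recent part of the table can then use an
-- index scan instead of reading every row:
SELECT count(*) FROM test WHERE epoch > '2001-10-01';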
I can vouch for that. I have several tables with 10 to 16 million entries
in much the same sort of setup as you are describing (primary key,
timestamp, value). PostgreSQL will quite happily use the timestamp indexes
when accessing these tables, and it doesn't fall back to a sequential scan
until a considerable number of tuples would have to be searched. For
example:

processdata=> explain select count(*) from caseweights1 where dt > '2001-10-11';
NOTICE:  QUERY PLAN:

Aggregate  (cost=255053.37..255053.37 rows=1 width=0)
  ->  Index Scan using caseweights1_dt_idx on caseweights1
        (cost=0.00..254827.01 rows=90544 width=0)

EXPLAIN
processdata=> select count(*) from caseweights1 where dt > '2001-10-11';
 count
--------
 146773
(1 row)

processdata=> select count(*) from caseweights1;
  count
----------
 14984087
(1 row)

As you can see, even though my table is fairly large, PostgreSQL will
happily use indexes even when a significant number of tuples is to be
accessed. The count using the index took perhaps a second on my 400MHz,
128MB RAM, plain-IDE test server. The count of all the records, on the
other hand, triggered a sequential scan that took a long time to complete.

In other words, chances are good that PostgreSQL will handle your data
without special modification.

Jason

--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Reiner Dassing <dassing@wettzell.ifag.de> writes:
> > I have a table which has a lot of entries (some x million) of the kind
> > (id, timestamp, value), and the accesses (SELECTs) are concentrated on
> > the most recent few thousand entries.
> > To accommodate this, I want to set up a "virtual" table - "test" in my
> > example - which is accessed by the clients, while in reality the
> > entries are spread over different small tables. These tables are
> > created dynamically to hold the values, separated by year.
>
> Why bother?  Seems like you are just making life complicated for
> yourself.  One big table with a suitable index ought to work fine.
>
> 			regards, tom lane