Thread: unknown error message
Hi,

I'm getting the following error from my Python script, which tries to insert lots of data in one transaction:

libpq.OperationalError: ERROR: failed to fetch new tuple for AFTER trigger

I have several AFTER triggers in place; which one raises this error? I'm sure I only INSERT data, no UPDATE nor DELETE, so for what reason could a trigger fail to fetch a new tuple? Can deferred constraints raise such errors? Because AFAICT no triggers should be invoked by the very SQL command which causes the error... strange enough!

Any hints on where to look for the bug? I'm on a PostgreSQL 8.2devel snapshot from 06/06/2006 (silly date :-), using PL/Python for triggers. I'm unable to paste the schema - it's quite large, and as said, I have no clue what part of it raises the error.

Thanks for your help.

Regards

Markus
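P.S. For context, the triggers are of this general shape - a made-up minimal example with placeholder names, not the actual schema:

-- illustrative only: a row-level AFTER INSERT trigger written in PL/Python
CREATE TABLE item (
    id   integer PRIMARY KEY,
    name text NOT NULL
);

CREATE FUNCTION note_insert() RETURNS trigger AS $$
    # TD["new"] is a dict holding the freshly inserted row
    plpy.notice("inserted item %s" % TD["new"]["id"])
    return None
$$ LANGUAGE plpythonu;

CREATE TRIGGER item_after_insert
    AFTER INSERT ON item
    FOR EACH ROW EXECUTE PROCEDURE note_insert();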
Markus Schiltknecht <markus@bluegap.ch> writes:
> I'm getting the following error from my python script, which tries to
> insert lots of data in one transaction:
> libpq.OperationalError: ERROR: failed to fetch new tuple for AFTER trigger

That's not supposed to happen. Can you provide a test case?

			regards, tom lane
Hello Tom,

In the meantime I've traced the problem down to a CONSTRAINT declared as DEFERRABLE INITIALLY DEFERRED. As soon as I remove that constraint, the transaction commits nicely. BTW: I'm running in ISOLATION LEVEL SERIALIZABLE, if that matters.

Has something changed so that it's worth trying current CVS? I'll try to come up with a test case; the problem is not easy to isolate, though.

Regards

Markus

Tom Lane wrote:
> Markus Schiltknecht <markus@bluegap.ch> writes:
>> I'm getting the following error from my python script, which tries to
>> insert lots of data in one transaction:
>> libpq.OperationalError: ERROR: failed to fetch new tuple for AFTER trigger
>
> That's not supposed to happen. Can you provide a test case?
>
> 			regards, tom lane
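P.S. The constraint in question is a deferred foreign key of this general shape (table and constraint names are placeholders, not the real schema):

-- illustrative only: the kind of DEFERRABLE INITIALLY DEFERRED constraint involved
CREATE TABLE parent (
    id integer PRIMARY KEY
);

CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer,
    CONSTRAINT child_parent_fk
        FOREIGN KEY (parent_id) REFERENCES parent (id)
        DEFERRABLE INITIALLY DEFERRED
);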
Markus Schiltknecht <markus@bluegap.ch> writes:
> In the meantime I've traced the problem down to a CONSTRAINT declared
> as DEFERRABLE INITIALLY DEFERRED. As soon as I remove that constraint,
> the transaction commits nicely. BTW: I'm running in ISOLATION LEVEL
> SERIALIZABLE, if that matters.

The deferred constraint is probably the victim here (ie, it's responsible for there being a trigger to fire). The question is: where did its tuple go?

> Has something changed so that it's worth trying current CVS? I'll try to
> come up with a test case; the problem is not easy to isolate, though.

Right offhand the only way that I could see for the tuple to disappear before the trigger fires is if a concurrent VACUUM removed it, which should not happen for a tuple inserted by a still-active transaction. If you've got autovacuum running in the background, the unpredictability of vacuum timing might be contributing to the difficulty of reproducing.

			regards, tom lane
Markus Schiltknecht <markus@bluegap.ch> writes:
> Sorry, I was unclear: luckily the error is easily reproducible. The
> problem is just to cut unnecessary parts and to come up with a small
> test case. I'll give it a try tomorrow.

> As far as VACUUM is concerned: I run this data-loading script just after
> CREATE DATABASE. In my tests, not more than some 100 rows (rough guess)
> are inserted until the error occurs. It would surprise me if that would
> have fired an autovacuum every time I tried.

Yeah, that makes it sound more like a garden-variety bug. Please do try to make a test case.

			regards, tom lane
On Mon, 2006-07-24 at 14:54 -0400, Tom Lane wrote:
> Right offhand the only way that I could see for the tuple to disappear
> before the trigger fires is if a concurrent VACUUM removed it, which
> should not happen for a tuple inserted by a still-active transaction.
> If you've got autovacuum running in the background, the unpredictability
> of vacuum timing might be contributing to difficulty of reproducing.

Sorry, I was unclear: luckily the error is easily reproducible. The problem is just to cut unnecessary parts and to come up with a small test case. I'll give it a try tomorrow.

As far as VACUUM is concerned: I run this data-loading script just after CREATE DATABASE. In my tests, not more than some 100 rows (rough guess) are inserted until the error occurs. It would surprise me if that would have fired an autovacuum every time I tried. Plus, as you say, it all happens in the same transaction. (Oh, and I even tried with a sleep(5) before the commit, and the error still occurred. I guess an autovacuum most probably won't take that long with so few rows.)

Regards

Markus
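P.S. Stripped down to its bare shape, the load the script performs looks roughly like this (placeholder names again; the real script issues the statements from Python over libpq):

-- simplified shape of the failing load; INSERTs only, all in one transaction
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- the foreign keys are INITIALLY DEFERRED, so their checks are queued until COMMIT
INSERT INTO parent (id) VALUES (1);
INSERT INTO child (id, parent_id) VALUES (1, 1);
-- ... at most a few hundred more INSERTs, no UPDATE or DELETE ...
COMMIT;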