Thread: pgsql-function called twice in the same second -> Double Insert -> Error
Hi there,

I am administering an online (web) learning portal (OpenACS + Postgres) where students can solve multiple-choice exercises online. Every result of an exercise usage is saved to the database. Students can see which exercises they have solved successfully, and other stats. The whole exercise and testing module has worked quite well for weeks, but from time to time I noticed error messages: "cannot insert duplicate key into unique index..."

Now I think I have found the problem: there is a db-function that saves the result of every exercise usage, and this function is sometimes called twice in the exact same second (when the server is extremely busy). Probably the user clicked the form submit button in the browser twice.

The exercise usage process consists of 2 steps: When the student starts the exercise, the corresponding entry is "locked" (the id of the exercise is inserted into a locking table). Every exercise usage generates a usage_id that is recorded in the locking table, so the exercise cannot be deleted while it is in use. When an answer to the exercise is received, the exercise usage data is inserted into another table and the locking entry is removed.

So the function first tries to retrieve data from the locking table, saves (inserts) that data in another table and deletes the entry from the locking table. If this function is called twice at the same time, both runs get the data from the locking table and both try to do the insert. In the first run the commit succeeds, but in the second run an error is generated and the function fails.

Is this something I can circumvent on the Postgres (DB) level or do I have to deal with it at the application level (AOLserver - Tcl)?

Many TIA!
Peter

PS: The db-function I mentioned is attached.
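In outline, the function does something like this (a simplified sketch with made-up table and column names; the real code is in the attachment):

-- Simplified sketch of the function (placeholder names, not the real code).
CREATE FUNCTION record_exercise_result(integer, integer)
RETURNS integer AS '
DECLARE
    p_usage_id ALIAS FOR $1;
    p_points   ALIAS FOR $2;
    v_lock     RECORD;
BEGIN
    -- 1. fetch the locking entry created when the exercise was started
    SELECT INTO v_lock exercise_id, student_id
      FROM exercise_locks
     WHERE usage_id = p_usage_id;

    -- 2. save the result of this exercise usage
    INSERT INTO exercise_results (usage_id, exercise_id, student_id, points)
    VALUES (p_usage_id, v_lock.exercise_id, v_lock.student_id, p_points);

    -- 3. remove the locking entry
    DELETE FROM exercise_locks WHERE usage_id = p_usage_id;

    RETURN p_usage_id;
END;
' LANGUAGE 'plpgsql';

When two calls for the same usage_id run step 1 at the same time, both reach step 2, and the second INSERT hits the unique index on usage_id.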
Attachment
Re: pgsql-function called twice in the same second -> Double Insert -> Error
From: Shridhar Daithankar
On Tuesday 26 November 2002 07:25 pm, you wrote:
> Is this something I can circumvent on the Postgres (DB) level or do I
> have to deal with it at the application level (AOLserver - Tcl)?

Looking at the description, I think you don't need to worry about anything. The database is doing one transaction fine and aborting the other, which is fine as well, because a second call of the function with exactly the same parameters does not make any sense either.

If you don't like the error message, try sleeping in the function, or do that first select with SELECT ... FOR UPDATE so that the second call blocks on the row lock and cannot proceed until the first one has finished (a rough sketch follows below). That seems to be the better approach to me.

HTH
Shridhar
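A rough sketch of that idea, reusing the made-up names from the outline in the first mail (only step 1 of the function changes):

-- Step 1 with a row lock: a second call for the same usage_id blocks
-- here until the first transaction commits or rolls back.
SELECT INTO v_lock exercise_id, student_id
  FROM exercise_locks
 WHERE usage_id = p_usage_id
   FOR UPDATE;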
On Tue, 26 Nov 2002, Peter Alberer wrote:

> The exercise usage process consists of 2 steps: When the student starts
> the exercise, the corresponding entry is "locked" (the id of the
> exercise is inserted into a locking table). Every exercise usage
> generates a usage_id that is recorded in the locking table, so the
> exercise cannot be deleted while it is in use. When an answer to the
> exercise is received, the exercise usage data is inserted into another
> table and the locking entry is removed.
>
> So the function first tries to retrieve data from the locking table,
> saves (inserts) that data in another table and deletes the entry from
> the locking table. If this function is called twice at the same time,
> both runs get the data from the locking table and both try to do the
> insert. In the first run the commit succeeds, but in the second run an
> error is generated and the function fails.

You might want that first select to be a SELECT FOR UPDATE so that the second transaction trying to get the same row waits for the first transaction to run to completion.
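One detail worth noting (assuming the default read committed isolation level and the made-up names from the sketches above): once the first transaction commits and deletes the locking row, the waiting call's SELECT ... FOR UPDATE finds no row, so a FOUND check lets the second call return quietly instead of attempting the duplicate insert. A sketch of the revised function:

-- Revised sketch (placeholder names, not the actual attached function):
-- lock the row first, then bail out if another call already handled it.
CREATE OR REPLACE FUNCTION record_exercise_result(integer, integer)
RETURNS integer AS '
DECLARE
    p_usage_id ALIAS FOR $1;
    p_points   ALIAS FOR $2;
    v_lock     RECORD;
BEGIN
    -- Blocks here if a concurrent call already holds the lock row.
    SELECT INTO v_lock exercise_id, student_id
      FROM exercise_locks
     WHERE usage_id = p_usage_id
       FOR UPDATE;

    -- The concurrent call already inserted the result and deleted the
    -- locking entry, so there is nothing left to do.
    IF NOT FOUND THEN
        RETURN 0;
    END IF;

    INSERT INTO exercise_results (usage_id, exercise_id, student_id, points)
    VALUES (p_usage_id, v_lock.exercise_id, v_lock.student_id, p_points);

    DELETE FROM exercise_locks WHERE usage_id = p_usage_id;

    RETURN p_usage_id;
END;
' LANGUAGE 'plpgsql';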