Tsirkin Evgeny wrote:
> On Mon, 6 Sep 2004, Michael Paesold wrote:
> Doesn't the Serializable isolation level ensure that?
> What I thought is that while using this level, BEGIN and COMMIT
> behave the same as the code you wrote,
> since the second concurrent transaction (the one that started a little
> later) should wait for the first one and only then DELETE.
> Is that not true?
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DELETE FROM schedule WHERE studentid = ... ;
INSERT INTO schedule (studentid, ...) VALUES (... );
INSERT INTO schedule (studentid, ...) VALUES (... );
COMMIT;
If you do it like in the SQL code above, there is still a problem. The
serializable check only detects the conflict if DELETE FROM schedule...
actually finds at least one row. If it matches no rows, no serialization
failure is raised, so it is still possible for your program to create
duplicates.
I have tested this here. I don't know whether this is specific to
PostgreSQL's serializable transactions (an MVCC limitation) or a general problem.
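To make the failure mode concrete, here is a sketch of the interleaving (the session labels and studentid value 42 are illustrative, not from your schema). Because neither DELETE finds a row, neither transaction reads anything the other wrote, so no serialization error is raised:

```sql
-- Session 1
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DELETE FROM schedule WHERE studentid = 42;   -- deletes 0 rows

-- Session 2 (interleaved)
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DELETE FROM schedule WHERE studentid = 42;   -- also deletes 0 rows

-- Session 1
INSERT INTO schedule (studentid, ...) VALUES (42, ...);
COMMIT;

-- Session 2
INSERT INTO schedule (studentid, ...) VALUES (42, ...);
COMMIT;   -- no serialization failure; studentid 42 now has duplicate rows
```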
> PS Doesn't perl::DBI use the serializable level?
I don't really know about DBI; you should check that yourself:
* check that autocommit is off
* check that transaction isolation level is serializable
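If you are unsure what DBI is doing (in DBI, the AutoCommit handle attribute controls autocommit), you can also check and set the isolation level from SQL over the same connection. A sketch, using PostgreSQL syntax:

```sql
-- show the isolation level currently in effect for this session
SHOW transaction_isolation;

-- make all following transactions on this connection serializable
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```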
Still, the serializable isolation level is not enough. An approach that
allows more concurrency is possible, but you have to make sure that all
relevant code does the locking in the same way:
(assuming you have a student table with studentid as primary/unique key)
BEGIN;
SELECT * FROM student WHERE studentid = ... FOR UPDATE;
-- lock the student record
DELETE FROM schedule WHERE studentid = ... ;
INSERT INTO schedule (studentid, ...) VALUES (... );
INSERT INTO schedule (studentid, ...) VALUES (... );
COMMIT;
This will not lock the whole schedule table, but only one row of the student
table.
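With the row lock in place, two concurrent runs serialize on the student row. A sketch of the blocking behavior (studentid 42 is again illustrative):

```sql
-- Session 1
BEGIN;
SELECT * FROM student WHERE studentid = 42 FOR UPDATE;  -- acquires the row lock

-- Session 2
BEGIN;
SELECT * FROM student WHERE studentid = 42 FOR UPDATE;  -- blocks, waiting for session 1

-- Session 1
DELETE FROM schedule WHERE studentid = 42;
INSERT INTO schedule (studentid, ...) VALUES (42, ...);
COMMIT;   -- releases the lock

-- Session 2 resumes only now, sees session 1's committed rows,
-- and its own DELETE + INSERTs run against up-to-date data.
```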
I hope that helps.
Best Regards,
Michael Paesold