Hi
I'm having a problem that looks like a deadlock.
I have a batch that performs simple inserts into a single table.
The only constraint on this table is that one of its columns must
be unique.
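For illustration, the table could look something like this (mytable and its single integer column are just placeholders, not my real schema):

CREATE TABLE mytable (
    val integer UNIQUE
);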
There are many of those batches running concurrently.
I want the batch to fail and discard any changes if any error
occurs.
It sounds very easy: at the beginning of the batch I execute BEGIN,
then the inserts described above, and finally COMMIT. If a problem
occurs, ROLLBACK is executed instead.
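So one batch, sketched with the same placeholder table and values, looks roughly like this:

BEGIN;
INSERT INTO mytable VALUES(1);
INSERT INTO mytable VALUES(2);
-- ... many more inserts ...
COMMIT;
-- if any insert raises an error: ROLLBACK;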
It works great for a single batch, but when many batches run
concurrently and they have problems (they insert the same value
into the column that is supposed to be unique), the processes
freeze and none of them get committed.
How can I fix it?
I expected the behavior to be different.
I did this simple test and it fails too:

First client:
BEGIN;                          -- first transaction
INSERT INTO mytable VALUES(1);

Second client:
BEGIN;                          -- second transaction
INSERT INTO mytable VALUES(0);
INSERT INTO mytable VALUES(1);  -- freezes, waiting for the first transaction to commit or roll back

Then the first client executes:
INSERT INTO mytable VALUES(0);  -- and there you go, a deadlock
I am using the READ COMMITTED transaction isolation level, which is
the Postgres default, and I also tried SERIALIZABLE.
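For the SERIALIZABLE test I set the level right after BEGIN, roughly like this:

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- the same inserts as above, then COMMIT or ROLLBACK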
Shouldn't the second client accept the insert within its own
transaction, and the same for the first client, since a transaction
can only see changes made before it started?
That way every client could finish its operations, and at commit
time one of them could still fail if the other transaction
committed first.
Of course, that may well be a misunderstanding of transaction
behavior in SQL92 on my part. But how can I fix it?
I don't care which transaction commits the changes, but I want at
least (and at most) one of them to finish.
Thanks in advance.
Andres