Thread: Insert behavior in transaction

From
"Andres"
Date:
Hi

I'm having a problem that looks like a deadlock.

I have a batch that performs simple inserts into a single table.
The only constraint on this table is that one of its
columns must be unique.

There are many of those batches running concurrently.

I want the batch to fail and discard any changes if any error
occurs.

It sounds very easy: at the beginning of the batch I execute BEGIN,
then the inserts described above, and then COMMIT. If a problem
occurs, ROLLBACK is executed.

It works great for a single batch, but when many batches run
concurrently and they conflict (insert the same value into the
column that is supposed to be unique), the processes freeze and
none of them gets committed.

How can I fix it ???

I thought the expected behavior would be different.
I did this simple test, and it fails too.

BEGIN (first transaction)
INSERT INTO mytable VALUES(1);

On another client
BEGIN (second transaction)
INSERT INTO mytable VALUES(0);
INSERT INTO mytable VALUES(1);

and it freezes, waiting for the first transaction to commit or roll back.

Afterwards, the first transaction executes
INSERT INTO mytable VALUES(0);

and, there you go, a dead lock.

I am using the READ COMMITTED transaction isolation level, which is
the Postgres default; I also tried SERIALIZABLE.

Shouldn't the second client accept the insert within its own
transaction, and the same for the first client, since a
transaction can only see changes made before it started?
That way every client could finish its operations, and at
commit time one could still fail if the other transaction
committed first.

Of course, this could be a misunderstanding on my part of
transaction behavior in SQL92. But how can I fix it?

I don't care which transaction commits the changes, but I want
at least (and at most) one of them to finish.
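For illustration (not part of the original message): the fail-and-discard
batch behavior described above can be sketched in Python, with SQLite
standing in for Postgres so the example is self-contained. The table name
mytable comes from the thread; the retry helper and its parameters are
assumptions for the sketch.

```python
import sqlite3

# Hypothetical setup: a table whose "val" column must be unique.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (val INTEGER UNIQUE)")
conn.commit()

def run_batch(conn, values, retries=3):
    """Try to insert all values in one transaction; on a unique-key
    violation, roll back everything and retry the whole batch."""
    for attempt in range(retries):
        try:
            with conn:  # opens a transaction, commits on success
                conn.executemany(
                    "INSERT INTO mytable (val) VALUES (?)",
                    [(v,) for v in values],
                )
            return True   # batch committed
        except sqlite3.IntegrityError:
            # "with conn" has already rolled back: every insert
            # from this attempt is discarded, as the thread wants.
            continue
    return False  # gave up: at most one conflicting batch wins

print(run_batch(conn, [1, 2, 3]))   # True
print(run_batch(conn, [4, 1, 5]))   # False: 1 already exists
count = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
print(count)  # 3 -- nothing from the failed batch remains
```

The key point is that the failed batch is discarded atomically: the 4 and 5
inserted before the conflict on 1 do not survive the rollback.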

Thanks in advance.

Andres


Re: Insert behavior in transaction

From
Tom Lane
Date:
"Andres" <andres1981@gawab.com> writes:
> I did this simple test and it fails too.

> BEGIN (first transaction)
> INSERT INTO mytable VALUES(1);

> On another client
> BEGIN (second transaction)
> INSERT INTO mytable VALUES(0);
> INSERT INTO mytable VALUES(1);

> and it freezes waiting for the first to commit or rollback

Not too surprising if the second insert would imply a unique-key
violation.  It has to wait to see if the first insertion of "1"
is going to commit or not.

            regards, tom lane

Re: Insert behavior in transaction

From
"Andres"
Date:
But is there a solution?

You are right, the second insert implies a unique-key violation,
but what should be done in those cases?

Thanks for your help and attention!


Tom Lane writes:

> "Andres" <andres1981@gawab.com> writes:
>> I did this simple test and it fails too.
>
>> BEGIN (first transaction)
>> INSERT INTO mytable VALUES(1);
>
>> On another client
>> BEGIN (second transaction)
>> INSERT INTO mytable VALUES(0);
>> INSERT INTO mytable VALUES(1);
>
>> and it freezes waiting for the first to commit or rollback
>
> Not too surprising if the second insert would imply a unique-key
> violation.  It has to wait to see if the first insertion of "1"
> is going to commit or not.
>
>             regards, tom lane
>