Tom Lane wrote:
SHOBHA HALDONKAR <shobha@phildigital.com> writes:
Suppose a record is locked with an UPDATE command and another concurrent
user tries to lock the same record, but the first transaction is never
committed for some reason, e.g. the machine hangs. Both transactions then
remain locked for an indefinite period. I have checked the deadlock_timeout
setting in postgresql.conf, which is set to 1000. Then why doesn't the
transaction detect a deadlock after 1000 milliseconds?
Waiting for a client that chooses not to commit is not a deadlock.
regards, tom lane
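To make the difference concrete, here is a rough two-session sketch (the
accounts table and its columns are invented for illustration, not taken from
the original post):

    -- Plain lock wait (the situation described above):

    -- Session A:
    BEGIN;
    UPDATE accounts SET balance = 0 WHERE id = 1;  -- locks row 1; A never commits

    -- Session B:
    BEGIN;
    UPDATE accounts SET balance = 1 WHERE id = 1;  -- blocks until A commits or rolls back

    -- Only B is waiting; there is no cycle, so deadlock_timeout never applies.

    -- A real deadlock, which IS detected roughly deadlock_timeout (1000 ms) later:

    -- Session A:
    BEGIN;
    UPDATE accounts SET balance = 0 WHERE id = 1;

    -- Session B:
    BEGIN;
    UPDATE accounts SET balance = 0 WHERE id = 2;

    -- Session A:
    UPDATE accounts SET balance = 0 WHERE id = 2;  -- waits on B

    -- Session B:
    UPDATE accounts SET balance = 0 WHERE id = 1;  -- waits on A: a cycle, so one
                                                   -- session gets "ERROR: deadlock detected"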
I used to be unclear on some of this, so I think I understand your confusion. This is how it works:
The database assumes that you know what you are doing; if a transaction is not committed or rolled back, the lock remains (in 7.3 you can have transactions time out as well).
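For the 7.3 timeout, what I have in mind is the statement_timeout setting: a
statement that is stuck waiting on a lock gets cancelled instead of waiting
forever. A minimal sketch (the 5000 ms value and the accounts table are just
examples):

    SET statement_timeout = 5000;   -- milliseconds; 0 means disabled
    BEGIN;
    UPDATE accounts SET balance = 1 WHERE id = 1;
    -- If the row stays locked by an uncommitted transaction for more than
    -- 5 seconds, this statement is cancelled with an error rather than
    -- blocking indefinitely.
    COMMIT;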
If a connection drops and you therefore cannot terminate the transaction yourself, the DB terminates it for you (with a ROLLBACK).
Anywhere you have to wait for user interaction to complete an operation, AND you did a write before you started waiting for the user, you need to find a way to break the operation up into two transactions; otherwise you WILL lock the DB (if the user goes to the restroom, or to lunch, or their keyboard breaks, ...).
My approach is usually to store anything I need to write in the front end or middle layer, and actually write to the DB only after the user interaction is complete (with no delays between the start and end of the transaction).
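As a sketch of what I mean (the orders and order_log tables are made up):
collect everything from the user with no transaction open, then do all the
writes at once, so the row locks last only milliseconds:

    BEGIN;
    UPDATE orders SET status = 'approved', approved_by = 42 WHERE id = 1001;
    INSERT INTO order_log (order_id, action) VALUES (1001, 'approved');
    COMMIT;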
Another approach is to set up the DB so that you can do your writes in multiple transactions without compromising data validity, although sometimes you need to run a periodic clean-up to purge operations that never completed.
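One hedged way to do that, assuming an invented pending_orders table with a
status and a timestamp column:

    -- Transaction 1: record the half-finished work, then release all locks.
    BEGIN;
    INSERT INTO pending_orders (id, status, started_at)
        VALUES (1001, 'in_progress', now());
    COMMIT;

    -- Transaction 2 (later, once the user has finished): complete the operation.
    BEGIN;
    UPDATE pending_orders SET status = 'done' WHERE id = 1001;
    COMMIT;

    -- Periodic clean-up: purge operations that never completed.
    DELETE FROM pending_orders
     WHERE status = 'in_progress'
       AND started_at < now() - interval '1 hour';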
Beyond that... get creative! :-)
Andrew