Re: Avoiding deadlocks when performing bulk update and delete operations - Mailing list pgsql-general

From Sanjaya Vithanagama
Subject Re: Avoiding deadlocks when performing bulk update and delete operations
Date
Msg-id CAMbKYynFYfjhys3p2-X1OgZsKdFnXmb5-FWUMyx_KCR24xs-rg@mail.gmail.com
In response to Re: Avoiding deadlocks when performing bulk update and delete operations  (Bill Moran <wmoran@potentialtech.com>)
Responses Re: Avoiding deadlocks when performing bulk update and delete operations
List pgsql-general


On Wed, Nov 26, 2014 at 11:47 AM, Bill Moran <wmoran@potentialtech.com> wrote:
On Wed, 26 Nov 2014 10:41:56 +1100
Sanjaya Vithanagama <svithanagama@gmail.com> wrote:
>
> > * How frequently do deadlocks occur?
>
> We are seeing deadlocks about 2-3 times per day in the production server.
> To reproduce the problem easily, we've written a simple Java class with
> multiple threads calling the stored procedures that run the above queries
> inside a loop. This way we can easily recreate a scenario that happens in
> production.

Don't overcomplicate your solution. Adjust your code to detect the deadlock
and replay the transaction when it happens. At 2-3 deadlocks per day, it's
difficult to justify any other solution (as any other solution would be
more time-consuming to implement, AND would interfere with performance).

When you say replay the transaction, I believe that means catching the exception inside the stored procedure? We considered that option at one stage, but the problem with that is we don't have enough context information at the point in the stored procedure where this deadlock occurs.
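For the archives, the in-procedure catch being discussed is usually a bounded retry loop on the deadlock_detected condition (SQLSTATE 40P01). A minimal sketch — the function, table, and column names (delete_b_rows, table_b, id_a) are hypothetical, not from the poster's schema:

```sql
-- Sketch: retry a statement up to 3 times when it is chosen
-- as a deadlock victim. Names are made up for illustration.
CREATE OR REPLACE FUNCTION delete_b_rows(p_id_a bigint)
RETURNS void AS $$
DECLARE
    attempts integer := 0;
BEGIN
    LOOP
        BEGIN
            DELETE FROM table_b WHERE id_a = p_id_a;
            RETURN;  -- success, leave the loop
        EXCEPTION
            WHEN deadlock_detected THEN
                attempts := attempts + 1;
                IF attempts >= 3 THEN
                    RAISE;  -- give up and propagate the error
                END IF;
                -- the EXCEPTION block rolled back the implicit
                -- subtransaction, so it is safe to loop and retry
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Each BEGIN ... EXCEPTION block runs in a subtransaction, so the failed DELETE is rolled back before the retry while the outer transaction survives; note this does retry only the statement, not the whole transaction, which is exactly the distinction raised above about lacking context at the point of failure.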

I've worked with a number of write-heavy applications that experienced
deadlocks, some of them on the order of hundreds of deadlocks per day.
In some cases, you can adjust the queries to reduce the incidence of
deadlocks, or eliminate the possibility of deadlocks completely.  The
situation that you describe is not one of those cases, as the planner
can choose to lock rows in whatever order it thinks is most efficient,
and you don't have direct control over that.
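For readers whose situation is less constrained than the one described here, the query adjustment alluded to above is typically to impose a deterministic lock order yourself, by locking the target rows explicitly (in primary-key order) before the UPDATE or DELETE. A sketch with made-up names (table_b, id_a, some_col):

```sql
-- Hypothetical example: take row locks in a fixed order (by primary key)
-- so two concurrent sessions cannot acquire them in opposite orders.
BEGIN;
SELECT 1
  FROM table_b
 WHERE id_a = 42
 ORDER BY id        -- deterministic lock acquisition order
   FOR UPDATE;
UPDATE table_b SET some_col = some_col + 1 WHERE id_a = 42;
COMMIT;
```

As noted above, this only helps when your statements, rather than the planner, determine the order in which locks are taken.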

The performance hit you'll take 2-3 times a day when a statement has to
be replayed due to deadlock will hardly be noticed (although a statement
that takes 50 seconds will raise eyebrows if it has to run twice), but that
will only happen 2-3 times a day, and the solution I'm proposing won't
have any performance impact on the other 13,000,000 queries per day that
don't deadlock.

2-3 deadlocks per day is normal operation for a heavily contended table,
in my experience.

Given that we have no control over how Postgres performs delete and update operations, the only other possibility seems to be to partition this table by id_A (so that operations on different partitions can never deadlock each other). But that seems too extreme an option at this stage.


--
Bill Moran
I need your help to succeed:
http://gamesbybill.com



--
Sanjaya
