Re: Avoiding deadlocks when performing bulk update and delete operations - Mailing list pgsql-general

From Bill Moran
Subject Re: Avoiding deadlocks when performing bulk update and delete operations
Date
Msg-id 20141127064918.aaabdabda467bdb53f67fe43@potentialtech.com
In response to Re: Avoiding deadlocks when performing bulk update and delete operations  (Sanjaya Vithanagama <svithanagama@gmail.com>)
Responses Re: Avoiding deadlocks when performing bulk update and delete operations
List pgsql-general
On Thu, 27 Nov 2014 15:07:49 +1100
Sanjaya Vithanagama <svithanagama@gmail.com> wrote:

> On Wed, Nov 26, 2014 at 11:47 AM, Bill Moran <wmoran@potentialtech.com>
> wrote:
>
> > On Wed, 26 Nov 2014 10:41:56 +1100
> > Sanjaya Vithanagama <svithanagama@gmail.com> wrote:
> > >
> > > > * How frequently do deadlocks occur?
> > >
> > > We are seeing deadlocks about 2-3 times per day in the production server.
> > > To reproduce the problem easily we've written a simple Java class with
> > > multiple threads calling to the stored procedures running the above
> > queries
> > > inside a loop. This way we can easily recreate a scenario that happens in
> > > the production.
> >
> > Don't overcomplicate your solution. Adjust your code to detect the deadlock
> > and replay the transaction when it happens. At 2-3 deadlocks per day, it's
> > difficult to justify any other solution (as any other solution would be
> > more time-consuming to implement, AND would interfere with performance).
>
> When you say replay the transaction, I believe that means catching the
> exception inside the stored procedure? We've considered that option at one
> stage, but the problem is that we don't have enough context information
> at the stored procedure level where this deadlock occurs.

Why not catch it in the application calling the stored procedure?

I don't understand how you could not have enough context to run the command
you were just trying to run. Can you elaborate on what you mean by that?
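Catching the deadlock in the calling application and replaying, as suggested above, might look something like this minimal JDBC-flavoured sketch (the class and method names are illustrative, not from the thread; PostgreSQL reports the victim of a detected deadlock with SQLSTATE 40P01):

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class DeadlockRetry {
    // PostgreSQL's SQLSTATE for deadlock_detected.
    static final String DEADLOCK_SQLSTATE = "40P01";

    /**
     * Runs the given unit of work, replaying it if the database aborts
     * it as a deadlock victim. Any other failure propagates unchanged.
     */
    static <T> T withDeadlockRetry(Callable<T> work, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (SQLException e) {
                if (!DEADLOCK_SQLSTATE.equals(e.getSQLState())) {
                    throw e;      // not a deadlock: do not retry
                }
                last = e;         // deadlock victim: loop and replay the work
            }
        }
        throw last;               // gave up after maxAttempts deadlocks
    }
}
```

In practice the Callable would open a transaction, call the stored procedure, and commit, so the whole transaction is replayed; a short randomized sleep between attempts also reduces the chance of the same two sessions colliding again.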

> > I've worked with a number of write-heavy applications that experienced
> > deadlocks, some of them on the order of hundreds of deadlocks per day.
> > In some cases, you can adjust the queries to reduce the incidence of
> > deadlocks, or eliminate the possibility of deadlocks completely.  The
> > situation that you describe is not one of those cases, as the planner
> > can choose to lock rows in whatever order it thinks it most efficient
> > and you don't have direct control over that.
> >
> > The performance hit you'll take 2-3 times a day when a statement has to
> > be replayed due to deadlock will hardly be noticed (although a statement
> > that takes 50 seconds will raise eyebrows if it effectively runs twice), but that
> > will only happen 2-3 times a day, and the solution I'm proposing won't
> > have any performance impact on the other 13000000 queries per day that
> > don't deadlock.
> >
> > 2-3 deadlocks per day is normal operation for a heavily contended table,
> > in my experience.
>
> Given that we have no control over how Postgres performs delete and update
> operations, the only other possibility seems to be to partition this table
> by id_A (so that the individual tables will never be deadlocked). But that
> seems too extreme an option at this stage.

That would be overcomplicating the solution, and almost certainly won't work
anyway. If you're getting deadlocks, it's because two processes are trying
to modify the same rows. Even if you partition, those same rows will be on
the same partition, so you'll still deadlock.

--
Bill Moran
I need your help to succeed:
http://gamesbybill.com

