Re: how to avoid deadlock on masive update with multiples delete - Mailing list pgsql-performance

From: Anibal David Acosta
Subject: Re: how to avoid deadlock on masive update with multiples delete
Date:
Msg-id: 011201cda327$f99d73b0$ecd85b10$@devshock.com
In response to: Re: how to avoid deadlock on masive update with multiples delete (Claudio Freire <klaussfreire@gmail.com>)
List: pgsql-performance
Process 1 (massive update): update table A set column1=0, column2=0

Process 2 (multiple delete): perform delete_row(user_name, column1, column2)
from table A where user_name=YYY

The pgsql function delete_row deletes the row and does other business logic not
related to table A.
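
A minimal sketch of the table-lock idea Jeff suggests in the message quoted
below (assuming the table is literally named table_a): take an EXCLUSIVE lock
up front, so the per-user deletes simply wait for the bulk update to finish
instead of interleaving row locks with it.

    BEGIN;
    -- EXCLUSIVE (not ACCESS EXCLUSIVE) still allows plain SELECTs, but it
    -- conflicts with the ROW EXCLUSIVE lock that DELETE takes, so the
    -- delete sessions queue up behind this transaction.
    LOCK TABLE table_a IN EXCLUSIVE MODE;
    UPDATE table_a SET column1 = 0, column2 = 0;
    COMMIT;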



-----Original Message-----
From: Claudio Freire [mailto:klaussfreire@gmail.com]
Sent: Friday, October 05, 2012 10:27 AM
To: Jeff Janes
CC: Anibal David Acosta; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] how to avoid deadlock on masive update with multiples delete

On Thu, Oct 4, 2012 at 1:10 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> The bulk update could take an Exclusive (not Access Exclusive) lock.
> Or the delete could perhaps be arranged to delete the records in ctid
> order (although that might still deadlock).  Or you could just repeat
> the failed transaction.

How do you make pg update/delete records, in bulk, in some particular order?

(i.e., without issuing separate queries for each record)
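
One possibility, not something settled in the thread, is to take the row locks
in a consistent order first and only then delete. A minimal sketch, assuming
table_a has some stable ordering key (the id column here is hypothetical); as
Jeff notes, ordering may still not remove every deadlock against the
full-table update:

    BEGIN;
    -- Lock the target rows in a fixed order (here by a hypothetical id
    -- column), so concurrent sessions touching overlapping rows acquire
    -- their row locks in the same order instead of conflicting orders.
    SELECT id
      FROM table_a
     WHERE user_name = 'YYY'
     ORDER BY id
       FOR UPDATE;
    -- The rows are already locked by this transaction, so the delete itself
    -- cannot deadlock with another session doing the same ordered locking.
    DELETE FROM table_a WHERE user_name = 'YYY';
    COMMIT;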


