Re: database-level lockdown - Mailing list pgsql-general

From Filipe Pina
Subject Re: database-level lockdown
Date
Msg-id 271401C5-E8DD-4B27-8C27-7FB0DB9617C2@impactzero.pt
In response to Re: database-level lockdown  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: database-level lockdown  (Filipe Pina <filipe.pina@impactzero.pt>)
List pgsql-general
Exactly, that’s why there’s a limit on the number of retries. On the last try I wanted something like a full lockdown to make sure the transaction will not fail due to a serialization failure (if no other processes are touching the database, that can’t happen).

So if two transactions were retrying over and over, the first one to reach max_retries would activate that “global lock”, making the other one wait, and then the second one would also be able to successfully commit...
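Something along these lines is what I have in mind (a rough sketch only, assuming plain psycopg2 rather than the actual Django stack; GLOBAL_LOCK_KEY and run_with_retries are made-up names, and the “lockdown” only affects transactions that cooperate by taking the shared advisory lock):

import psycopg2
import psycopg2.extensions

MAX_RETRIES = 5       # same limit as in my earlier mail
GLOBAL_LOCK_KEY = 42  # made-up advisory-lock key all processes agree on

def run_with_retries(dsn, work):
    # Run work(cur) in a SERIALIZABLE transaction, retrying on
    # serialization failures (SQLSTATE 40001).  On the last attempt,
    # take the advisory lock exclusively so that no cooperating
    # transaction runs concurrently and no further 40001 can occur.
    for attempt in range(1, MAX_RETRIES + 1):
        conn = psycopg2.connect(dsn)
        conn.set_isolation_level(
            psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
        try:
            with conn:  # commits on success, rolls back on error
                with conn.cursor() as cur:
                    if attempt == MAX_RETRIES:
                        # “global lock”: waits until every cooperating
                        # transaction holding the shared lock finishes;
                        # released automatically at commit/rollback.
                        cur.execute("SELECT pg_advisory_xact_lock(%s)",
                                    (GLOBAL_LOCK_KEY,))
                    else:
                        # normal attempts take the shared form, so they
                        # block while a last-attempt transaction holds
                        # the exclusive lock
                        cur.execute(
                            "SELECT pg_advisory_xact_lock_shared(%s)",
                            (GLOBAL_LOCK_KEY,))
                    return work(cur)
        except psycopg2.OperationalError as e:
            if e.pgcode != '40001' or attempt == MAX_RETRIES:
                raise  # non-restartable error, or out of retries
        finally:
            conn.close()

Transactions that don’t take the shared lock are unaffected by the exclusive lock, which is exactly the limitation being discussed here.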

> On 11/06/2015, at 20:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Filipe Pina <filipe.pina@impactzero.pt> writes:
>> It will try 5 times to execute each instruction (in case of
>> OperationError) and in the last one it will raise the last error it
>> received, aborting.
>
>> Now my problem is that aborting for the last try (on a restartable
>> error - OperationalError code 40001) is not an option... It simply
>> needs to get through, locking whatever other processes and queries it
>> needs.
>
> I think you need to reconsider your objectives.  What if two or more
> transactions are repeatedly failing and retrying, perhaps because they
> conflict?  They can't all forcibly win.
>
>             regards, tom lane


