Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From Fabien COELHO
Subject Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date
Msg-id alpine.DEB.2.20.1707031430380.15247@lancre
In response to Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Marina Polyakova <m.polyakova@postgrespro.ru>)
Responses Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
List pgsql-hackers
>>>> The number of retries and maybe failures should be counted, maybe with
>>>> some adjustable maximum, as suggested.
>>> 
>>> If we fix the maximum number of attempts, the maximum number of failures
>>> for one script execution is bounded above by
>>> (number_of_transactions_in_script * maximum_number_of_attempts). Do you
>>> think we should add an option to the program to limit this number further?
>> 
>> Probably not. I think that there should be a configurable maximum of
>> retries on a transaction, which may be 0 by default if we want to be
>> upward compatible with the current behavior, or maybe something else.
>
> I propose the option --max-attempts-number=NUM, where NUM cannot be less
> than 1. I propose it because I think that, for example,
> --max-attempts-number=100 reads better than --max-retries-number=99. And
> maybe it's better to set its default value to 1 too, because retrying shell
> commands can produce new errors.
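
For concreteness, the bound above is just multiplicative: a script containing
10 transactions run under --max-attempts-number=100 could fail at most
10 * 100 = 1000 times per script execution. A hypothetical invocation (the
option is only proposed here, it does not exist in pgbench yet) could be:

    pgbench -c 8 -j 4 -T 60 --max-attempts-number=100 bench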

Personally, I like counting retries because it also counts the number of
times the transaction actually failed for some reason. But this is a
marginal preference, and one can be switched to the other easily.
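
To make the attempts/retries off-by-one concrete, here is a minimal C sketch
of such a bounded retry loop (not pgbench source; try_transaction and the
constants are stand-ins for illustration only):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for executing one transaction of the script;
     * fails about 25% of the time to mimic serialization errors. */
    static bool
    try_transaction(void)
    {
        return rand() % 4 != 0;
    }

    int
    main(void)
    {
        int  max_attempts = 100;    /* the proposed --max-attempts-number */
        int  attempts = 0;
        bool ok = false;

        while (!ok && attempts < max_attempts)
        {
            attempts++;
            ok = try_transaction();
        }

        /* attempts and retries always differ by exactly one, which is
         * why --max-attempts-number=100 equals --max-retries-number=99 */
        printf("%s after %d attempt(s), i.e. %d retr%s\n",
               ok ? "committed" : "gave up",
               attempts, attempts - 1, attempts - 1 == 1 ? "y" : "ies");
        return 0;
    }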

-- 
Fabien.


