Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From Marina Polyakova
Subject Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Msg-id 01df6ca86c78e2c80b4e4d021c99d53a@postgrespro.ru
In response to Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Fabien COELHO <coelho@cri.ensmp.fr>)
Responses Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
List pgsql-hackers
> The current error handling is either "close connection" or maybe in
> some cases even "exit". If this is changed, then the client may
> continue execution in some unforeseen state and behave unexpectedly.
> We'll see.

Thanks, now I understand this.

>>> ISTM that the retry implementation should be implemented somehow in
>>> the automaton, restarting the same script from the beginning.
>> 
>> If there are several transactions in this script - don't you think 
>> that we should restart only the failed transaction?..
> 
> Only on some transaction failures, based on their status. My point is that
> the retry process must be implemented clearly with a new state in the
> client automaton. Exactly when the transition to this new state must
> be taken is another issue.

On this point, I agree with you that it should be done this way.
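To make the idea concrete, here is a minimal sketch of such a retry state. The state, struct, and field names below are mine for illustration, not pgbench's actual automaton:

```c
#include <stdbool.h>

/* A minimal sketch of a retry state in the client automaton.  The state
 * and field names here are illustrative, not pgbench's actual ones. */
typedef enum
{
    CSTATE_START_TX,   /* (re)start the failed transaction */
    CSTATE_RETRY,      /* an error occurred; decide whether to retry */
    CSTATE_END_TX      /* give up and count a failure */
} ClientState;

typedef struct
{
    int attempts;      /* attempts already made for this transaction */
    int max_attempts;  /* configurable limit, >= 1 */
} Client;

/* Transition taken from CSTATE_RETRY: restart the same transaction
 * while attempts remain, otherwise finish and report a failure. */
static ClientState
retry_transition(Client *c)
{
    if (c->attempts < c->max_attempts)
    {
        c->attempts++;
        return CSTATE_START_TX;
    }
    return CSTATE_END_TX;
}
```

The point is that only the failed transaction is restarted, and exactly when the automaton enters CSTATE_RETRY (i.e. which error statuses qualify) stays a separate decision.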

>>> The number of retries and maybe failures should be counted, maybe 
>>> with
>>> some adjustable maximum, as suggested.
>> 
>> If we fix the maximum number of attempts, the maximum number of 
>> failures for one script execution will be bounded above by 
>> (number_of_transactions_in_script * maximum_number_of_attempts). Do 
>> you think we should add a program option to limit this number 
>> further?
> 
> Probably not. I think that there should be a configurable maximum of
> retries on a transaction, which may be 0 by default if we want to be
> upward compatible with the current behavior, or maybe something else.

I propose the option --max-attempts-number=NUM, where NUM cannot be less 
than 1. I prefer counting attempts rather than retries because, for 
example, --max-attempts-number=100 reads better than 
--max-retries-number=99. And maybe its default value should be 1, 
because retrying shell commands can produce new errors..
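To illustrate the proposed semantics (NUM is the total number of attempts, so NUM = 1 means no retries), here is a small sketch; try_tx and the toy helper are stand-ins I made up for running the script's transaction once:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of --max-attempts-number=NUM semantics: NUM is the total
 * number of attempts, so NUM = 1 reproduces the no-retry behaviour.
 * try_tx is a stand-in for running the transaction once; it returns
 * true on commit, false on a serialization/deadlock error. */
static bool
run_transaction(bool (*try_tx)(void), int max_attempts, int *attempts_used)
{
    assert(max_attempts >= 1);   /* NUM cannot be less than 1 */

    for (int i = 1; i <= max_attempts; i++)
    {
        *attempts_used = i;
        if (try_tx())
            return true;         /* committed */
    }
    return false;                /* counted as one failure */
}

/* Toy transaction that succeeds only on the third try (illustration). */
static int toy_calls = 0;
static bool
toy_try_tx(void)
{
    return ++toy_calls >= 3;
}
```

With this shape, the number of failures per script execution is indeed bounded by number_of_transactions_in_script * NUM, since each transaction contributes at most one failure after its attempts are exhausted.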

>>> In doLog, added columns should be at the end of the format.
>> 
>> I inserted them earlier because these columns are not optional. Do 
>> you think they should be optional?
> 
> I think that new non-optional columns should be placed at the end of
> the existing non-optional columns, so that existing scripts which
> process the output do not need to be updated.

Thanks, I agree with you :)
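For the record, the formatting change is tiny. A sketch (the column set here is illustrative, not pgbench's exact log format):

```c
#include <stdio.h>

/* Sketch of appending a new counter to a per-transaction log line.
 * The columns shown are illustrative, not pgbench's exact format: the
 * point is only that existing columns keep their positions and the
 * new "retries" column goes last, so scripts that parse the log by
 * column index keep working unchanged. */
static int
format_log_line(char *buf, size_t buflen,
                int client_id, long tx_no, long latency_us, int retries)
{
    return snprintf(buf, buflen, "%d %ld %ld %d",
                    client_id, tx_no, latency_us, retries);
}
```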

>>> I'm not sure that there should be a new option to report failures,
>>> the information when relevant should be integrated in a clean format
>>> into the existing reports... Maybe the "per command latency"
>>> report/option should be renamed if it becomes more general.
>> 
>> I have tried not to change other parts of the program as much as 
>> possible. But if you think it would be more useful to change the 
>> option, I'll do it.
> 
> I think that the option should change if its naming becomes less
> relevant, which is to be determined. AFAICS, ISTM that new measures
> should be added to the various existing reports unconditionally (i.e.
> without a new option), so maybe no new option would be needed.

Thanks! I hadn't thought about it that way..

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


