Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From Fabien COELHO
Subject Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date
Msg-id alpine.DEB.2.20.1803292134380.16472@lancre
In response to Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Teodor Sigaev <teodor@sigaev.ru>)
Responses Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
List pgsql-hackers
> The conception of the max-retry option seems strange to me. If the number
> of retries reaches the max-retry option, we just increment the counter of
> failed transactions and try again (possibly with different random numbers).
> In the end we should distinguish the number of errored transactions from
> failed transactions; to find this difference the documentation suggests
> rerunning pgbench with debugging on.
>
> Maybe I didn't catch the idea, but it seems to me max-tries should be
> removed. On a transaction serialization or deadlock error, pgbench should
> increment the counter of failed transactions, reset the conditional stack,
> variables, etc. (but not the random generator) and then start a new
> transaction from the first line of the script.

ISTM that the idea is that the client application should give up at some 
point and report an error to the end user, a kind of "timeout" on trying, 
and that max-retry would implement this logic of giving up: the 
transaction which was intended, represented by a given initial random 
generator state, could not be committed even after some number of 
attempts.
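
To make that concrete, here is a minimal sketch of the give-up logic I 
have in mind, with made-up names (RandomState, run_script_once, 
error_is_retryable) that are illustrative only, not actual pgbench 
symbols: the initial random generator state is saved so that each retry 
replays the same intended transaction, and after max_tries attempts the 
transaction is counted as failed instead of being retried forever.

  #include <stdbool.h>

  typedef struct { unsigned short xseed[3]; } RandomState;
  typedef struct { long committed, failed, retried; } Stats;

  /* stand-ins for executing the script and classifying the SQL error */
  static bool run_script_once(RandomState *rs) { (void) rs; return true; }
  static bool error_is_retryable(void) { return false; } /* serialization/deadlock */

  static void
  run_transaction(RandomState *rs, int max_tries, Stats *stats)
  {
      RandomState saved = *rs;   /* the state that defines "this" transaction */

      for (int attempt = 1;; attempt++)
      {
          *rs = saved;           /* replay the same random draws on each retry */
          if (run_script_once(rs))
          {
              stats->committed++;
              return;
          }
          if (!error_is_retryable() || attempt >= max_tries)
          {
              stats->failed++;   /* give up and report the transaction as failed */
              return;
          }
          stats->retried++;
      }
  }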

Maybe the max retry should be expressed in time rather than in number of 
attempts, or both approaches could be implemented? But there is a logic 
of retrying the same transaction (try again what the client wanted) vs 
retrying something different (another client need is served).
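
A hedged sketch, again with hypothetical names (max_tries, max_seconds 
are illustrative parameters, not existing options), of how both limits 
could coexist: keep retrying the same transaction only while neither the 
attempt count nor an elapsed-time budget has been exceeded.

  #include <stdbool.h>
  #include <time.h>

  static bool
  keep_retrying(int attempts, int max_tries,
                const struct timespec *start, double max_seconds)
  {
      struct timespec now;
      double elapsed;

      if (max_tries > 0 && attempts >= max_tries)
          return false;              /* give up: too many attempts */

      clock_gettime(CLOCK_MONOTONIC, &now);
      elapsed = (now.tv_sec - start->tv_sec)
              + (now.tv_nsec - start->tv_nsec) / 1e9;
      if (max_seconds > 0 && elapsed >= max_seconds)
          return false;              /* give up: time budget exhausted */

      return true;                   /* keep retrying the same transaction */
  }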

-- 
Fabien.

