> The conception of the max-retry option seems strange to me. If the number
> of retries reaches max-retry, we just increment the counter of failed
> transactions and try again (possibly with different random numbers). In
> the end we have to distinguish the number of error transactions from the
> number of failed transactions, and to find this difference the
> documentation suggests rerunning pgbench with debugging on.
>
> Maybe I didn't catch the idea, but it seems to me max-tries should be
> removed. On a transaction serialization or deadlock error, pgbench
> should increment the counter of failed transactions, reset the
> conditional stack, variables, etc. (but not the random generator) and
> then start a new transaction from the first line of the script.
ISTM that the idea is that the client application should give up at some
point and report an error to the end user, kind of a "timeout" on trying,
and that max-retry would implement this logic of giving up: the
transaction which was intended, represented by a given initial random
generator state, could not be committed even after some number of
attempts.
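
To make that concrete, here is a minimal sketch of the giving-up logic,
not actual pgbench code: rng_t, run_transaction() and its failure
simulation are invented stand-ins. The RNG state captured at transaction
start is restored before every attempt, so each retry replays the same
intended transaction; after max_tries the client gives up and counts a
failure.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t state; } rng_t;

enum tx_result { TX_OK, TX_SERIALIZATION_FAILURE, TX_DEADLOCK };

/* Hypothetical stand-in: run one script iteration, drawing its random
 * parameters from *rng.  Failures simulate concurrent interference and
 * do not depend on the parameters drawn. */
static enum tx_result
run_transaction(rng_t *rng)
{
    static unsigned attempt = 0;    /* simulated external interference */

    rng->state = rng->state * 6364136223846793005ULL
                 + 1442695040888963407ULL;   /* draw parameters */
    return (++attempt % 3 == 0) ? TX_SERIALIZATION_FAILURE : TX_OK;
}

/* Retry the *same* intended transaction: restore the RNG state captured
 * at transaction start before every attempt, so each retry draws the
 * same random parameters; after max_tries, give up and count a failure. */
static bool
run_with_retries(rng_t *rng, int max_tries, long *failed_count)
{
    rng_t initial = *rng;           /* the intended transaction */

    for (int tries = 0; tries < max_tries; tries++)
    {
        *rng = initial;             /* replay the same parameters */
        if (run_transaction(rng) == TX_OK)
            return true;
        /* serialization failure or deadlock: try again */
    }
    (*failed_count)++;              /* gave up: report to the user */
    return false;
}

int
main(void)
{
    rng_t rng = { 42 };
    long  failed = 0;

    for (int i = 0; i < 10; i++)
        run_with_retries(&rng, 5, &failed);
    printf("failed transactions: %ld\n", failed);
    return 0;
}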
Maybe the max retry should rather be expressed in time rather than in a
number of attempts, or both approaches could be implemented. But there is
a distinction between retrying the same thing (try again what the client
wanted) vs retrying something different (another client need is served).
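
For the time-based variant, a sketch along the same lines (reusing the
rng_t and run_transaction() stand-ins from the previous snippet;
CLOCK_MONOTONIC and clock_gettime come from POSIX <time.h>) could give up
once a time budget is exhausted:

#include <time.h>

static double
now_seconds(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Give up on retrying the same transaction once a time budget is
 * exhausted, rather than after a fixed number of attempts. */
static bool
run_with_deadline(rng_t *rng, double max_seconds, long *failed_count)
{
    rng_t  initial = *rng;
    double start = now_seconds();

    do
    {
        *rng = initial;             /* replay the same parameters */
        if (run_transaction(rng) == TX_OK)
            return true;
    } while (now_seconds() - start < max_seconds);

    (*failed_count)++;
    return false;
}

Both limits could coexist, stopping on whichever of the deadline or
max_tries is reached first.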
--
Fabien.