Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From Tatsuo Ishii
Subject Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date
Msg-id 20210707.161123.574070522694073225.t-ishii@gmail.com
In response to Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Yugo NAGATA <nagata@sraoss.co.jp>)
Responses Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors  (Yugo NAGATA <nagata@sraoss.co.jp>)
List pgsql-hackers
> Indeed, as Ishii-san pointed out, some users might not want to terminate
> retrying transactions due to -T. However, the actual negative effect is only
> printing the number of failed transactions. The other results that users want
> to know, such as tps, are almost unaffected because they are measured for
> transactions processed successfully. Actually, the percentage of failed
> transactions is very small, only 0.347%.

Well, "that's very little, let's ignore it" is not technically a right
direction IMO.

> In the existing behaviour, running transactions are never terminated due to
> the -T option. However, ISTM that this is based on the assumption
> that the latency of each transaction is small and that the benchmark can
> therefore finish soon after the time expires. On the other hand, when transactions
> can be retried unlimitedly, a transaction may take much longer than expected, and we
> cannot guarantee that it would finish successfully within the limited time. Therefore,
> terminating the benchmark by giving up retrying the transaction after the time
> expires seems reasonable under unlimited retries.

That's not necessarily true in practice. By the time -T is about to
expire, the transactions have all finished in finite time, as you can see
from the result I showed. So it's reasonable to expect that the very last
cycle of the benchmark will finish in finite time as well.

Of course, if a benchmark cycle takes infinite time, this will be a
problem. However, the same thing can be said of non-retry
benchmarks. Theoretically, it is possible that *one* benchmark cycle
takes forever. In that case the only solution is to hit ^C
to terminate pgbench. Why can't we make the same assumption for the
--max-tries=0 case?
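
To make the scenario concrete, here is a hypothetical invocation (the
database name and client counts are placeholders, not taken from this
thread) of the kind of run we are discussing: the patch's proposed
--max-tries=0 (unlimited retries) combined with -T, under serializable
isolation so that serialization failures actually occur and get retried:

    PGOPTIONS='-c default_transaction_isolation=serializable' \
        pgbench -c 10 -j 4 -T 60 --max-tries=0 bench_db

If one benchmark cycle really never completed, hitting ^C would be the
only way out, exactly as with an ordinary non-retry run.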

> In the sense that we don't
> terminate running transactions forcibly, this doesn't change the existing behaviour.

This statement seems to depend on your personal assumption.

I still don't understand why you think that the non-zero --max-tries case
will *certainly* finish in finite time whereas the --max-tries=0 case will
not.
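
FWIW, for anyone who wants to reproduce the situation, a minimal
hypothetical custom script (not from this thread) that provokes deadlocks,
and therefore retries, could look like this; with a small aid range and
several clients, two sessions regularly update the same pair of rows in
opposite order:

    \set a random(1, 100)
    \set b random(1, 100)
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :a;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :b;
    END;

Run it, for example, as "pgbench -f conflict.sql -c 10 -T 60 --max-tries=0"
(conflict.sql being a placeholder file name). Whether such a run is
guaranteed to finish before or shortly after -T expires is exactly the
question above.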

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


