Re: [HACKERS] WIP aPatch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From Tatsuo Ishii
Subject Re: [HACKERS] WIP aPatch: Pgbench Serialization and deadlock errors
Date 2021-07-07
Msg-id 20210707.215016.2091144688016450280.t-ishii@gmail.com
In response to Re: [HACKERS] WIP aPatch: Pgbench Serialization and deadlock errors  (Yugo NAGATA <nagata@sraoss.co.jp>)
Responses Re: [HACKERS] WIP aPatch: Pgbench Serialization and deadlock errors  (Yugo NAGATA <nagata@sraoss.co.jp>)
List pgsql-hackers
>> Well, "that's very little, let's ignore it" is not technically a right
>> direction IMO.
> 
> Hmmm, it seems to me these failures are ignorable because, with regard to
> failures due to -T, they occur only in the last transaction of each client
> and do not affect results such as TPS and latency of successfully processed
> transactions. (Although I am not sure in what sense you use the word
> "technically"...)

"My application button does not respond once in 100 times. It's just
1% error rate. You should ignore it." I would say this attitude is not
technically correct.

> However, maybe I am missing something. Could you please tell me what you
> think the actual harm to users from failures due to -D is?

I don't know why you are referring to -D.

>> That's necessarily true in practice. By the time -T is about to
>> expire, transactions have all finished in finite time, as you can see
>> from the result I showed. So it's reasonable that the very last cycle
>> of the benchmark will finish in finite time as well.
> 
> Your script may finish in finite time, but others may not.

That's why I said "in practice". In other words, in most cases the
scenario will finish in finite time.

> Indeed, it is possible that the execution of a query takes a long or
> infinite time. However, its cause would be a problematic query in the
> custom script or some other problem on the server side. These are not
> problems of pgbench, and pgbench itself cannot control them. On the
> other hand, the unlimited number of tries is a behaviour specified by a
> pgbench option, so I think pgbench itself should internally avoid
> problems caused by its own behaviour. That is, if max-tries=0 could
> cause the benchmark to run infinitely, or much longer than the user
> expected, due to too many retries, I think pgbench should avoid it.

I would say it's the user's responsibility to avoid an infinitely
running benchmark. Remember, pgbench is a tool for serious users, not
for novice users.
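
Just to make the case concrete (the script, the file name and the
numbers below are made up for illustration; --max-tries is the option
proposed by this patch), the run in question looks something like a
highly contended custom script, say contended.sql:

    \set bid random(1, :scale)
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    UPDATE pgbench_branches SET bbalance = bbalance + 1 WHERE bid = :bid;
    END;

run with unlimited retries and only -T to bound it:

    pgbench -c 16 -j 4 -T 60 --max-tries=0 -f contended.sql

With many clients updating the same few pgbench_branches rows,
serialization failures are frequent, and with unlimited retries there
is no a-priori bound on how long the last transaction of each client
keeps being retried after the 60 seconds are up.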

Or, we should terminate the last cycle of the benchmark when -T
expires, regardless of whether it is retrying or not. This would make
pgbench behave much more consistently.
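
To be concrete about what I mean, here is a minimal, self-contained
sketch of that behaviour (plain C written for this mail, not pgbench
source; try_transaction() and the other names are made up for
illustration): the retry loop simply does not start another attempt
once the -T deadline has passed, even when the number of tries is
unlimited.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the real transaction machinery: pretend every
     * attempt ends in a retryable error (serialization failure or
     * deadlock). */
    static bool
    try_transaction(void)
    {
        return false;
    }

    /* Retry a failed transaction, but never start a new attempt after
     * the -T deadline has passed; max_tries == 0 means unlimited. */
    static bool
    run_with_retries(time_t deadline, int max_tries)
    {
        int     tries = 0;

        for (;;)
        {
            tries++;
            if (try_transaction())
                return true;    /* committed */
            if (max_tries != 0 && tries >= max_tries)
                return false;   /* --max-tries exhausted */
            if (time(NULL) >= deadline)
                return false;   /* -T expired: give up even mid-retry */
        }
    }

    int
    main(void)
    {
        /* as if -T 2 and --max-tries=0 were given */
        bool    ok = run_with_retries(time(NULL) + 2, 0);

        printf("last transaction %s\n",
               ok ? "committed" : "terminated by -T");
        return 0;
    }

The point is only the order of the checks: with such a rule the total
run time stays close to -T no matter how unlucky the retries are.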

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


