Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors - Mailing list pgsql-hackers

From: Yugo NAGATA
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Msg-id: 20210707224642.4e8265865fda4e118c4de5ee@sraoss.co.jp
In response to: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Tatsuo Ishii <ishii@sraoss.co.jp>)
Responses: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Fabien COELHO <coelho@cri.ensmp.fr>)
List: pgsql-hackers
On Wed, 07 Jul 2021 21:50:16 +0900 (JST)
Tatsuo Ishii <ishii@sraoss.co.jp> wrote:

> >> Well, "that's very little, let's ignore it" is not technically the right
> >> direction IMO.
> > 
> > Hmmm, it seems to me these failures are ignorable because, with regard to failures
> > due to -T, they occur only in the last transaction of each client and do not affect
> > results such as the TPS and latency of successfully processed transactions.
> > (Although I am not sure in what sense you use the word "technically"...)
> 
> "My application button does not respond once in 100 times. It's just
> 1% error rate. You should ignore it." I would say this attitude is not
> technically correct.

I cannot understand what you want to say. How can reporting the number of transactions
that fail intentionally be treated the same as the error rate of your
application's button?

> > However, maybe I am missing something. Could you please tell me what you think
> > the actual harm to users from failures due to -D is?
> 
> I don't know why you are referring to -D.

Sorry, that is just a typo, as you can imagine.
I am asking what you think the actual harm to users is when retrying is
terminated by the -T option.

> >> That's necessarily true in practice. By the time when -T is about to
> >> expire, transactions are all finished in finite time as you can see
> >> the result I showed. So it's reasonable that the very last cycle of
> >> the benchmark will finish in finite time as well.
> > 
> > Your script may finish in finite time, but others may not.
> 
> That's why I said "practically". In other words "in most cases the
> scenario will finish in finite time".

Sure.

> > Indeed, it is possible that the execution of a query takes a long or infinite
> > time. However, its cause would be a problematic query in the custom script
> > or some other problem on the server side. These are not problems of
> > pgbench, and pgbench itself cannot control them. On the other hand, an
> > unlimited number of tries is a behaviour specified by the pgbench option,
> > so I think pgbench itself should internally avoid problems caused by its
> > own behaviour. That is, if max-tries=0 could cause an infinite or much longer
> > benchmark time than the user expected due to too many retries, I think
> > pgbench should avoid it.
> 
> I would say it is the user's responsibility to avoid an infinitely running
> benchmark. Remember, pgbench is a tool for serious users, not for
> novice users.

Of course, users themselves should be careful about problematic scripts, but it
would be better if pgbench itself avoided such problems beforehand when it can.
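To make the point concrete, here is a minimal illustrative model (not pgbench
source code; the function name and return values are hypothetical) of the
max-tries semantics being discussed: with max_tries == 0 meaning unlimited
retries, a transaction that always fails with a serialization or deadlock
error would be retried forever, so the bound has to come from somewhere else.

```python
def run_with_retries(txn, max_tries):
    """Sketch of the proposed --max-tries semantics: retry a failing
    transaction until it commits or the retry limit is exhausted.
    max_tries == 0 means unlimited retries, which can loop forever
    if txn never succeeds."""
    tries = 0
    while True:
        tries += 1
        if txn():                       # transaction committed
            return ("success", tries)
        if max_tries != 0 and tries >= max_tries:
            return ("failure", tries)   # retry limit exhausted

# A transaction that fails twice, then succeeds:
attempts = iter([False, False, True])
print(run_with_retries(lambda: next(attempts), max_tries=5))  # ('success', 3)
```

With a problematic script, nothing in this loop bounds the benchmark duration
when max_tries is 0, which is the hazard raised above.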
 
> Or, we should terminate the last cycle of the benchmark when -T expires,
> regardless of whether it is retrying or not. This would make pgbench behave
> much more consistently.

Hmmm, indeed this might make the behaviour a bit more consistent, but I am not
sure such a behavioural change benefits users.
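For comparison, the behaviour proposed above could be sketched as a deadline
check inside the retry loop, so that even an in-progress retry cycle is
abandoned once the -T duration has elapsed. Again, this is an illustrative
sketch with hypothetical names, not pgbench internals.

```python
import time

def run_with_retries_until(txn, max_tries, deadline):
    """Sketch of the proposed behaviour: abandon a transaction, even one
    being retried, once the -T deadline (a time.monotonic() value) has
    passed, so the benchmark always ends on time regardless of
    max_tries == 0 (unlimited retries)."""
    tries = 0
    while True:
        if time.monotonic() >= deadline:
            return ("terminated", tries)   # -T expired: stop, even mid-retry
        tries += 1
        if txn():
            return ("success", tries)
        if max_tries != 0 and tries >= max_tries:
            return ("failure", tries)

# With an already-expired deadline, even unlimited retries return promptly:
print(run_with_retries_until(lambda: False, 0, time.monotonic() - 1))
# ('terminated', 0)
```

Transactions cut off this way would presumably need to be counted separately
from serialization/deadlock failures, which is part of the reporting question
in this thread.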

Regards,
Yugo Nagata

-- 
Yugo NAGATA <nagata@sraoss.co.jp>


