
From: Marina Polyakova
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Msg-id: 92869f967d690848c456ac44b371482c@postgrespro.ru
In response to: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors (Fabien COELHO <coelho@cri.ensmp.fr>)
List: pgsql-hackers
On 15-08-2018 11:50, Fabien COELHO wrote:
> Hello Marina,

Hello!

>> v10-0004-Pgbench-errors-and-serialization-deadlock-retrie.patch
>> - the main patch for handling client errors and repetition of 
>> transactions with serialization/deadlock failures (see the detailed 
>> description in the file).
> 
> Patch applies cleanly.
> 
> It allows retrying a script (considered as a transaction) on
> serializable and deadlock errors, which is a very interesting
> extension but also impacts pgbench significantly.
> 
> I'm waiting for the feature to be right before checking the documentation
> and tests in full. There are still some issues to resolve before doing
> that.
> 
> Anyway, tests look reasonable. Taking advantage of transaction control
> from PL/pgSQL is a good use of this new feature.

:-)

> A few comments about the doc.
> 
> According to the documentation, the feature is triggered by --max-tries
> and --latency-limit. I disagree with the latter, because it means that
> having a latency limit without retrying is not supported anymore.
> 
> Maybe you can allow an "unlimited" max-tries, say with a special value of
> zero, and the latency limit does its job if set, over all tries.
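
Something like this, perhaps (a hypothetical invocation, if max-tries = 0
meant "unlimited")?

pgbench --max-tries=0 --latency-limit=100 ...

Here a failed transaction would be retried until it either succeeds or
exceeds the 100 ms latency limit.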
> 
> Doc: "error in meta commands" -> "meta command errors", for homogeneity 
> with
> other cases?
> ...
> Doc: "never occur.." -> "never occur", or eventually "...".
> 
> Doc: "Directly client errors" -> "Direct client errors".
> ...
> inTransactionBlock: I disagree with any function other than doCustom
> changing the client state, because it makes understanding the state
> machine harder. There is already one exception to that (threadRun) that
> I wish to remove. All state changes must be performed explicitly in
> doCustom.
> ...
> PQexec("ROOLBACK"): you are inserting a synchronous command, for which 
> the
> thread will have to wait for the result, in a middle of a framework 
> which
> takes great care to use only asynchronous stuff so that one thread can
> manage several clients efficiently. You cannot call PQexec there.
> From where I sit, I'd suggest to sendQuery("ROLLBACK"), then switch to
> a new state CSTATE_WAIT_ABORT_RESULT which would be similar to
> CSTATE_WAIT_RESULT, but on success would skip to RETRY or ABORT instead
> of proceeding to the next command.
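
Something like this, perhaps? A minimal self-contained sketch of that
transition (the types and helper functions here are simplified stand-ins,
not the patch code):

#include <stdbool.h>
#include <stdio.h>

typedef enum
{
    CSTATE_FAILURE,             /* an error was detected during a command */
    CSTATE_WAIT_ABORT_RESULT,   /* ROLLBACK sent asynchronously, waiting */
    CSTATE_RETRY,
    CSTATE_ABORTED
} ClientState;

/* stand-ins for the asynchronous send, the poll and the retry decision */
static bool send_rollback(void) { return true; }
static bool rollback_result_ready(void) { return true; }
static bool doRetry(void) { return true; }

static ClientState
advance(ClientState state)
{
    switch (state)
    {
        case CSTATE_FAILURE:
            /* like sendQuery("ROLLBACK"): no blocking PQexec here */
            return send_rollback() ? CSTATE_WAIT_ABORT_RESULT : CSTATE_ABORTED;
        case CSTATE_WAIT_ABORT_RESULT:
            /* like CSTATE_WAIT_RESULT, but skip to RETRY or ABORTED */
            if (!rollback_result_ready())
                return CSTATE_WAIT_ABORT_RESULT;
            return doRetry() ? CSTATE_RETRY : CSTATE_ABORTED;
        default:
            return state;
    }
}

int
main(void)
{
    ClientState state = CSTATE_FAILURE;

    while (state != CSTATE_RETRY && state != CSTATE_ABORTED)
        state = advance(state);
    printf("final state: %s\n", state == CSTATE_RETRY ? "RETRY" : "ABORTED");
    return 0;
}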
> ...
>   memcpy(&(st->retry_state.random_state), &(st->random_state),
> sizeof(RandomState));
> 
> Is there a problem with "st->retry_state.random_state = 
> st->random_state;"
> instead of memcpy? ISTM that simple assignments work in C. Idem in the 
> reverse
> copy under RETRY.

Thank you, I'll fix this.
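
Indeed, a simple assignment copies the whole struct, including any array
inside it. A tiny self-contained example (with simplified stand-ins for
the pgbench structs):

#include <stdio.h>

typedef struct RandomState
{
    unsigned short xseed[3];
} RandomState;

typedef struct RetryState
{
    RandomState random_state;
} RetryState;

int
main(void)
{
    RandomState random_state = {{12, 34, 56}};
    RetryState  retry_state;

    retry_state.random_state = random_state;    /* save before the run */
    random_state = retry_state.random_state;    /* restore under RETRY */

    printf("%hu %hu %hu\n",
           random_state.xseed[0], random_state.xseed[1],
           random_state.xseed[2]);
    return 0;
}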

> Detailed -r report. I understand from the doc that the retry number on
> the detailed per-statement report is to identify at what point errors
> occur? Probably this is more or less always at the same point on a given
> script, so that the most interesting feature is to report the number of
> retries at the script level.

This may depend on various factors; for example:

transaction type: pgbench_test_serialization.sql
scaling factor: 1
query mode: simple
number of clients: 2
number of threads: 1
duration: 10 s
number of transactions actually processed: 266
number of errors: 10 (3.623%)
number of serialization errors: 10 (3.623%)
number of retried: 75 (27.174%)
number of retries: 75
maximum number of tries: 2
latency average = 72.734 ms (including errors)
tps = 26.501162 (including connections establishing)
tps = 26.515082 (excluding connections establishing)
statement latencies in milliseconds, errors and retries:
          0.012           0           0  \set delta random(-5000, 5000)
          0.001           0           0  \set x1 random(1, 100000)
          0.001           0           0  \set x3 random(1, 2)
          0.001           0           0  \set x2 random(1, 1)
         19.837           0           0  UPDATE xy1 SET y = y + :delta WHERE x = :x1;
         21.239           5          36  UPDATE xy3 SET y = y + :delta WHERE x = :x3;
         21.360           5          39  UPDATE xy2 SET y = y + :delta WHERE x = :x2;

And you can always get the number of retries at the script level from 
the main report (if only one script is used) or from the report for each 
script (if multiple scripts are used).

> I'm still in favor of asserting that the sql connection is idle (no tx
> in progress) at the beginning and/or end of a script, and reporting a
> user error if not, instead of writing complex caveats.
> 
> If someone has a use-case for that, then maybe it can be changed, but I
> cannot see any in a benchmarking context, and I can see how easy it is
> to have a buggy script with this allowed.
> 
> I do not think that the RETRIES_ENABLED macro is a good thing. I'd
> suggest writing the condition four times.

Ok!

> ISTM that "skipped" transactions are NOT "successful" so there are a 
> problem
> with comments. I believe that your formula are probably right, it has 
> more to do
> with what is "success". For cnt decomposition, ISTM that "other 
> transactions"
> are really "directly successful transactions".

I agree with you, but I also think that skipped transactions should not
be considered errors. So we can write something like this:

All transactions are divided into several types depending on their
execution. Firstly, they can be divided into transactions that we started
to execute and transactions which were skipped (it was too late to
execute them). Secondly, the transactions we started to execute fall into
2 main types, depending on whether any command got a failure during the
last execution of the transaction script. Thus

the number of all transactions =
   skipped (it was too late to execute them) +
   cnt (the number of successful transactions) +
   ecnt (the number of failed transactions).

A successful transaction can have several unsuccessful tries before a
successful run. Thus

cnt (the number of successful transactions) =
   retried (they got a serialization or a deadlock failure(s), but were
            successfully retried from the very beginning) +
   directly successful transactions (they were successfully completed on
                                     the first try).
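
For example, in the report above (where no transactions were skipped):

276 (the number of all transactions) =
   0 (skipped) +
   266 (cnt) +
   10 (ecnt)

and 266 (cnt) = 75 (retried) + 191 (directly successful transactions).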

> I'd suggest putting "ANOTHER_SQL_FAILURE" as the last option, otherwise
> "another" does not make sense yet.

Maybe first put the general group, and then the special cases?...

> I'd suggest to name it "OTHER_SQL_FAILURE".

Ok!

> In TState, field "uint32 retries": maybe it would be simpler to count
> "tries", which can be compared directly to the max tries set in the
> option?

If you mean the retries in CState - on the one hand, yes; on the other
hand, the statistics always use the number of retries...
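For any given transaction the two differ by exactly one, tries = retries + 1
(the first try plus all the retries), so either counter can be derived
from the other.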

> ErrorLevel: I have already commented about this in my review of v10.2.
> I'm not sure about the LOG -> DEBUG_FAIL changes. I do not understand
> the name "DEBUG_FAIL", as it is not related to debug; they just seem to
> be internal errors. META_ERROR maybe?

As I wrote to you in [1]:

>> I'm at odds with the proposed levels. ISTM that pgbench internal
>> errors which warrant an immediate exit should be dubbed "FATAL",
> 
> Ok!
> 
>> which would leave the "ERROR" name for... errors, eg SQL errors.
>> ...
> 
> The messages of the errors in SQL and meta commands are printed only if
> the option --debug-fails is used, so I'm not sure that they should have
> a higher error level than the main program messages (ERROR vs LOG).

Perhaps we can rename the levels DEBUG_FAIL and LOG to LOG and
LOG_PGBENCH respectively. In this case the client error messages do not
use debug error levels, and the term "logging" is already used for
transaction/aggregation logging... Therefore perhaps we can also combine
the options --errors-detailed and --debug-fails into one option
--fails-detailed=none|groups|all_messages. Here --fails-detailed=groups
can be used to group errors in reports or logs by basic types, and
--fails-detailed=all_messages can add to this all the error messages in
the SQL/meta commands, and the messages for processing the failed
transaction (its end/retry).
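
For example (hypothetical invocations, if this combined option is
adopted):

pgbench --max-tries=10 --fails-detailed=groups ...
pgbench --max-tries=10 --fails-detailed=all_messages ...

where the first one only groups the failures by basic types in the
reports or logs, and the second one additionally prints all the error
messages.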

> The automaton skips to FAILURE on every possible error. I'm wondering
> whether it could do so only on SQL errors, because other failures will
> lead to ABORTED anyway? If there is no good reason to skip to FAILURE
> from some errors, I'd suggest keeping the previous behavior. Maybe the
> good reason is to do some counting, but this means that on eg
> metacommand errors the script would now loop over instead of aborting,
> which does not look like a desirable change of behavior.

Even in the case of meta command errors we must prepare for
CSTATE_END_TX and the execution of the next script: if necessary, clear
the conditional stack and roll back the current transaction block.

> ISTM that it would be more logical to only get into RETRY if there is a
> retry, i.e. move the RETRY/ABORT test into FAILURE. For that, instead of
> "canRetry", maybe you want "doRetry", which tells that a retry is
> possible (the error is a serialization or deadlock failure) and that the
> current parameters allow it (timeout, max retries).
> 
> * Minor C style comments:
> 
> if / else if / else if ... on *_FAILURE: I'd suggest a switch.
> 
> The following line removal does not seem useful, I'd have kept it:
> 
>   stats->cnt++;
>  -
>   if (skipped)
> 
> copyVariables: I'm not convinced that the source_vars & nvars variables
> are that useful.

>   if (!copyVariables(&st->retry_state.variables, &st->variables)) {
>     pgbench_error(LOG, "client %d aborted when preparing to execute a transaction\n", st->id);
> 
> The message could be more precise, eg "client %d failed while copying
> variables", unless copyVariables already printed a message. As this is
> really an internal error from pgbench, I'd rather do a FATAL (direct
> exit) there. ISTM that the only possible failure is OOM here, and
> pgbench is in a very bad shape if it gets into that.

Ok!

> commandFailed: I'm not thrilled by the added boolean, which is partially
> redundant with the second argument.

Do you mean that it is partially redundant with the argument "cmd", and
that, for example, meta command errors never cause the client to abort?

>          if (per_script_stats)
>  -               accumStats(&sql_script[st->use_file].stats, skipped, latency, lag);
>  +       {
>  +               accumStats(&sql_script[st->use_file].stats, skipped, latency, lag,
>  +                                  st->failure_status, st->retries);
>  +       }
>   }
> 
> I do not see the point of changing the style here.

If, as in this case, a single command spans several lines, ISTM that the
code is more understandable if curly brackets are used...

[1] https://www.postgresql.org/message-id/fcc2512cdc9e6bc49d3b489181f454da%40postgrespro.ru

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

