Thread: WIP Patch: Pgbench Serialization and deadlock errors

WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
Hello, hackers!

This is the fourth version of the patch for pgbench. As in the previous 
one, most of the changes were made thanks to the comments of Fabien 
Coelho. Here I propose not to abort the client in pgbench in case of 
various errors, including unsuccessful meta commands. As in the previous 
version, transactions with serialization and deadlock failures are 
rolled back and retried until they end successfully or their number of 
tries reaches the maximum.

In detail:
* A client's run is aborted only in case of a serious error, for 
example, if the connection with the backend is lost. Otherwise, if the 
execution of an SQL or meta command fails, the client's run continues 
normally, except that the current transaction block is terminated if 
possible.
* If a failed transaction block cannot be terminated in the current 
script, its commands after the error, as well as the commands of the 
following scripts, are processed as usual, so you can get a lot of 
errors of the type "in failed SQL transaction" (when the current SQL 
transaction is aborted and commands are ignored until the end of the 
transaction block). In such cases you can use the separate statistics 
of these errors in all reports.

The main differences from the previous version:
* support for compound commands (as I understand it, they are not 
properly supported in the original version, since the execution status 
is processed only for the first subcommand). Note that:
** not all compound commands that contain subcommands to start or end 
an explicit transaction block can be retried. For example, you cannot 
use a compound command to complete a failed transaction block before 
retrying. Also, if there are any subcommands before the explicit 
transaction block, the latter cannot be retried from the very 
beginning.
** if you want to know which subcommands of compound SQL commands 
failed, use the debug output.
* retries are no longer divided by type in the reports (you can get 
this information from the debug output).
* (because of all of the above) a new option --debug-fails. If you want 
to distinguish failures or errors by type, or you want to know which 
subcommands of compound SQL commands failed, use the pgbench debugging 
output created with the option --debug-fails or --debug. The first 
option is recommended for this purpose because with the second option 
the debugging output can be very large.
* tests for serialization/deadlock errors and retries use advisory 
locks (as I wrote here [1], other simpler methods can sometimes fail); 
they have also become lighter (28 simple tests instead of 57, 8 runs of 
pgbench instead of 17).

Other smaller differences from the previous version:
* the information about whether we are in a transaction block or not is 
now received from the server (see the sketch after this list);
* naming fix (I will be very grateful if a native English speaker 
comments on this): failures are errors if they cannot be retried;
* as usual, code cleanup; in particular, states are used instead of 
bool variables to complete a failed transaction block.
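
For reference, a minimal sketch of how the transaction block status can 
be obtained from libpq (the wrapper function name here is illustrative, 
not necessarily the one used in the patch):

    #include <stdbool.h>
    #include "libpq-fe.h"

    /*
     * Ask libpq whether the connection is currently inside a
     * transaction block, and whether that block has already failed.
     * PQtransactionStatus() reports the status as of the end of the
     * last command processed on this connection.
     */
    static bool
    inTransactionBlock(PGconn *conn, bool *failed)
    {
        PGTransactionStatusType tx = PQtransactionStatus(conn);

        *failed = (tx == PQTRANS_INERROR);
        return (tx == PQTRANS_INTRANS || tx == PQTRANS_INERROR);
    }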

Patch is attached. Any suggestions are welcome!

[1] 
https://www.postgresql.org/message-id/e4c5e8cefa4a8e88f1273b0f1ee29e56%40postgrespro.ru

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Fabien COELHO
Date:
Hello Marina,

> This is the fourth version of the patch for pgbench.

Consider adding the patch to the next commitfest?

-- 
Fabien.


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 19-12-2017 16:52, Fabien COELHO wrote:
> Hello Marina,
> 
>> This is the fourth version of the patch for pgbench.
> 
> Consider adding the patch to the next commitfest?

Hi! Yes, here it is: https://commitfest.postgresql.org/16/1420/

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Fabien COELHO
Date:
>> Consider adding the patch to the next commitfest?
>
> Hi! Yes, here it is: https://commitfest.postgresql.org/16/1420/

I think you may have to ask the cf app admin to remove the duplicate.

https://commitfest.postgresql.org/16/1419/

-- 
Fabien.


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 19-12-2017 17:11, Fabien COELHO wrote:
>>> Consider adding the patch to the next commitfest?
>> 
>> Hi! Yes, here it is: https://commitfest.postgresql.org/16/1420/
> 
> I think you may have to ask the cf app admin to remove the duplicate.
> 
> https://commitfest.postgresql.org/16/1419/

Thanks, I'm trying to do this..

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 19-12-2017 17:22, Marina Polyakova wrote:
> On 19-12-2017 17:11, Fabien COELHO wrote:
>>>> Consider adding the patch to the next commitfest?
>>> 
>>> Hi! Yes, here it is: https://commitfest.postgresql.org/16/1420/
>> 
>> I think you may have to ask the cf app admin to remove the duplicate.
>> 
>> https://commitfest.postgresql.org/16/1419/
> 
> Thanks, I'm trying to do this..

The second one was removed, so 
https://commitfest.postgresql.org/16/1419/ is the working one.

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Fabien COELHO
Date:
Hello Marina,

Some comments about WIP pgbench error handling v4.

Patch applies, compiles, global & local "make check" are ok. Doc compiles.

I'm generally okay with having such a feature, but I'd like it to be 
*MUCH* simpler, otherwise it is going to make pgbench unmaintainable:-(


Also, ISTM that a new patch should address somehow (in the code or with 
an explanation of why you disagree) all comments from the previous 
review, which is not really the case here, as I have to repeat some of 
the comments I made on v3. You should answer the previous comment mail 
and tag all comments with "ok", "no because this or that", "done", 
"postponed because this or that...".


About the documentation:

Again, as already said in the previous review, please take comments into 
account or justify why you do not do so. I do not think that this 
feature should be given any pre-eminence: most of the time performance 
testing is about all-is-well transactions which do not display any 
error. I do not claim that it is not a useful feature; on the contrary, 
I do think that testing under error conditions is a good capability, 
but I insist that it is a side feature and should not be put forward. 
As a consequence, the "maximum number of transaction tries" should 
certainly not be the default first output of a summary run.

I'm unclear about the example added in the documentation. There are 
71% errors, but 100% of transactions are reported as processed. If 
there were errors, then it is not a success, so the transactions were 
not processed? To me it looks inconsistent. Also, while testing, it 
seems that failed transactions are counted in tps, which I think is not 
appropriate:


About the feature:

  sh> PGOPTIONS='-c default_transaction_isolation=serializable' \
        ./pgbench -P 1 -T 3 -r -M prepared -j 2 -c 4
  starting vacuum...end.
  progress: 1.0 s, 10845.8 tps, lat 0.091 ms stddev 0.491, 10474 failed
  # NOT 10845.8 TPS...
  progress: 2.0 s, 10534.6 tps, lat 0.094 ms stddev 0.658, 10203 failed
  progress: 3.0 s, 10643.4 tps, lat 0.096 ms stddev 0.568, 10290 failed
  ...
  number of transactions actually processed: 32028 # NO!
  number of errors: 30969 (96.694 %)
  latency average = 2.833 ms
  latency stddev = 1.508 ms
  tps = 10666.720870 (including connections establishing) # NO
  tps = 10683.034369 (excluding connections establishing) # NO
  ...

For me this is all wrong. I think that the tps report is about transactions
that succeeded, not mere attempts. I cannot say that a transaction which aborted
was "actually processed"... as it was not.

The order of reported elements is not logical:

  maximum number of transaction tries: 100
  scaling factor: 10
  query mode: prepared
  number of clients: 4
  number of threads: 2
  duration: 3 s
  number of transactions actually processed: 967
  number of errors: 152 (15.719 %)
  latency average = 9.630 ms
  latency stddev = 13.366 ms
  number of transactions retried: 623 (64.426 %)
  number of retries: 32272

I would suggest to group everything about error handling in one block,
eg something like:

  scaling factor: 10
  query mode: prepared
  number of clients: 4
  number of threads: 2
  duration: 3 s
  number of transactions actually processed: 967
  number of errors: 152 (15.719 %)
  number of transactions retried: 623 (64.426 %)
  number of retries: 32272
  maximum number of transaction tries: 100
  latency average = 9.630 ms
  latency stddev = 13.366 ms

Also, the percent character should be stuck to its number: 15.719%, to 
make the style more homogeneous (although there seem to be pre-existing 
inhomogeneities).

I would replace "transaction tries/retried" by "tries/retried"; 
everything is about transactions in the report anyway.

Without reading the documentation, the overall report semantics is 
unclear, especially given the absurd tps results I got with my first 
attempt, as failing transactions are counted as "processed".

For the detailed report, ISTM that I already said in the previous review that
the 22 columns (?) indentation is much too large:

  0.078                   149                 30886  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
 

When giving the number of retries, I'd like to also have the average 
number of retries per attempted transaction, whether they succeeded or 
failed in the end.


About the code:

I'm at a loss with the 7 states added to the automaton, where I would 
have hoped that only 2 (eg RETRY & FAIL, or even fewer) would be enough.

I would hope for something like "if (some error) state = ERROR",
then in "ERROR" do the stats, check whether it should be retried, and if
so state = START... and we are done.

I'm wondering whether the whole feature could be simplified by 
considering that one script is one "transaction" (it is from the report 
point of view at least), and that any retry is for the full script 
only, from its beginning. That would remove the need to guess at 
transaction begin or end, avoid scanning manually for subcommands, and 
so on.
  - Would it make sense?
  - Would it be ok for your use case?
The proposed version of the code looks unmaintainable to me. There are 
3 levels of nested "switch/case" with state changes at the deepest 
level. I cannot even see it all on my screen, which is not wide enough.

There should be a typedef for "random_state", eg something like:

   typedef struct { unsigned short data[3]; } RandomState;
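
For instance, a rough sketch of how such a RandomState could be threaded 
through the existing code (the getrand() change below is only an 
assumption of what the patch would do; int64 and pg_erand48() come from 
the usual pgbench/port headers):

    /* getrand() would then take the random state explicitly. */
    static int64
    getrand(RandomState *random_state, int64 min, int64 max)
    {
        /* pg_erand48() returns a uniform double in [0, 1). */
        return min + (int64) ((max - min + 1) *
                              pg_erand48(random_state->data));
    }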

Please keep "const" declarations, eg "commandFailed".

I think that choosing a script should depend on the thread random state, 
not the client random state, so that a run would generate the same 
pattern per thread, independently of which client finishes first.

I'm sceptical of the "--debug-fails" option. ISTM that --debug is 
already there and should just be reused. Also, you have changed some 
existing unrelated error messages with this option, especially in 
evalFunc, which is clearly not appropriate:

  -    fprintf(stderr, "random range is too large\n");
  +    if (debug_fails)
  +        fprintf(stderr, "random range is too large\n");

Please leave unrelated code as is.

I agree that the function naming style is already a mess, but I think 
that the new functions you add should use a common style, eg 
"is_compound" vs "canRetry".

Translating error strings to their enum should be put in a function.
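
For example, a minimal sketch, assuming the failures are recognised by 
their SQLSTATE rather than by message text (the enum and function names 
are illustrative only; 40001 and 40P01 are the standard 
serialization_failure and deadlock_detected codes):

    #include <string.h>
    #include "libpq-fe.h"

    typedef enum FailureStatus
    {
        NO_FAILURE,
        SERIALIZATION_FAILURE,   /* SQLSTATE 40001 */
        DEADLOCK_FAILURE,        /* SQLSTATE 40P01 */
        OTHER_FAILURE
    } FailureStatus;

    /* Map the SQLSTATE of a failed result to a failure kind. */
    static FailureStatus
    getFailureStatus(const PGresult *res)
    {
        const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

        if (sqlstate == NULL)
            return OTHER_FAILURE;
        if (strcmp(sqlstate, "40001") == 0)
            return SERIALIZATION_FAILURE;
        if (strcmp(sqlstate, "40P01") == 0)
            return DEADLOCK_FAILURE;
        return OTHER_FAILURE;
    }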

I do not believe in the normalize_whitespace function: ISTM that it 
would turn "SELECT LENGTH('    ');" into "SELECT LENGTH(' ');", which 
is not desirable. I do not think that it should be needed.

I do not understand the strange "get subcommands of a compound command" 
re-parsing phase. There should be comments explaining what you want to 
achieve. I'm not sure what happens if several begin/end appear on the 
same line. I'm not sure this whole thing should be done anyway.


About the tests:

The 828-line perl script seems over-complicated, with pgbench & psql 
interacting together... Is all that really necessary? Isn't some (much) 
simpler test possible and sufficient?

The "node" is started but never stopped.

For file contents, maybe the << 'EOF' here-document syntax would help instead
of using concatenated backslashed strings everywhere.

-- 
Fabien.


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 03-01-2018 19:28, Fabien COELHO wrote:
> Hello Marina,

Hello!

> Some comment about WIP pgbench error handling v4.

Many thanks!

> Patch applies, compiles, global & local "make check" are ok. doc 
> compiles.
> 
> I'm generally okay with having such a feature, but I'd like it to be
> *MUCH* simpler, otherwise it is going to make pgbench
> unmaintainable:-(

I understand your concern about this, but what exactly do you want to 
simplify (apart from the compound commands, the option --debug-fails 
and the processing of the end of a failed transaction block, which you 
discuss later)?

> Also, ISTM that a new patch should address somehow (in the code or
> with explanation about why you disagree) all comments from the
> previous review, which is not really the case here, as I have to
> repeat some of the comments I did on v3. You should answer to the
> previous comment mail and tag all comments with "ok", "no because this
> or that", "done", "postponed because this or that...".

1) You can read my answer to your previous review [1]:
- if there's a short answer like "Ok!", "Thanks!", "Sorry, thanks", 
"Thanks, I agree" or "Thanks, I'll fix it", it means "done";
- if there's an open question, it means that it is postponed because I 
don't know exactly how to do it.

2) The second note of the --max-tries documentation. I wrote you a 
suggestion in [1] and included it in the patch because you did not 
answer me about it:

>> I do not understand the second note of the --max-tries documentation.
>> It seems to suggest that some script may not end their own
>> transaction...
>> which should be an error in my opinion? Some explanations would be
>> welcome.
> 
> As you told me here [1], "I disagree about exit in ParseScript if the
> transaction block is not completed <...> and would break an existing
> feature.". Maybe it's be better to say this:
> 
> In pgbench you can use scripts in which the transaction blocks do not
> end. Be careful in this case because transactions that span over more
> than one script are not rolled back and will not be retried in case of
> an error. In such cases, the script in which the error occurred is
> reported as failed.
> 
> ?

3) About the tests. I wrote to you why they are so heavy in [1]:

>> The tap tests seems over-complicated and heavy with two pgbench run in
>> parallel... I'm not sure we really want all that complexity for this
>> somehow small feature. Moreover pgbench can run several scripts, I'm
>> not
>> sure why two pgbench would need to be invoked. Could something much
>> simpler and lighter be proposed instead to test the feature?
> 
> Firstly, two pgbench need to be invoked because we don't know which of
> them will get a deadlock failure. Secondly, I tried much simplier tests
> but all of them failed sometimes although everything was ok:
> - tests in which pgbench runs 5 clients and 10 transactions per client
> for a serialization/deadlock failure on any client (sometimes there are
> no failures when it is expected that they will be)
> - tests in which pgbench runs 30 clients and 400 transactions per 
> client
> for a serialization/deadlock failure on any client (sometimes there are
> no failures when it is expected that they will be)
> - tests in which the psql session starts concurrently and you use sleep
> commands to wait pgbench for 10 seconds (sometimes it does not work)
> Only advisory locks help me not to get such errors in the tests :(

Also, as I wrote in [2], they have become lighter (28 simple tests 
instead of 57, 8 runs of pgbench instead of 17).

> About the documentation:
> 
> Again, as already said in the previous review, please take into account 
> comments
> or justify why you do not do so, I do not think that this feature 
> should be
> given any pre-emminence: most of the time performance testing is about 
> all-is-
> well transactions which do not display any error. I do not claim that 
> it is not
> a useful feature, on the contrary I do think that testing under error 
> conditions
> is a good capability, but I just insist that it is a on the side 
> feature
> should not be put forward. As a consequence, the "maximum number of 
> transaction
> tries" should certainly not be the default first output of a summary 
> run.

I agree with you that there should be no pre-eminence for my feature. 
Here are my reasons for the order in the reports (except for the 
"maximum number of transaction tries", because you discuss it later):
1) Per-statement report: errors and retries are printed at the end.
2) Other reports (main report, per-script report, progress report, 
transaction logging, aggregation logging):
- errors are printed before optional results because you don't need any 
options for getting errors;
- retries are printed at the end because they need the option 
--max-tries.

> I'm unclear about the added example added in the documentation. There
> are 71% errors, but 100% of transactions are reported as processed. If
> there were errors, then it is not a success, so the transaction were 
> not
> processed? To me it looks inconsistent. Also, while testing, it seems 
> that
> failed transactions are counted in tps, which I think is not 
> appropriate:
> 
> 
> About the feature:
> 
>  sh> PGOPTIONS='-c default_transaction_isolation=serializable' \
>        ./pgbench -P 1 -T 3 -r -M prepared -j 2 -c 4
>  starting vacuum...end.
>  progress: 1.0 s, 10845.8 tps, lat 0.091 ms stddev 0.491, 10474 failed
>  # NOT 10845.8 TPS...
>  progress: 2.0 s, 10534.6 tps, lat 0.094 ms stddev 0.658, 10203 failed
>  progress: 3.0 s, 10643.4 tps, lat 0.096 ms stddev 0.568, 10290 failed
>  ...
>  number of transactions actually processed: 32028 # NO!
>  number of errors: 30969 (96.694 %)
>  latency average = 2.833 ms
>  latency stddev = 1.508 ms
>  tps = 10666.720870 (including connections establishing) # NO
>  tps = 10683.034369 (excluding connections establishing) # NO
>  ...
> 
> For me this is all wrong. I think that the tps report is about 
> transactions
> that succeeded, not mere attempts. I cannot say that a transaction 
> which aborted
> was "actually processed"... as it was not.

Thank you, I agree and I will fix it.

> The order of reported elements is not logical:
> 
>  maximum number of transaction tries: 100
>  scaling factor: 10
>  query mode: prepared
>  number of clients: 4
>  number of threads: 2
>  duration: 3 s
>  number of transactions actually processed: 967
>  number of errors: 152 (15.719 %)
>  latency average = 9.630 ms
>  latency stddev = 13.366 ms
>  number of transactions retried: 623 (64.426 %)
>  number of retries: 32272
> 
> I would suggest to group everything about error handling in one block,
> eg something like:
> 
>  scaling factor: 10
>  query mode: prepared
>  number of clients: 4
>  number of threads: 2
>  duration: 3 s
>  number of transactions actually processed: 967
>  number of errors: 152 (15.719 %)
>  number of transactions retried: 623 (64.426 %)
>  number of retries: 32272
>  maximum number of transaction tries: 100
>  latency average = 9.630 ms
>  latency stddev = 13.366 ms

Do you think this is ok when only the errors and the maximum number of 
transaction tries are printed (because retries are printed only if they 
are non-zero)? We can get something like this:

scaling factor: 10
query mode: prepared
number of clients: 4
number of threads: 2
duration: 3 s
number of transactions actually processed: 967
number of errors: 152 (15.719 %)
maximum number of transaction tries: 1 # default value
latency average = 9.630 ms
latency stddev = 13.366 ms

or this:

scaling factor: 10
query mode: prepared
number of clients: 4
number of threads: 2
duration: 3 s
number of transactions actually processed: 967
number of errors: 152 (15.719 %)
maximum number of transaction tries: 3 # not the default value
latency average = 9.630 ms
latency stddev = 13.366 ms

> Also, percent character should be stuck to its number: 15.719% to have
> the style more homogeneous (although there seems to be pre-existing
> inhomogeneities).
> 
> I would replace "transaction tries/retried" by "tries/retried", 
> everything
> is about transactions in the report anyway.
> 
> Without reading the documentation, the overall report semantics is 
> unclear,
> especially given the absurd tps results I got with the my first 
> attempt,
> as failing transactions are counted as "processed".

I agree and I will fix it.

> For the detailed report, ISTM that I already said in the previous 
> review that
> the 22 columns (?) indentation is much too large:
> 
>  0.078                   149                 30886  UPDATE
> pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;

You did not answer me in [1] about this:

>> Moreover the 25 characters
>> alignment is ugly, better use a much smaller alignment.
> 
> The variables for the numbers of failures and retries are of type int64
> since the variable for the total number of transactions has the same
> type. That's why such a large alignment (as I understand it now, enough
> 20 characters). Do you prefer floating alignemnts, depending on the
> maximum number of failures/retries for any command in any script?

> When giving the number of retries, I'd like to also have the average 
> number of
> retried per attempted transactions, whether they succeeded or failed in 
> the end.

Is this not the number of retries divided by the number of attempted 
transactions (this is now printed in the main report)? Or do you mean 
the average number of retried transactions per script run?

> About the code:
> 
> I'm at lost with the 7 states added to the automaton, where I would 
> have hoped
> that only 2 (eg RETRY & FAIL, or even less) would be enough.

Well, I will try to simplify it. As I see it now, there should be 
separate code for ending a failed transaction block:
- if we try to end a failed transaction block and a failure occurs, 
this failure cannot be retried in any case;
- if we cannot retry a failure and there was a failed transaction 
block, we must go to the next command after it, and not to the next 
command after the failure.

> I would hope for something like "if (some error) state = ERROR",
> then in "ERROR" do the stats, check whether it should be retried, and 
> if
> so state = START... and we are done.

We discussed this question in [3] and [4]:

>>> I wondering whether the RETRY & FAILURE states could/should be 
>>> merged:
>>> 
>>>   on RETRY:
>>>     -> count retry
>>>     -> actually retry if < max_tries (reset client state, jump to
>>> command)
>>>     -> else count failure and skip to end of script
>>> 
>>> The start and end of transaction detection seem expensive (malloc,
>>> ...) and assume a one statement per command (what about "BEGIN \; ...
>>> \; COMMIT;", which is not necessarily the case, this limitation 
>>> should
>>> be documented. ISTM that the space normalization should be avoided,
>>> and something simpler/lighter should be devised? Possibly it should
>>> consider handling SAVEPOINT.
>> 
>> I divided these states because if there's a failed transaction block 
>> you
>> should end it before retrying. It means to go to states
>> CSTATE_START_COMMAND -> CSTATE_WAIT_RESULT -> CSTATE_END_COMMAND with
>> the appropriate command. How do you propose not to go to these states?
> 
> Hmmm. Maybe I'm wrong. I'll think about it.

> I'm wondering whether the whole feature could be simplified by
> considering that one script is one "transaction" (it is from the
> report point of view at least), and that any retry is for the full
> script only, from its beginning. That would remove the trying to guess
> at transactions begin or end, avoid scanning manually for subcommands,
> and so on.
>  - Would it make sense?
>  - Would it be ok for your use case?

I'm not sure if this makes sense: if we retry the script from the very 
beginning including successful transactions, there may be errors..

> The proposed version of the code looks unmaintainable to me. There are
> 3 levels of nested "switch/case" with state changes at the deepest 
> level.
> I cannot even see it on my screen which is not wide enough.

I understand your concern, although the 80-column limit is only broken 
by the long strings for printf functions. I will try to simplify this 
by moving the 2nd and 3rd levels of nested "switch/case" into a 
separate function.

> There should be a typedef for "random_state", eg something like:
> 
>   typedef struct { unsigned short data[3]; } RandomState;
> 
> Please keep "const" declarations, eg "commandFailed".
> 
> I think that choosing script should depend on the thread random state, 
> not
> the client random state, so that a run would generate the same pattern 
> per
> thread, independently of which client finishes first.
> 
> I'm sceptical of the "--debug-fails" options. ISTM that --debug is 
> already there
> and should just be reused.

Thanks, I agree with you and I'll fix this.
(Therefore I assume that both the thread state and the client state 
will have a random state.)
What do you think about the idea of creating several debugging levels?

> Also, you have changed some existing error unrelated
> error messages with this option, especially in evalFunc, which is 
> clearly not
> appropriate:
> 
>  -    fprintf(stderr, "random range is too large\n");
>  +    if (debug_fails)
>  +        fprintf(stderr, "random range is too large\n");
> 
> Please let unrelated code as is.

I suppose this is related code. If we do not change this, a program run 
that does not use the debugging options can receive many error messages 
without any client being aborted, and it will be very difficult to find 
the progress reports among all of this.

> I agree that function naming style is a already a mess, but I think 
> that
> new functions you add should use a common style, eg "is_compound" vs
> "canRetry".
> 
> Translating error strings to their enum should be put in a function.

Ok, I will fix it.

> I do not believe in the normalize_whitespace function: ISTM that it
> would turn  "SELECT LENGTH('    ');" to "SELECT LENGTH(' ');", which
> is not desirable. I do not think that it should be needed.

This function does not change the commands that are passed to the 
server; it just simplifies the function is_tx_block_end and makes the 
latter more understandable.

> I do not understand the "get subcommands of a compound command" strange
> re-parsing phase. There should be comments explaining what you want to
> achieve.

Well, I will write these comments. My reasons are as follows:
1) I need to know the number of non-empty subcommands for processing 
the command in CSTATE_WAIT_RESULT;
2) I need this parser to correctly distinguish subcommands in such 
cases as "SELECT ';';";
3) for any non-empty subcommand I need to know its initial sequence 
number, which is used to report errors/failures;
4) if the compound command contains only one non-empty subcommand (for 
example, the compound command ';;END;;'), it can be retried as a usual 
command;
5) retrying failures in a compound command depends on which of its 
subcommands start or complete the transaction block.

> I'm not sure about what happens if several begin/end appears
> on the line.

Maybe this section of the documentation [5] will help you?

> I'm not sure this whole thing should be done anyway.

Well, I will simplify it so that compound commands will not be retried 
in any case; but even now the parser is needed for reasons 1, 2 and 3 
in the list above ("My reasons are as follows: ...").

> About the tests:
> 
> The 828 lines perl script seems over complicated, with pgbench & psql
> interacting together... Is all that really necessary? Isn't some (much) 
> simpler
> test possible and would be sufficient?

I wrote to you about this at the beginning of this letter ("3) About the 
tests. ...").

> The "node" is started but never stopped.

Thank you, I will fix it.

> For file contents, maybe the << 'EOF' here-document syntax would help 
> instead
> of using concatenated backslashed strings everywhere.

Thanks, but I'm not sure that it is better to open file handles for 
printing in variables..

[1] 
https://www.postgresql.org/message-id/e4c5e8cefa4a8e88f1273b0f1ee29e56%40postgrespro.ru
[2] 
https://www.postgresql.org/message-id/894ac29c18f12a4fd6f8c97c9123a152%40postgrespro.ru
[3] 
https://www.postgresql.org/message-id/6ce8613f061001105654673506348c13%40postgrespro.ru
[4] 
https://www.postgresql.org/message-id/alpine.DEB.2.20.1707131818200.20175%40lancre
[5] 
https://www.postgresql.org/docs/devel/static/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Fabien COELHO
Date:
Hello Marina,

A partial answer, to focus on how the feature may be simplified.

I apologise as it seems that I overlooked one of your mails. Changing 
the thread has not helped.

>> I'm wondering whether the whole feature could be simplified by
>> considering that one script is one "transaction" (it is from the
>> report point of view at least), and that any retry is for the full
>> script only, from its beginning. That would remove the trying to guess
>> at transactions begin or end, avoid scanning manually for subcommands,
>> and so on.
>>  - Would it make sense?
>>  - Would it be ok for your use case?
>
> I'm not sure if this makes sense: if we retry the script from the very 
> beginning including successful transactions, there may be errors..

What I'm suggesting is to simplify the problem by adding some 
restrictions on what kind of case is handled, so as to simplify the 
code a lot.

I'd start by stating (i.e. documenting) that the feature assumes that 
one script is just *one* transaction.

Note that pgbench somehow already assumes that one script is one 
transaction when it reports performance anyway.

If you want 2 transactions, then you have to put them in two scripts, 
which is fine with me. Different transactions are expected to be 
independent, otherwise they should be merged into one transaction.

Under these restrictions, ISTM that a retry is something like:

   case ABORTED:
      if (we want to retry) {
         // do necessary stats
         // reset the initial state (random, vars, current command)
         state = START_TX; // loop
      }
      else {
        // count as failed...
        state = FINISHED; // or done.
      }
      break;

Once this works, maybe it could go a step further by restarting at 
savepoints. I'd put restrictions there to make detecting a savepoint 
easier, so that it cannot occur in a compound command for instance. This 
would probably require a new state. Fine.


A detail:

>> For file contents, maybe the << 'EOF' here-document syntax would help 
>> instead of using concatenated backslashed strings everywhere.
>
> Thanks, but I'm not sure that it is better to open file handlers for 
> printing in variables..

Perl here-document stuff is just a multiline string; no file is 
involved, it is different from the shell.

-- 
Fabien.


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 12-01-2018 15:47, Fabien COELHO wrote:
> Hello Marina,
> 
> A partial answer, to focus on how the feature may be simplified.

Thank you!

> I apologise as it seems that I overlooked one of your mail. Changing
> the thread has not helped.
> 
>>> I'm wondering whether the whole feature could be simplified by
>>> considering that one script is one "transaction" (it is from the
>>> report point of view at least), and that any retry is for the full
>>> script only, from its beginning. That would remove the trying to 
>>> guess
>>> at transactions begin or end, avoid scanning manually for 
>>> subcommands,
>>> and so on.
>>>  - Would it make sense?
>>>  - Would it be ok for your use case?
>> 
>> I'm not sure if this makes sense: if we retry the script from the very 
>> beginning including successful transactions, there may be errors..
> 
> What I'm suggesting is to simplify the problem by adding some
> restrictions to what kind of case which is handled, so as to simplify
> the code a lot.
> 
> I'd start by stating (i.e. documenting) that the features assumes that
> one script is just *one* transaction.
> 
> Note that pgbench somehow already assumes that one script is one
> transaction when it reports performance anyway.
> 
> If you want 2 transactions, then you have to put them in two scripts,
> which looks fine with me. Different transactions are expected to be
> independent, otherwise they should be merged into one transaction.

Therefore, if the script consists of several single statements 
(= several transactions), you cannot retry them. For example, suppose 
the script looks like this:

UPDATE xy1 SET x = 1 WHERE y = 1;
UPDATE xy2 SET x = 2 WHERE y = 2;
UPDATE xy3 SET x = 3 WHERE y = 3;

If this restriction is ok for you, I'll simplify the code :)

> Under these restrictions, ISTM that a retry is something like:
> 
>   case ABORTED:
>      if (we want to retry) {
>         // do necessary stats
>         // reset the initial state (random, vars, current command)
>         state = START_TX; // loop
>      }
>      else {
>        // count as failed...
>        state = FINISHED; // or done.
>      }
>      break;

If we successfully complete a failed transaction block and process its 
end command in CSTATE_END_COMMAND, we may want to make a retry. So do 
you think that in this case it is ok to go to CSTATE_ABORTED at the end 
of CSTATE_END_COMMAND?..

> Once this works, maybe it could go a step further by restarting at
> savepoints. I'd put restrictions there to ease detecting a savepoint
> so that it cannot occur in a compound command for instance. This would
> probably require a new state. Fine.

We discussed the savepoints earlier in [1]:

>> - if there's a failure what savepoint we should rollback to and start 
>> the
>> execution again?
> ...
>> Maybe to go to the last one, if it is not successful go to the 
>> previous
>> one etc. Retrying the entire transaction may take less time..
> 
> Well, I do not know that. My 0.02 € is that if there was a savepoint 
> then
> this is natural the restarting point of a transaction which has some
> recoverable error.

> 
> A detail:
> 
>>> For file contents, maybe the << 'EOF' here-document syntax would help 
>>> instead of using concatenated backslashed strings everywhere.
>> 
>> Thanks, but I'm not sure that it is better to open file handlers for 
>> printing in variables..
> 
> Perl here-document stuff is just a multiline string, no file is
> involved, it is different from the shell.

Oh, now googling was successful, thanks)

[1] 
https://www.postgresql.org/message-id/alpine.DEB.2.20.1707141637300.7871%40lancre

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Fabien COELHO
Date:
Hello Marina,

>> If you want 2 transactions, then you have to put them in two scripts,
>> which looks fine with me. Different transactions are expected to be
>> independent, otherwise they should be merged into one transaction.
>
> Therefore if the script consists of several single statements (= several 
> transactions), you cannot retry them. For example, the script looked like 
> this:
>
> UPDATE xy1 SET x = 1 WHERE y = 1;
> UPDATE xy2 SET x = 2 WHERE y = 2;
> UPDATE xy3 SET x = 3 WHERE y = 3;
>
> If this restriction is ok for you, I'll simplify the code :)

Yes, that is what I'm suggesting. If you want to restart them, you can put 
them in 3 scripts.

>> Under these restrictions, ISTM that a retry is something like:
>>
>>   case ABORTED:
>>      if (we want to retry) {
>>         // do necessary stats
>>         // reset the initial state (random, vars, current command)
>>         state = START_TX; // loop
>>      }
>>      else {
>>        // count as failed...
>>        state = FINISHED; // or done.
>>      }
>>      break;
>
> If we successfully complete a failed transaction block and process its end 
> command in CSTATE_END_COMMAND, we may want to make a retry. So do you think 
> that in this case it is ok to go to CSTATE_ABORTED at the end of 
> CSTATE_END_COMMAND?..

Dunno.

I'm fine with having END_COMMAND skip to START_TX if it can be done 
easily and cleanly, esp without code duplication.

ISTM that ABORTED & FINISHED are currently exactly the same. That would 
give a particular use to aborted. Also, there are many points where the 
code may go to the "aborted" state, so reusing it could help avoid 
duplicating stuff on each abort decision.

>> Once this works, maybe it could go a step further by restarting at
>> savepoints. I'd put restrictions there to ease detecting a savepoint
>> so that it cannot occur in a compound command for instance. This would
>> probably require a new state. Fine.
>
> We discussed the savepoints earlier in [1]:

Yep. I'm trying to suggest an incremental path, with simple yet quite 
useful things first.

-- 
Fabien.


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 12-01-2018 18:13, Fabien COELHO wrote:
> Hello Marina,

Hello, Fabien!

>>> If you want 2 transactions, then you have to put them in two scripts,
>>> which looks fine with me. Different transactions are expected to be
>>> independent, otherwise they should be merged into one transaction.
>> 
>> Therefore if the script consists of several single statements (= 
>> several transactions), you cannot retry them. For example, the script 
>> looked like this:
>> 
>> UPDATE xy1 SET x = 1 WHERE y = 1;
>> UPDATE xy2 SET x = 2 WHERE y = 2;
>> UPDATE xy3 SET x = 3 WHERE y = 3;
>> 
>> If this restriction is ok for you, I'll simplify the code :)
> 
> Yes, that is what I'm suggesting. If you want to restart them, you can
> put them in 3 scripts.

Okay, in the next patch I'll simplify the code.

>>> Under these restrictions, ISTM that a retry is something like:
>>> ...
>> 
>> If we successfully complete a failed transaction block and process its 
>> end command in CSTATE_END_COMMAND, we may want to make a retry. So do 
>> you think that in this case it is ok to go to CSTATE_ABORTED at the 
>> end of CSTATE_END_COMMAND?..
> 
> Dunno.
> 
> I'm fine with having END_COMMAND skipping to START_TX if it can be
> done easily and cleanly, esp without code duplication.

If I understand you correctly, I'm not sure that we should skip 
collecting statistics for the command that completes a failed 
transaction block.. + maybe such statistics will be the only ones we 
have for this command.

> ISTM that ABORTED & FINISHED are currently exactly the same. That would
> put a particular use to aborted. Also, there are many points where the
> code may go to "aborted" state, so reusing it could help avoid
> duplicating stuff on each abort decision.

Thanks, I agree with you.

>>> Once this works, maybe it could go a step further by restarting at
>>> savepoints. I'd put restrictions there to ease detecting a savepoint
>>> so that it cannot occur in a compound command for instance. This 
>>> would
>>> probably require a new state. Fine.
>> 
>> We discussed the savepoints earlier in [1]:
> 
> Yep. I'm trying to suggest an incremental path with simple but yet
> quite useful things first.

This question ("if there's a failure what savepoint we should rollback 
to and start the execution again? ...") mostly concerns the basic idea 
of how to maintain the savepoints in this feature, rather than the exact 
architecture of the code, so we can discuss this now :)

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: Re: WIP Patch: Pgbench Serialization and deadlock errors

From
David Steele
Date:
Hello Marina,

On 1/12/18 12:01 PM, Marina Polyakova wrote:
>>
>> Yep. I'm trying to suggest an incremental path with simple but yet
>> quite useful things first.
> 
> This question ("if there's a failure what savepoint we should rollback
> to and start the execution again? ...") mostly concerns the basic idea
> of how to maintain the savepoints in this feature, rather than the exact
> architecture of the code, so we can discuss this now :)

This patch was marked Waiting on Author on Jan 8 and no new patch was
submitted before this commitfest.

I think we should mark this patch as Returned with Feedback.

Thanks,
-- 
-David
david@pgmasters.net


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 05-03-2018 18:21, David Steele wrote:
> Hello Marina,

Hello, David!

> On 1/12/18 12:01 PM, Marina Polyakova wrote:
...
> 
> This patch was marked Waiting on Author on Jan 8 and no new patch was
> submitted before this commitfest.
> 
> I think we should mark this patch as Returned with Feedback.

I'm sorry, I was very busy with the patch to precalculate stable 
functions.. I'm working on a new version of this patch for pgbench 
unless, of course, it's too late for v11.

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
David Steele
Date:
Hi Marina,

On 3/6/18 4:45 AM, Marina Polyakova wrote:
> On 05-03-2018 18:21, David Steele wrote:
>> Hello Marina,
> 
> Hello, David!
> 
>> On 1/12/18 12:01 PM, Marina Polyakova wrote:
> ...
>>
>> This patch was marked Waiting on Author on Jan 8 and no new patch was
>> submitted before this commitfest.
>>
>> I think we should mark this patch as Returned with Feedback.
> 
> I'm sorry, I was very busy with the patch to precalculate stable
> functions.. I'm working on a new version of this patch for pgbench
> unless, of course, it's too late for v11.

Understood, but in fairness to other authors who got their patches
pushed for not providing a new patch before the CF, we should push this
one as well.

Since this is in Waiting on Author state I have marked it Returned with
Feedback.  Please submit to the next CF when you have a new patch.

Regards,
-- 
-David
david@pgmasters.net


Re: WIP Patch: Pgbench Serialization and deadlock errors

From
Marina Polyakova
Date:
On 06-03-2018 18:02, David Steele wrote:
> Hi Marina,
> 
> On 3/6/18 4:45 AM, Marina Polyakova wrote:
>> On 05-03-2018 18:21, David Steele wrote:
>>> Hello Marina,
>> 
>> Hello, David!
>> 
>>> On 1/12/18 12:01 PM, Marina Polyakova wrote:
>> ...
>>> 
>>> This patch was marked Waiting on Author on Jan 8 and no new patch was
>>> submitted before this commitfest.
>>> 
>>> I think we should mark this patch as Returned with Feedback.
>> 
>> I'm sorry, I was very busy with the patch to precalculate stable
>> functions.. I'm working on a new version of this patch for pgbench
>> unless, of course, it's too late for v11.
> 
> Understood, but in fairness to other authors who got their patches
> pushed for not providing a new patch before the CF, we should push this
> one as well.

Now I understand, thank you.

> Since this is in Waiting on Author state I have marked it Returned with
> Feedback.  Please submit to the next CF when you have a new patch.

Thanks, I'll do it.

-- 
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company