Thread: Re: pgbench and timestamps (bounced)
Fabien COELHO <coelho@cri.ensmp.fr> writes:
> [Resent on hackers for CF registration, sorry for the noise]

For the record, the original thread is at
https://www.postgresql.org/message-id/flat/CAKVUGgQaZVAUi1Ex41H4wrru%3DFU%2BMfwgjG0aM1br6st7sz31Vw%40mail.gmail.com
(I tried but failed to attach that thread to the CF entry, so we'll have
to settle for leaving a breadcrumb in this thread.)

> It requires a mutex around the commands, I tried to do some windows
> implementation which may or may not work.

Ugh, I'd really rather not do that.  Even disregarding the effects of a
mutex, though, my initial idea for fixing this has a big problem: if we
postpone PREPARE of the query until first execution, then it's happening
during timed execution of the benchmark scenario and thus distorting the
timing figures.  (Maybe if we'd always done it like that, it'd be okay,
but I'm quite against changing the behavior now that it's stood for a
long time.)

However, perhaps there's more than one way to fix this.  Once we've
scanned all of the script and seen all the \set commands, we know (in
principle) the set of all variable names that are in use.  So maybe we
could fix this by

(1) During the initial scan of the script, make variable-table entries
for every \set argument, with the values shown as undefined for the
moment.  Do not try to parse SQL commands in this scan, just collect
them.

(2) Make another scan in which we identify variable references in the
SQL commands and issue PREPAREs (if enabled).

(3) Perform the timed run.

This avoids any impact of this bug fix on the semantics or timing of the
benchmark proper.  I'm not sure offhand whether this approach makes any
difference for the concerns you had about identifying/suppressing
variable references inside quotes.

			regards, tom lane
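[For readers following along, a minimal pgbench script illustrating the mechanics under discussion; the variable names here are illustrative, not from the original report. Under -M prepared, pgbench rewrites each :var reference into a numbered parameter before issuing PREPARE, which is why it must already know which :tokens are variables.]

```
-- :aid is declared by \set (and :scale is a pgbench built-in), so
-- pgbench can substitute them; under -M prepared the SELECT is
-- prepared roughly as:
--   SELECT abalance FROM pgbench_accounts WHERE aid = $1;
\set aid random(1, 100000 * :scale)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```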
Hello Tom,

>> It requires a mutex around the commands, I tried to do some windows
>> implementation which may or may not work.
>
> Ugh, I'd really rather not do that.  Even disregarding the effects
> of a mutex, though, my initial idea for fixing this has a big problem:
> if we postpone PREPARE of the query until first execution, then it's
> happening during timed execution of the benchmark scenario and thus
> distorting the timing figures.  (Maybe if we'd always done it like
> that, it'd be okay, but I'm quite against changing the behavior now
> that it's stood for a long time.)

Hmmm.

Prepare is done *once* per client, so ISTM that the impact on any
statistically significant benchmark is nil in practice, or it would mean
that the benchmark settings are too low.

Second, the mutex is only used when absolutely necessary, only for the
substitution part of the query (replacing :stuff by ?), because scripts
are shared between threads. This happens just once, in an unlikely case
occurring at the beginning.

> However, perhaps there's more than one way to fix this.  Once we've
> scanned all of the script and seen all the \set commands, we know
> (in principle) the set of all variable names that are in use.
> So maybe we could fix this by
>
> (1) During the initial scan of the script, make variable-table
> entries for every \set argument, with the values shown as undefined
> for the moment.  Do not try to parse SQL commands in this scan,
> just collect them.

The issue with this approach is

  SELECT 1 AS one \gset pref_

which will generate a "pref_one" variable, and such names cannot be
guessed without SQL parsing and possibly execution. That is why the
preparation is delayed to the point where the variables are actually
known.

> (2) Make another scan in which we identify variable references
> in the SQL commands and issue PREPAREs (if enabled).
>
> (3) Perform the timed run.
>
> This avoids any impact of this bug fix on the semantics or timing
> of the benchmark proper.  I'm not sure offhand whether this
> approach makes any difference for the concerns you had about
> identifying/suppressing variable references inside quotes.

I do not think this plan is workable, because of the \gset issue.

I do not see that the conditional mutex and delayed PREPARE would have
any significant (measurable) impact on an actual (reasonable) benchmark
run.

A workable solution would be for each client to actually execute each
script once before starting the actual benchmark. It would still need a
mutex and also a sync barrier (which I'm proposing in some other
thread). However, this may raise other issues, because then some
operations would be triggered outside of the benchmarking run, which may
or may not be desirable.

So I'm not too keen to go that way, and I think the proposed solution is
reasonable from a benchmarking point of view, as the impact is minimal,
although not zero.

-- 
Fabien.
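[To make the \gset hazard concrete, a slightly expanded sketch, illustrative only: the variable pref_one only comes into existence when the first command's result row is read back, so no textual pre-scan of the script can discover it before execution.]

```
-- the output column "one" combined with the \gset prefix "pref_"
-- creates the variable :pref_one only at execution time
SELECT 1 AS one \gset pref_
SELECT :pref_one + 1 AS two;
```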
On 11.09.2020 16:59, Fabien COELHO wrote:
>
> Hello Tom,
>
>>> It requires a mutex around the commands, I tried to do some windows
>>> implementation which may or may not work.
>>
>> Ugh, I'd really rather not do that.  Even disregarding the effects
>> of a mutex, though, my initial idea for fixing this has a big problem:
>> if we postpone PREPARE of the query until first execution, then it's
>> happening during timed execution of the benchmark scenario and thus
>> distorting the timing figures.  (Maybe if we'd always done it like
>> that, it'd be okay, but I'm quite against changing the behavior now
>> that it's stood for a long time.)
>
> Hmmm.
>
> Prepare is done *once* per client, ISTM that the impact on any
> statistically significant benchmark is nul in practice, or it would
> mean that the benchmark settings are too low.
>
> Second, the mutex is only used when absolutely necessary, only for the
> substitution part of the query (replacing :stuff by ?), because
> scripts are shared between threads. This is just once, in an unlikely
> case occuring at the beginning.
>
>> However, perhaps there's more than one way to fix this.  Once we've
>> scanned all of the script and seen all the \set commands, we know
>> (in principle) the set of all variable names that are in use.
>> So maybe we could fix this by
>>
>> (1) During the initial scan of the script, make variable-table
>> entries for every \set argument, with the values shown as undefined
>> for the moment.  Do not try to parse SQL commands in this scan,
>> just collect them.
>
> The issue with this approach is
>
>   SELECT 1 AS one \gset pref_
>
> which will generate a "pref_one" variable, and these names cannot be
> guessed without SQL parsing and possibly execution. That is why the
> preparation is delayed to when the variables are actually known.
>
>> (2) Make another scan in which we identify variable references
>> in the SQL commands and issue PREPAREs (if enabled).
>>
>> (3) Perform the timed run.
>>
>> This avoids any impact of this bug fix on the semantics or timing
>> of the benchmark proper.  I'm not sure offhand whether this
>> approach makes any difference for the concerns you had about
>> identifying/suppressing variable references inside quotes.
>
> I do not think this plan is workable, because of the \gset issue.
>
> I do not see that the conditional mutex and delayed PREPARE would have
> any significant (measurable) impact on an actual (reasonable)
> benchmark run.
>
> A workable solution would be that each client actually execute each
> script once before starting the actual benchmark. It would still need
> a mutex and also a sync barrier (which I'm proposing in some other
> thread). However this may raise some other issues because then some
> operations would be trigger out of the benchmarking run, which may or
> may not be desirable.
>
> So I'm not to keen to go that way, and I think the proposed solution
> is reasonable from a benchmarking point of view as the impact is
> minimal, although not zero.

CFM reminder.

Hi, this entry is "Waiting on Author" and the thread was inactive for a
while. I see this discussion still has some open questions. Are you
going to continue working on it, or should I mark it as "returned with
feedback" until a better time?

-- 
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
> CFM reminder.
>
> Hi, this entry is "Waiting on Author" and the thread was inactive for a
> while. I see this discussion still has some open questions. Are you
> going to continue working on it, or should I mark it as "returned with
> feedback" until a better time?

IMHO the proposed fix is reasonable and addresses the issue.

I have responded to Tom's remarks about it, and it is waiting for his
answer to my counter-arguments. So ISTM that the patch is currently
still waiting for some feedback.

-- 
Fabien.
Fabien COELHO <coelho@cri.ensmp.fr> writes:
>> Hi, this entry is "Waiting on Author" and the thread was inactive for a
>> while. I see this discussion still has some open questions. Are you
>> going to continue working on it, or should I mark it as "returned with
>> feedback" until a better time?

> IMHO the proposed fix is reasonable and addresses the issue.
> I have responded to Tom's remarks about it, and it is waiting for his
> answer to my counter arguments.  So ISTM that the patch is currently
> still waiting for some feedback.

It looks like my unhappiness with injecting a pthread dependency into
pgbench is going to be overtaken by events in the "option delaying
queries" thread [1].  However, by the same token there are some
conflicts between the two patchsets, and also I prefer the other
thread's approach to portability (i.e. do it honestly, not with a
private portability layer in pgbench.c).  So I'm inclined to put the
parts of this patch that require pthreads on hold till that lands.

What remains that we could do now, and perhaps back-patch, is point (2),
i.e. disallow digits as the first character of a pgbench variable name.
That would be enough to "solve" the original bug report, and it does
seem like it could be back-patched, while we're certainly not going to
risk back-patching anything as portability-fraught as introducing
mutexes.

			regards, tom lane

[1] https://www.postgresql.org/message-id/flat/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de
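[For context, a sketch of the class of query that the digit restriction protects; the table and column names are hypothetical and the actual query from the original report may differ. With digit-led tokens accepted as variable names, the :32 and :56 inside the time literal could be mistaken for variable references; disallowing a leading digit lets such tokens pass through untouched.]

```
-- hypothetical reproduction: before the fix, pgbench could try to
-- treat ":32" and ":56" inside the literal as variable references
SELECT count(*) FROM events
  WHERE created_at >= '2020-09-11 17:32:56';
```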
Hello Tom,

>>> Hi, this entry is "Waiting on Author" and the thread was inactive for a
>>> while. I see this discussion still has some open questions. Are you
>>> going to continue working on it, or should I mark it as "returned with
>>> feedback" until a better time?
>
>> IMHO the proposed fix is reasonable and addresses the issue.
>> I have responded to Tom's remarks about it, and it is waiting for his
>> answer to my counter arguments. So ISTM that the patch is currently still
>> waiting for some feedback.
>
> It looks like my unhappiness with injecting a pthread dependency into
> pgbench is going to be overtaken by events in the "option delaying
> queries" thread [1].  However, by the same token there are some conflicts
> between the two patchsets, and also I prefer the other thread's approach
> to portability (i.e. do it honestly, not with a private portability layer
> in pgbench.c).  So I'm inclined to put the parts of this patch that
> require pthreads on hold till that lands.

Ok. This is fair enough. Portability is a pain, thanks to the differing
Windows, macOS, and Linux approaches to implementing (or not) a
standard.

> What remains that we could do now, and perhaps back-patch, is point (2)
> i.e. disallow digits as the first character of a pgbench variable name.

I'm fine with that.

> That would be enough to "solve" the original bug report, and it does seem
> like it could be back-patched, while we're certainly not going to risk
> back-patching anything as portability-fraught as introducing mutexes.

Sure. I'm unable to do much pg work at the moment, but that should ease
quite soon.

> [1] https://www.postgresql.org/message-id/flat/20200227180100.zyvjwzcpiokfsqm2%40alap3.anarazel.de

-- 
Fabien Coelho - CRI, MINES ParisTech
Fabien COELHO <coelho@cri.ensmp.fr> writes:
>> What remains that we could do now, and perhaps back-patch, is point (2)
>> i.e. disallow digits as the first character of a pgbench variable name.

> I'm fine with that.

>> That would be enough to "solve" the original bug report, and it does seem
>> like it could be back-patched, while we're certainly not going to risk
>> back-patching anything as portability-fraught as introducing mutexes.

> Sure.

OK.  I've pushed a patch that just does that much, and marked the
commitfest entry closed.  After the other thing lands, please rebase and
resubmit what remains of this patch.

			regards, tom lane