Thread: benchmarking Flex practices
I decided to do some experiments with how we use Flex. The main takeaway is that backtracking, which we removed in 2005, doesn't seem to matter anymore for the core scanner. Also, state table size is of marginal importance.

Using the information_schema Flex+Bison microbenchmark from Tom [1], I tested removing most of the "fail" rules designed to avoid backtracking ("decimalfail" is needed by PL/pgSQL). Below are the best times (most runs within 1%), followed by postgres binary size. The numbers are with Flex 2.5.35 on MacOS, no asserts or debugging symbols.

HEAD:                                      1.53s  7139132 bytes
HEAD minus "fail" rules (patch attached):  1.53s  6971204 bytes

Surprisingly, it has the same performance and a much smaller binary. The size difference is because the size of the elements of the yy_transition array is constrained by the number of elements in the array. Since there are now fewer than INT16_MAX state transitions, the struct members go from 32 bit:

struct yy_trans_info
	{
	flex_int32_t yy_verify;
	flex_int32_t yy_nxt;
	};
static yyconst struct yy_trans_info yy_transition[37045] = ...

to 16 bit:

struct yy_trans_info
	{
	flex_int16_t yy_verify;
	flex_int16_t yy_nxt;
	};
static yyconst struct yy_trans_info yy_transition[31763] = ...

To test if array size was the deciding factor, I tried bloating it by essentially undoing commit a5ff502fcea. Doing so produced an array with 62583 elements and 32-bit members, so nearly quadruple in size, and it was still not much slower than HEAD:

HEAD minus "fail" rules, minus %xusend/%xuiend:  1.56s  7343932 bytes

While at it, I repeated the benchmark with different Flex flags:

HEAD, plus -Cf:                      1.60s  6995788 bytes
HEAD, minus "fail" rules, plus -Cf:  1.59s  6979396 bytes
HEAD, plus -Cfe:                     1.65s  6868804 bytes

So this recommendation of the Flex manual (-CF) still holds true. It's worth noting that using perfect hashing for keyword lookup (20% faster) had a much bigger effect than switching from -Cfe to -CF (7% faster).

It would be nice to have confirmation to make sure I didn't err somewhere, and to try a more real-world benchmark. (Also, for the moment I only have Linux on a virtual machine.) The regression tests pass, but some comments are now wrong. If it's confirmed that backtracking doesn't matter for recent Flex/hardware, disregarding it would make maintenance of our scanners a bit easier.

[1] https://www.postgresql.org/message-id/14616.1558560331%40sss.pgh.pa.us

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
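To make the size arithmetic above concrete: multiplying the reported element counts by the two element widths reproduces essentially all of the observed binary shrinkage. The following standalone C sketch shows the calculation; the struct names are mock-ups mirroring the flex output quoted above, not PostgreSQL code.

	#include <stdio.h>
	#include <stdint.h>

	/* Mock-ups of the two element layouts flex can emit for yy_transition;
	 * the array lengths are the ones reported in the message above. */
	struct trans32 { int32_t yy_verify; int32_t yy_nxt; };
	struct trans16 { int16_t yy_verify; int16_t yy_nxt; };

	int
	main(void)
	{
		unsigned long big = 37045;		/* elements with 32-bit members */
		unsigned long small = 31763;	/* elements with 16-bit members */
		unsigned long before = big * (unsigned long) sizeof(struct trans32);
		unsigned long after = small * (unsigned long) sizeof(struct trans16);

		printf("32-bit table: %lu bytes\n", before);		/* 296360 */
		printf("16-bit table: %lu bytes\n", after);			/* 127052 */
		printf("saved:        %lu bytes\n", before - after);	/* 169308 */
		return 0;
	}

The ~169kB table difference is in the same ballpark as the 167928-byte gap between the two binaries reported above.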
John Naylor <john.naylor@2ndquadrant.com> writes:
> I decided to do some experiments with how we use Flex. The main
> takeaway is that backtracking, which we removed in 2005, doesn't seem
> to matter anymore for the core scanner. Also, state table size is of
> marginal importance.

Huh.  That's really interesting, because removing backtracking was a demonstrable, significant win when we did it [1].  I wonder what has changed?  I'd be prepared to believe that today's machines are more sensitive to the amount of cache space eaten by the tables --- but that idea seems contradicted by your result that the table size isn't important.  (I'm wishing I'd documented the test case I used in 2005...)

> The size difference is because the size of the elements of the
> yy_transition array is constrained by the number of elements in the
> array. Since there are now fewer than INT16_MAX state transitions, the
> struct members go from 32 bit:

> static yyconst struct yy_trans_info yy_transition[37045] = ...

> to 16 bit:

> static yyconst struct yy_trans_info yy_transition[31763] = ...

Hm.  Smaller binary is definitely nice, but 31763 is close enough to 32768 that I'd have little faith in the optimization surviving for long.  Is there any way we could buy back some more transitions?

> It would be nice to have confirmation to make sure I didn't err
> somewhere, and to try a more real-world benchmark.

I don't see much wrong with using information_schema.sql as a parser/lexer benchmark case.  We should try to confirm the results on other platforms though.

			regards, tom lane

[1] https://www.postgresql.org/message-id/8652.1116865895@sss.pgh.pa.us
Hi,

On 2019-06-20 10:52:54 -0400, Tom Lane wrote:
> John Naylor <john.naylor@2ndquadrant.com> writes:
> > It would be nice to have confirmation to make sure I didn't err
> > somewhere, and to try a more real-world benchmark.
>
> I don't see much wrong with using information_schema.sql as a parser/lexer
> benchmark case. We should try to confirm the results on other platforms
> though.

Might be worth also testing with a more repetitive test case to measure both cache locality and branch prediction. I assume that with information_schema there's enough variability that these effects play a smaller role. And there are plenty of real-world cases where there's a *lot* of very similar statements being parsed over and over. I'd probably just measure the statements pgbench generates or such.

Greetings,

Andres Freund
On Fri, Jun 21, 2019 at 12:02 AM Andres Freund <andres@anarazel.de> wrote:
> Might be worth also testing with a more repetitive test case to measure
> both cache locality and branch prediction. I assume that with
> information_schema there's enough variability that these effects play a
> smaller role. And there are plenty of real-world cases where there's a *lot*
> of very similar statements being parsed over and over. I'd probably just
> measure the statements pgbench generates or such.

I tried benchmarking with a query string containing just

BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 1;
SELECT abalance FROM pgbench_accounts WHERE aid = 1;
UPDATE pgbench_tellers SET tbalance = tbalance + 1 WHERE tid = 1;
UPDATE pgbench_branches SET bbalance = bbalance + 1 WHERE bid = 1;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1, 1, 1, 1, CURRENT_TIMESTAMP);
END;

repeated about 500 times. With this, backtracking is about 3% slower:

HEAD:               1.15s
patch:              1.19s
patch + huge array: 1.19s

That's possibly significant enough to be evidence for your assumption, as well as to persuade us to keep things as they are.

On Thu, Jun 20, 2019 at 10:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Huh. That's really interesting, because removing backtracking was a
> demonstrable, significant win when we did it [1]. I wonder what has
> changed? I'd be prepared to believe that today's machines are more
> sensitive to the amount of cache space eaten by the tables --- but that
> idea seems contradicted by your result that the table size isn't
> important. (I'm wishing I'd documented the test case I used in 2005...)

It's possible the code used with backtracking is better predicted than 15 years ago, but my uneducated hunch is that our Bison grammar has gotten much worse in cache misses and branch prediction than the scanner has over those 15 years. That, plus the recent keyword lookup optimization, might have caused parsing to be completely dominated by Bison. If that's the case, the 3% slowdown above could be a significant portion of scanning in isolation.

> Hm. Smaller binary is definitely nice, but 31763 is close enough to
> 32768 that I'd have little faith in the optimization surviving for long.
> Is there any way we could buy back some more transitions?

I tried quickly ripping out the unicode escape support entirely. It builds with warnings, but the point is to just get the size -- that produced an array with only 28428 elements, and that's keeping all the no-backup rules intact.

This might be unworkable and/or ugly, but I wonder if it's possible to pull unicode escape handling into the parsing stage, with "UESCAPE" being a keyword token that we have to peek ahead to check for. I'll look for other rules that could be more easily optimized, but I'm not terribly optimistic.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
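As a concrete illustration of how such parser-only timings can be driven, here is a hypothetical C-language extension function in the spirit of the drive_parser() harness mentioned later in this thread. It is a sketch, not the actual harness: the function name and SQL signature are invented, and a real harness would manage memory more carefully.

	#include "postgres.h"

	#include "fmgr.h"
	#include "parser/parser.h"
	#include "utils/builtins.h"

	PG_MODULE_MAGIC;

	PG_FUNCTION_INFO_V1(drive_parser_sketch);

	Datum
	drive_parser_sketch(PG_FUNCTION_ARGS)
	{
		char	   *query = text_to_cstring(PG_GETARG_TEXT_PP(0));
		int32		loops = PG_GETARG_INT32(1);
		int32		i;

		for (i = 0; i < loops; i++)
		{
			/* raw_parser() runs only the lexer and grammar, no planning */
			List	   *parsetree = raw_parser(query);

			(void) parsetree;	/* discard the result; we only want the cost */
		}
		/* note: a real harness would reset a memory context per iteration */
		PG_RETURN_VOID();
	}

Declared with CREATE FUNCTION ... LANGUAGE C STRICT, such a function could then be timed under psql's \timing with the repeated query block passed in as a single string.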
I wrote:
> I'll look for other rules that could be more
> easily optimized, but I'm not terribly optimistic.

I found a possible other way to bring the size of the transition table under 32k entries while keeping the existing no-backup rules in place: Replace the "quotecontinue" rule with a new state. In the attached draft patch, when Flex encounters a quote while inside any kind of quoted string, it saves the current state and enters %xqs (think 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it reenters the previous state and continues to slurp the string, otherwise, it throws back everything and returns the string it just exited. Doing it this way is a bit uglier, but with some extra commentary it might not be too bad.

The array is now 30883 entries. That's still a bit close for comfort, but shrinks the binary by 171kB on Linux x86-64 with Flex 2.6.4.

The bad news is I have these baffling backup states in my new rules:

State #133 is non-accepting -
 associated rule line numbers:
	551 554 564
 out-transitions: [ \000-\377 ]
 jam-transitions: EOF []

State #162 is non-accepting -
 associated rule line numbers:
	551 554 564
 out-transitions: [ \000-\377 ]
 jam-transitions: EOF []

2 backing up (non-accepting) states.

I already explicitly handle EOF, so I don't know what it's trying to tell me. If it can be fixed while keeping the array size, I'll do performance tests.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
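To visualize the mechanism described above, here is a minimal standalone C sketch of the lookahead logic, hand-rolled rather than flex-generated. It simplifies {whitespace_with_newline} to bare whitespace (the real pattern also admits comments), so it is an illustration of the idea, not the patch's code.

	#include <stdio.h>

	/*
	 * Sketch of the 'quotestop' idea: after a closing quote, look ahead
	 * for whitespace containing a newline followed by another quote.  If
	 * found, the literal continues (flex would re-enter the saved string
	 * state); otherwise the lookahead is thrown back (flex's yyless(0))
	 * and the literal ends at the quote.
	 */
	static const char *
	quote_continues(const char *after_quote)
	{
		const char *p = after_quote;
		int			saw_newline = 0;

		while (*p == ' ' || *p == '\t' || *p == '\n')
		{
			if (*p == '\n')
				saw_newline = 1;
			p++;
		}
		if (saw_newline && *p == '\'')
			return p + 1;		/* continuation: resume slurping here */
		return NULL;			/* no continuation: throw everything back */
	}

	int
	main(void)
	{
		const char *resume = quote_continues("\n    'more text...");

		printf(resume ? "string continues\n" : "string ended at quote\n");
		return 0;
	}

The same throw-back-or-resume decision is what the %xqs start condition encodes in the flex transition tables.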
I wrote:
> > I'll look for other rules that could be more
> > easily optimized, but I'm not terribly optimistic.
>
> I found a possible other way to bring the size of the transition table
> under 32k entries while keeping the existing no-backup rules in place:
> Replace the "quotecontinue" rule with a new state. In the attached
> draft patch, when Flex encounters a quote while inside any kind of
> quoted string, it saves the current state and enters %xqs (think
> 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it
> reenters the previous state and continues to slurp the string,
> otherwise, it throws back everything and returns the string it just
> exited. Doing it this way is a bit uglier, but with some extra
> commentary it might not be too bad.

I had an epiphany and managed to get rid of the backup states. Regression tests pass. The array is down to 30367 entries and the binary is smaller by 172kB on Linux x86-64. Performance is identical to master on both tests mentioned upthread. I'll clean this up and add it to the commitfest.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
I wrote:
> > I found a possible other way to bring the size of the transition table
> > under 32k entries while keeping the existing no-backup rules in place:
> > Replace the "quotecontinue" rule with a new state. In the attached
> > draft patch, when Flex encounters a quote while inside any kind of
> > quoted string, it saves the current state and enters %xqs (think
> > 'quotestop'). If it then sees {whitespace_with_newline}{quote}, it
> > reenters the previous state and continues to slurp the string,
> > otherwise, it throws back everything and returns the string it just
> > exited. Doing it this way is a bit uglier, but with some extra
> > commentary it might not be too bad.
>
> I had an epiphany and managed to get rid of the backup states.
> Regression tests pass. The array is down to 30367 entries and the
> binary is smaller by 172kB on Linux x86-64. Performance is identical
> to master on both tests mentioned upthread. I'll clean this up and add
> it to the commitfest.

For the commitfest:

0001 is a small patch to remove some unneeded generality from the current rules. This lowers the number of elements in the yy_transition array from 37045 to 36201.

0002 is a cleaned-up version of the above, bringing the size down to 29521. I haven't changed psqlscan.l or pgc.l, in case this approach is changed or rejected.

With the two together, the binary is about 175kB smaller than on HEAD.

I also couldn't resist playing around with the idea upthread to handle unicode escapes in parser.c, which further reduces the number of states down to 21068. That allows some headroom for future additions without going back to 32-bit types in the transition array. It mostly works, but it's quite ugly and breaks the token position handling for unicode escape syntax errors, so it's not in a state to share.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
John Naylor <john.naylor@2ndquadrant.com> writes:
> 0001 is a small patch to remove some unneeded generality from the
> current rules. This lowers the number of elements in the yy_transition
> array from 37045 to 36201.

I don't particularly like 0001.  The two bits like this

-whitespace		({space}+|{comment})
+whitespace		({space}|{comment})

seem likely to create performance problems for runs of whitespace, in that the lexer will now have to execute the associated action once per space character, not just once for the whole run. Those actions are empty, but I don't think flex optimizes for that, and it's really flex's per-action overhead that I'm worried about. Note the comment in the "Performance" section of the flex manual:

    Another area where the user can increase a scanner's performance
    (and one that's easier to implement) arises from the fact that the
    longer the tokens matched, the faster the scanner will run. This is
    because with long tokens the processing of most input characters
    takes place in the (short) inner scanning loop, and does not often
    have to go through the additional work of setting up the scanning
    environment (e.g., `yytext') for the action.

There are a bunch of higher-order productions that use "{whitespace}*", which is surely a bit redundant given the contents of {whitespace}. But maybe we could address that by replacing "{whitespace}*" with "{opt_whitespace}" defined as

opt_whitespace	({space}*|{comment})

Not sure what impact if any that'd have on table size, but I'm quite sure that {whitespace} was defined with an eye to avoiding unnecessary lexer action cycles.

As for the other two bits that are like

-<xe>. {
-			/* This is only needed for \ just before EOF */
+<xe>\\ {

my recollection is that those productions are defined that way to avoid a flex warning about not all possible input characters being accounted for in the <xe> (resp. <xdolq>) state. Maybe that warning is flex-version-dependent, or maybe this was just a worry and not something that actually produced a warning ... but I'm hesitant to change it. If we ever did get to flex's default action, that action is to echo the current input character to stdout, which would be Very Bad.

As far as I can see, the point of 0002 is to have just one set of flex rules for the various variants of quotecontinue processing. That sounds OK, though I'm a bit surprised it makes this much difference in the table size. I would suggest that "state_before" needs a less generic name (maybe "state_before_xqs"?) and more than no comment. Possibly more to the point, it's not okay to have static state variables in the core scanner, so that variable needs to be kept in yyextra. (Don't remember offhand whether it's any more acceptable in the other scanners.)

			regards, tom lane
On Wed, Jul 3, 2019 at 5:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> John Naylor <john.naylor@2ndquadrant.com> writes:
> > 0001 is a small patch to remove some unneeded generality from the
> > current rules. This lowers the number of elements in the yy_transition
> > array from 37045 to 36201.
>
> I don't particularly like 0001. The two bits like this
>
> -whitespace		({space}+|{comment})
> +whitespace		({space}|{comment})
>
> seem likely to create performance problems for runs of whitespace, in that
> the lexer will now have to execute the associated action once per space
> character, not just once for the whole run.

Okay.

> There are a bunch of higher-order productions that use "{whitespace}*",
> which is surely a bit redundant given the contents of {whitespace}.
> But maybe we could address that by replacing "{whitespace}*" with
> "{opt_whitespace}" defined as
>
> opt_whitespace	({space}*|{comment})
>
> Not sure what impact if any that'd have on table size, but I'm quite sure
> that {whitespace} was defined with an eye to avoiding unnecessary
> lexer action cycles.

It turns out that {opt_whitespace} as defined above is not equivalent to {whitespace}*, since the former is either a single comment or a single run of 0 or more whitespace chars (if I understand correctly). Using {opt_whitespace} for the UESCAPE rules on top of v3-0002, the regression tests pass, but queries like this fail with a syntax error:

# select U&'d!0061t!+000061' uescape --comment
'!';

There was in fact a substantial size reduction, though, so for curiosity's sake I tried just replacing {whitespace}* with {space}* in the UESCAPE rules, and the table shrank from 30367 (that's with 0002 only) to 24661.

> As for the other two bits that are like
>
> -<xe>. {
> -			/* This is only needed for \ just before EOF */
> +<xe>\\ {
>
> my recollection is that those productions are defined that way to avoid a
> flex warning about not all possible input characters being accounted for
> in the <xe> (resp. <xdolq>) state. Maybe that warning is
> flex-version-dependent, or maybe this was just a worry and not something
> that actually produced a warning ... but I'm hesitant to change it.
> If we ever did get to flex's default action, that action is to echo the
> current input character to stdout, which would be Very Bad.

FWIW, I tried Flex 2.5.35 and 2.6.4 with no warnings, and I did get a warning when I deleted either of those two rules. I'll leave those changes out for now, since they were only good for ~500 fewer elements in the transition array.

> As far as I can see, the point of 0002 is to have just one set of
> flex rules for the various variants of quotecontinue processing.
> That sounds OK, though I'm a bit surprised it makes this much difference
> in the table size. I would suggest that "state_before" needs a less
> generic name (maybe "state_before_xqs"?) and more than no comment.
> Possibly more to the point, it's not okay to have static state variables
> in the core scanner, so that variable needs to be kept in yyextra.
> (Don't remember offhand whether it's any more acceptable in the other
> scanners.)

Ah yes, I got this idea from the ECPG scanner, which is not reentrant. Will fix.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Jul 3, 2019 at 5:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> As far as I can see, the point of 0002 is to have just one set of
> flex rules for the various variants of quotecontinue processing.
> That sounds OK, though I'm a bit surprised it makes this much difference
> in the table size. I would suggest that "state_before" needs a less
> generic name (maybe "state_before_xqs"?) and more than no comment.
> Possibly more to the point, it's not okay to have static state variables
> in the core scanner, so that variable needs to be kept in yyextra.

v4-0001 is basically the same as v3-0002, with the state variable in yyextra. Since follow-on patches use it as well, I've named it state_before_quote_stop. I failed to come up with a nicer short name. With this applied, the transition table is reduced from 37045 to 30367.

Since that's uncomfortably close to the 32k limit for 16-bit members, I hacked away further at UESCAPE bloat. 0002 unifies xusend and xuiend by saving the state of xui as well. This actually causes a performance regression, but it's more of a refactoring patch to avoid having to create two additional start conditions in 0003 (of course it could be done that way if desired, but the savings won't be as great). In any case, the table is now down to 26074.

0003 creates a separate start condition so that UESCAPE and the expected quoted character after it are detected in separate states. This allows us to use standard whitespace-skipping techniques and also to greatly simplify the uescapefail rule. The final size of the table is 23696. Removing UESCAPE entirely results in 21860, so that is likely the most compact this feature can get.

Performance is very similar to HEAD. Parsing the information schema might be a hair faster and pgbench-like queries with simple strings a hair slower, but the difference seems within the noise of variation. Parsing strings with UESCAPE likewise seems about the same.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
John Naylor <john.naylor@2ndquadrant.com> writes:
> [ v4 patches for trimming lexer table size ]

I reviewed this and it looks pretty solid.  One gripe I have is that I think it's best to limit backup-prevention tokens such as quotecontinuefail so that they match only exact prefixes of their "success" tokens.  This seems clearer to me, and in at least some cases it can save a few flex states.  The attached v5 patch does it like that and gets us down to 22331 states (from 23696).  In some places it looks like you did that to avoid writing an explicit "{other}" match rule for an exclusive state, but I think it's better for readability and separation of concerns to go ahead and have those explicit rules (and it seems to make no difference table-size-wise).

I also made some cosmetic changes (mostly improving comments) and smashed the patch series down to 1 patch, because I preferred to review it that way and we're not really going to commit these separately.

I did a little bit of portability testing, to the extent of verifying that the oldest and newest Flex versions I have handy (2.5.33 and 2.6.4) agree on the table size change and get through regression tests.  So I think we should be good from that end.

We still need to propagate these changes into the psql and ecpg lexers, but I assume you were waiting to agree on the core patch before touching those.  If you're good with the changes I made here, have at it.

			regards, tom lane

diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l
index e1cae85..899da09 100644
--- a/src/backend/parser/scan.l
+++ b/src/backend/parser/scan.l
@@ -168,12 +168,14 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
  * <xd> delimited identifiers (double-quoted identifiers)
  * <xh> hexadecimal numeric string
  * <xq> standard quoted strings
+ * <xqs> quote stop (detect continued strings)
  * <xe> extended quoted strings (support backslash escape sequences)
  * <xdolq> $foo$ quoted strings
  * <xui> quoted identifier with Unicode escapes
- * <xuiend> end of a quoted identifier with Unicode escapes, UESCAPE can follow
  * <xus> quoted string with Unicode escapes
- * <xusend> end of a quoted string with Unicode escapes, UESCAPE can follow
+ * <xuend> end of a quoted string or identifier with Unicode escapes,
+ *   UESCAPE can follow
+ * <xuchar> expecting escape character literal after UESCAPE
  * <xeu> Unicode surrogate pair in extended quoted string
  *
  * Remember to add an <<EOF>> case whenever you add a new exclusive state!
@@ -185,12 +187,13 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner);
 %x xd
 %x xh
 %x xq
+%x xqs
 %x xe
 %x xdolq
 %x xui
-%x xuiend
 %x xus
-%x xusend
+%x xuend
+%x xuchar
 %x xeu
 
 /*
@@ -231,19 +234,18 @@ special_whitespace		({space}+|{comment}{newline})
 horiz_whitespace		({horiz_space}|{comment})
 whitespace_with_newline	({horiz_whitespace}*{newline}{special_whitespace}*)
 
+quote			'
+/* If we see {quote} then {quotecontinue}, the quoted string continues */
+quotecontinue	{whitespace_with_newline}{quote}
+
 /*
- * To ensure that {quotecontinue} can be scanned without having to back up
- * if the full pattern isn't matched, we include trailing whitespace in
- * {quotestop}.  This matches all cases where {quotecontinue} fails to match,
- * except for {quote} followed by whitespace and just one "-" (not two,
- * which would start a {comment}).  To cover that we have {quotefail}.
- * The actions for {quotestop} and {quotefail} must throw back characters
- * beyond the quote proper.
+ * {quotecontinuefail} is needed to avoid lexer backup when we fail to match
+ * {quotecontinue}.  It might seem that this could just be {whitespace}*,
+ * but if there's a dash after {whitespace_with_newline}, it must be consumed
+ * to see if there's another dash --- which would start a {comment} and thus
+ * allow continuation of the {quotecontinue} token.
  */
-quote			'
-quotestop		{quote}{whitespace}*
-quotecontinue	{quote}{whitespace_with_newline}{quote}
-quotefail		{quote}{whitespace}*"-"
+quotecontinuefail	{whitespace}*"-"?
 
 /* Bit string
  * It is tempting to scan the string for only those characters
@@ -304,10 +306,15 @@ xdstop			{dquote}
 xddouble		{dquote}{dquote}
 xdinside		[^"]+
 
-/* Unicode escapes */
-uescape			[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']{quote}
+/* Optional UESCAPE after a quoted string or identifier with Unicode escapes */
+uescape			[uU][eE][sS][cC][aA][pP][eE]
+/* error rule to avoid backup */
+uescapefail		[uU][eE][sS][cC][aA][pP]|[uU][eE][sS][cC][aA]|[uU][eE][sS][cC]|[uU][eE][sS]|[uU][eE]|[uU]
+
+/* escape character literal */
+uescchar		{quote}[^']{quote}
 /* error rule to avoid backup */
-uescapefail		[uU][eE][sS][cC][aA][pP][eE]{whitespace}*"-"|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*|[uU][eE][sS][cC][aA][pP]|[uU][eE][sS][cC][aA]|[uU][eE][sS][cC]|[uU][eE][sS]|[uU][eE]|[uU]
+uesccharfail	{quote}[^']|{quote}
 
 /* Quoted identifier with Unicode escapes */
 xuistart		[uU]&{dquote}
@@ -315,10 +322,6 @@ xuistart		[uU]&{dquote}
 /* Quoted string with Unicode escapes */
 xusstart		[uU]&{quote}
 
-/* Optional UESCAPE after a quoted string or identifier with Unicode escapes. */
-xustop1			{uescapefail}?
-xustop2			{uescape}
-
 /* error rule to avoid backup */
 xufailed		[uU]&
 
@@ -476,21 +479,10 @@ other			.
 					startlit();
 					addlitchar('b', yyscanner);
 				}
-<xb>{quotestop}	|
-<xb>{quotefail} {
-					yyless(1);
-					BEGIN(INITIAL);
-					yylval->str = litbufdup(yyscanner);
-					return BCONST;
-				}
 <xh>{xhinside}	|
 <xb>{xbinside}	{
 					addlit(yytext, yyleng, yyscanner);
 				}
-<xh>{quotecontinue}	|
-<xb>{quotecontinue}	{
-					/* ignore */
-				}
 <xb><<EOF>>		{ yyerror("unterminated bit string literal"); }
 
 {xhstart}		{
@@ -505,13 +497,6 @@ other			.
 					startlit();
 					addlitchar('x', yyscanner);
 				}
-<xh>{quotestop}	|
-<xh>{quotefail} {
-					yyless(1);
-					BEGIN(INITIAL);
-					yylval->str = litbufdup(yyscanner);
-					return XCONST;
-				}
 <xh><<EOF>>		{ yyerror("unterminated hexadecimal string literal"); }
 
 {xnstart}		{
@@ -568,53 +553,71 @@ other			.
 					BEGIN(xus);
 					startlit();
 				}
-<xq,xe>{quotestop}	|
-<xq,xe>{quotefail} {
-					yyless(1);
-					BEGIN(INITIAL);
+
+<xb,xh,xq,xe,xus>{quote} {
 					/*
-					 * check that the data remains valid if it might have been
-					 * made invalid by unescaping any chars.
+					 * When we are scanning a quoted string and see an end
+					 * quote, we must look ahead for a possible continuation.
+					 * If we don't see one, we know the end quote was in fact
+					 * the end of the string.  To reduce the lexer table size,
+					 * we use a single "xqs" state to do the lookahead for all
+					 * types of strings.
 					 */
-					if (yyextra->saw_non_ascii)
-						pg_verifymbstr(yyextra->literalbuf,
-									   yyextra->literallen,
-									   false);
-					yylval->str = litbufdup(yyscanner);
-					return SCONST;
-				}
-<xus>{quotestop} |
-<xus>{quotefail} {
-					/* throw back all but the quote */
-					yyless(1);
-					/* xusend state looks for possible UESCAPE */
-					BEGIN(xusend);
+					yyextra->state_before_quote_stop = YYSTATE;
+					BEGIN(xqs);
 				}
-<xusend>{whitespace} {
-					/* stay in xusend state over whitespace */
+<xqs>{quotecontinue} {
+					/*
+					 * Found a quote continuation, so return to the in-quote
+					 * state and continue scanning the literal.
+					 */
+					BEGIN(yyextra->state_before_quote_stop);
 				}
-<xusend><<EOF>> |
-<xusend>{other} |
-<xusend>{xustop1} {
-					/* no UESCAPE after the quote, throw back everything */
+<xqs>{quotecontinuefail} |
+<xqs>{other} |
+<xqs><<EOF>> {
+					/*
+					 * Failed to see a quote continuation.  Throw back
+					 * everything after the end quote, and handle the string
+					 * according to the state we were in previously.
+					 */
 					yyless(0);
-					BEGIN(INITIAL);
-					yylval->str = litbuf_udeescape('\\', yyscanner);
-					return SCONST;
-				}
-<xusend>{xustop2} {
-					/* found UESCAPE after the end quote */
-					BEGIN(INITIAL);
-					if (!check_uescapechar(yytext[yyleng - 2]))
+
+					switch (yyextra->state_before_quote_stop)
 					{
-						SET_YYLLOC();
-						ADVANCE_YYLLOC(yyleng - 2);
-						yyerror("invalid Unicode escape character");
+						case xb:
+							BEGIN(INITIAL);
+							yylval->str = litbufdup(yyscanner);
+							return BCONST;
+						case xh:
+							BEGIN(INITIAL);
+							yylval->str = litbufdup(yyscanner);
+							return XCONST;
+						case xe:
+							/* fallthrough */
+						case xq:
+							BEGIN(INITIAL);
+
+							/*
+							 * Check that the data remains valid if it
+							 * might have been made invalid by unescaping
+							 * any chars.
+							 */
+							if (yyextra->saw_non_ascii)
+								pg_verifymbstr(yyextra->literalbuf,
											   yyextra->literallen,
											   false);
+							yylval->str = litbufdup(yyscanner);
+							return SCONST;
+						case xus:
+							/* xuend state looks for possible UESCAPE */
+							BEGIN(xuend);
+							break;
+						default:
+							yyerror("unhandled previous state in xqs");
					}
-					yylval->str = litbuf_udeescape(yytext[yyleng - 2],
-												   yyscanner);
-					return SCONST;
 				}
+
 <xq,xe,xus>{xqdouble} {
 					addlitchar('\'', yyscanner);
 				}
@@ -693,9 +696,6 @@ other			.
 					if (c == '\0' || IS_HIGHBIT_SET(c))
 						yyextra->saw_non_ascii = true;
 				}
-<xq,xe,xus>{quotecontinue} {
-					/* ignore */
-				}
 <xe>. {
 					/* This is only needed for \ just before EOF */
 					addlitchar(yytext[0], yyscanner);
@@ -770,53 +770,89 @@ other			.
 					return IDENT;
 				}
 <xui>{dquote} {
-					yyless(1);
-					/* xuiend state looks for possible UESCAPE */
-					BEGIN(xuiend);
+					/* xuend state looks for possible UESCAPE */
+					yyextra->state_before_quote_stop = YYSTATE;
+					BEGIN(xuend);
 				}
-<xuiend>{whitespace} {
-					/* stay in xuiend state over whitespace */
+
+<xuend,xuchar>{whitespace} {
+					/* stay in xuend or xuchar state over whitespace */
 				}
-<xuiend><<EOF>> |
-<xuiend>{other} |
-<xuiend>{xustop1} {
+<xuend>{uescapefail} |
+<xuend>{other} |
+<xuend><<EOF>> {
 					/* no UESCAPE after the quote, throw back everything */
-					char	   *ident;
-					int			identlen;
-
 					yyless(0);
-					BEGIN(INITIAL);
-					if (yyextra->literallen == 0)
-						yyerror("zero-length delimited identifier");
-					ident = litbuf_udeescape('\\', yyscanner);
-					identlen = strlen(ident);
-					if (identlen >= NAMEDATALEN)
-						truncate_identifier(ident, identlen, true);
-					yylval->str = ident;
-					return IDENT;
+
+					if (yyextra->state_before_quote_stop == xus)
+					{
+						BEGIN(INITIAL);
+						yylval->str = litbuf_udeescape('\\', yyscanner);
+						return SCONST;
+					}
+					else if (yyextra->state_before_quote_stop == xui)
+					{
+						char	   *ident;
+						int			identlen;
+
+						BEGIN(INITIAL);
+						if (yyextra->literallen == 0)
+							yyerror("zero-length delimited identifier");
+						ident = litbuf_udeescape('\\', yyscanner);
+						identlen = strlen(ident);
+						if (identlen >= NAMEDATALEN)
+							truncate_identifier(ident, identlen, true);
+						yylval->str = ident;
+						return IDENT;
+					}
+					else
+						yyerror("unhandled previous state in xuend");
 				}
-<xuiend>{xustop2} {
+<xuend>{uescape} {
 					/* found UESCAPE after the end quote */
-					char	   *ident;
-					int			identlen;
-
-					BEGIN(INITIAL);
-					if (yyextra->literallen == 0)
-						yyerror("zero-length delimited identifier");
+					BEGIN(xuchar);
+				}
+<xuchar>{uescchar} {
+					/* found escape character literal after UESCAPE */
 					if (!check_uescapechar(yytext[yyleng - 2]))
 					{
 						SET_YYLLOC();
 						ADVANCE_YYLLOC(yyleng - 2);
 						yyerror("invalid Unicode escape character");
 					}
-					ident = litbuf_udeescape(yytext[yyleng - 2], yyscanner);
-					identlen = strlen(ident);
-					if (identlen >= NAMEDATALEN)
-						truncate_identifier(ident, identlen, true);
-					yylval->str = ident;
-					return IDENT;
+
+					if (yyextra->state_before_quote_stop == xus)
+					{
+						BEGIN(INITIAL);
+						yylval->str = litbuf_udeescape(yytext[yyleng - 2],
													   yyscanner);
+						return SCONST;
+					}
+					else if (yyextra->state_before_quote_stop == xui)
+					{
+						char	   *ident;
+						int			identlen;
+
+						BEGIN(INITIAL);
+						if (yyextra->literallen == 0)
+							yyerror("zero-length delimited identifier");
+						ident = litbuf_udeescape(yytext[yyleng - 2], yyscanner);
+						identlen = strlen(ident);
+						if (identlen >= NAMEDATALEN)
+							truncate_identifier(ident, identlen, true);
+						yylval->str = ident;
+						return IDENT;
+					}
+					else
+						yyerror("unhandled previous state in xuchar");
+				}
+<xuchar>{uesccharfail} |
+<xuchar>{other} |
+<xuchar><<EOF>> {
+					SET_YYLLOC();
+					yyerror("missing or invalid Unicode escape character");
 				}
+
 <xd,xui>{xddouble}	{
 					addlitchar('"', yyscanner);
 				}
diff --git a/src/include/parser/scanner.h b/src/include/parser/scanner.h
index 731a2bd..72c2a28 100644
--- a/src/include/parser/scanner.h
+++ b/src/include/parser/scanner.h
@@ -99,6 +99,7 @@ typedef struct core_yy_extra_type
 	int			literallen;		/* actual current string length */
 	int			literalalloc;	/* current allocated buffer size */
 
+	int			state_before_quote_stop;	/* start cond. before end quote */
 	int			xcdepth;		/* depth of nesting in slash-star comments */
 	char	   *dolqstart;		/* current $foo$ quote start string */
On Wed, Jul 10, 2019 at 3:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> John Naylor <john.naylor@2ndquadrant.com> writes:
> > [ v4 patches for trimming lexer table size ]
>
> I reviewed this and it looks pretty solid. One gripe I have is
> that I think it's best to limit backup-prevention tokens such as
> quotecontinuefail so that they match only exact prefixes of their
> "success" tokens. This seems clearer to me, and in at least some cases
> it can save a few flex states. The attached v5 patch does it like that
> and gets us down to 22331 states (from 23696). In some places it looks
> like you did that to avoid writing an explicit "{other}" match rule for
> an exclusive state, but I think it's better for readability and
> separation of concerns to go ahead and have those explicit rules
> (and it seems to make no difference table-size-wise).

Looks good to me.

> We still need to propagate these changes into the psql and ecpg lexers,
> but I assume you were waiting to agree on the core patch before touching
> those. If you're good with the changes I made here, have at it.

I just made a couple additional cosmetic adjustments that made sense when diff'ing with the other scanners. make check-world passes. Some notes:

The pre-existing ecpg var "state_before" was a bit confusing when combined with the new var "state_before_quote_stop", and the former is also used with C-comments, so I decided to go with "state_before_lit_start" and "state_before_lit_stop". Even though comments aren't literals, it's less of a stretch than referring to quotes. To keep things consistent, I went with the latter var in psql and core.

To get the regression tests to pass, I had to add this:

 psql_scan_in_quote(PsqlScanState state)
 {
-	return state->start_state != INITIAL;
+	return state->start_state != INITIAL &&
+		state->start_state != xqs;
 }

...otherwise with parens we sometimes don't get the right prompt and we get empty lines echoed. Adding xuend and xuchar here didn't seem to make a difference. There might be something subtle I'm missing, so I thought I'd mention it.

With the unicode escape rules brought over, the diff to the ecpg scanner is much cleaner now. The diff for the C-comment rules was still pretty messy in comparison, so I made an attempt to clean that up in 0002. A bit off-topic, but I thought I should offer that while it was fresh in my head.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
John Naylor <john.naylor@2ndquadrant.com> writes:
> The pre-existing ecpg var "state_before" was a bit confusing when
> combined with the new var "state_before_quote_stop", and the former is
> also used with C-comments, so I decided to go with
> "state_before_lit_start" and "state_before_lit_stop". Even though
> comments aren't literals, it's less of a stretch than referring to
> quotes. To keep things consistent, I went with the latter var in psql
> and core.

Hm, what do you think of "state_before_str_stop" instead?  It seems to me that both "quote" and "lit" are pretty specific terms, so maybe we need something a bit vaguer.

> To get the regression tests to pass, I had to add this:
> psql_scan_in_quote(PsqlScanState state)
> {
> -	return state->start_state != INITIAL;
> +	return state->start_state != INITIAL &&
> +		state->start_state != xqs;
> }
> ...otherwise with parens we sometimes don't get the right prompt and
> we get empty lines echoed. Adding xuend and xuchar here didn't seem to
> make a difference. There might be something subtle I'm missing, so I
> thought I'd mention it.

I think you would see a difference if the regression tests had any cases with blank lines between a Unicode string/ident and the associated UESCAPE and escape-character literal.

While poking at that, I also came across this unhappiness:

regression=# select u&'foo' uescape 'bogus';
regression'#

that is, psql thinks we're still in a literal at this point. That's because the uesccharfail rule eats "'b" and then we go to INITIAL state, so that consuming the last "'" puts us back in a string state. The backend would have thrown an error before parsing as far as the incomplete literal, so it doesn't care (or probably not, anyway), but that's not an option for psql.

My first reaction as to how to fix this was to rip the xuend and xuchar states out of psql, and let it just lex UESCAPE as an identifier and the escape-character literal like any other literal. psql doesn't need to account for the escape character's effect on the meaning of the Unicode literal, so it doesn't have any need to lex the sequence as one big token. I think the same is true of ecpg, though I've not looked really closely.

However, my second reaction was that maybe you were on to something upthread when you speculated about postponing de-escaping of Unicode literals into the grammar. If we did it like that then we would not need to have this difference between the backend and frontend lexers, and we'd not have to worry about what psql_scan_in_quote should do about the whitespace before and after UESCAPE, either.

So I'm feeling like maybe we should experiment to see what that solution looks like, before we commit to going in this direction. What do you think?

> With the unicode escape rules brought over, the diff to the ecpg
> scanner is much cleaner now. The diff for the C-comment rules was still
> pretty messy in comparison, so I made an attempt to clean that up in
> 0002. A bit off-topic, but I thought I should offer that while it was
> fresh in my head.

I didn't really review this, but it looked like a fairly plausible change of the same ilk, ie combine rules by adding memory of the previous start state.

			regards, tom lane
On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> John Naylor <john.naylor@2ndquadrant.com> writes:
> > The pre-existing ecpg var "state_before" was a bit confusing when
> > combined with the new var "state_before_quote_stop", and the former is
> > also used with C-comments, so I decided to go with
> > "state_before_lit_start" and "state_before_lit_stop". Even though
> > comments aren't literals, it's less of a stretch than referring to
> > quotes. To keep things consistent, I went with the latter var in psql
> > and core.
>
> Hm, what do you think of "state_before_str_stop" instead? It seems
> to me that both "quote" and "lit" are pretty specific terms, so
> maybe we need something a bit vaguer.

Sounds fine to me.

> While poking at that, I also came across this unhappiness:
>
> regression=# select u&'foo' uescape 'bogus';
> regression'#
>
> that is, psql thinks we're still in a literal at this point. That's
> because the uesccharfail rule eats "'b" and then we go to INITIAL
> state, so that consuming the last "'" puts us back in a string state.
> The backend would have thrown an error before parsing as far as the
> incomplete literal, so it doesn't care (or probably not, anyway),
> but that's not an option for psql.
>
> My first reaction as to how to fix this was to rip the xuend and
> xuchar states out of psql, and let it just lex UESCAPE as an
> identifier and the escape-character literal like any other literal.
> psql doesn't need to account for the escape character's effect on
> the meaning of the Unicode literal, so it doesn't have any need to
> lex the sequence as one big token. I think the same is true of ecpg
> though I've not looked really closely.
>
> However, my second reaction was that maybe you were on to something
> upthread when you speculated about postponing de-escaping of
> Unicode literals into the grammar. If we did it like that then
> we would not need to have this difference between the backend and
> frontend lexers, and we'd not have to worry about what
> psql_scan_in_quote should do about the whitespace before and after
> UESCAPE, either.
>
> So I'm feeling like maybe we should experiment to see what that
> solution looks like, before we commit to going in this direction.
> What do you think?

Given the above wrinkles, I thought it was worth trying. Attached is a rough patch (don't mind the #include mess yet :-) ) that works like this:

The lexer returns UCONST from xus and UIDENT from xui. The grammar has rules that are effectively:

SCONST { do nothing}
| UCONST { esc char is backslash }
| UCONST UESCAPE SCONST { esc char is from $3 }

...where UESCAPE is now an unreserved keyword. To prevent shift-reduce conflicts, I added UIDENT to the %nonassoc precedence list to match IDENT, and for UESCAPE I added a %left precedence declaration. Maybe there's a more principled way. I also added an unsigned char type to the %union, but it worked fine on my compiler without it.

litbuf_udeescape() and check_uescapechar() were moved to gram.y. The former had to be massaged to give error messages similar to HEAD. They're not quite identical, but the position info is preserved. Some of the functions I moved around don't seem to have any test coverage, so I should eventually do some work in that regard.

Notes:

-Binary size is very close to v6. That is to say the grammar tables grew by about the same amount the scanner table shrank, so the binary is still about 200kB smaller than HEAD.

-Performance is very close to v6 with the information_schema and pgbench-like queries with standard strings, which is to say also very close to HEAD. When the latter was changed to use Unicode escapes, however, it was about 15% slower than HEAD. That's a big regression and I haven't tried to pinpoint why.

-psql was changed to follow suit. It doesn't think it's inside a string with your too-long escape char above, and it removes all blank lines from this query output:

$ cat >> test-uesc-lit.sql
SELECT u&'!0041' uescape '!' as col
;

On HEAD and v6 I get this:

$ ./inst/bin/psql -a -f test-uesc-lit.sql
SELECT u&'!0041' uescape '!' as col
;
 col
-----
 A
(1 row)

-The ecpg changes here are only the bare minimum from HEAD to get it to compile, since I'm borrowing its additional token names (although they mean slightly different things). After a bit of experimentation, it's clear there's a bit more work needed to get it functional, and it's not easy to debug, so I'm putting that off until we decide whether this is the way forward.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
On 07/24/19 03:45, John Naylor wrote:
> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> However, my second reaction was that maybe you were on to something
>> upthread when you speculated about postponing de-escaping of
>> Unicode literals into the grammar. If we did it like that then

Wow, yay. I hadn't been following this thread, but I had just recently looked over my own earlier musings [1] and started thinking "no, it would be outlandish to ask the lexer to return utf-8 always ... but what about postponing the de-escaping of Unicode literals into the grammar?" and had started to think about when I might have a chance to try making a patch.

With the de-escaping postponed, I think we'd be able to move beyond the current odd situation where Unicode escapes can't describe non-ascii characters, in exactly and only the cases where you need them to.

-Chap

[1] https://www.postgresql.org/message-id/6688474e-7c28-b352-bcec-ea0ef59d7a1a%40anastigmatix.net
Chapman Flack <chap@anastigmatix.net> writes:
> On 07/24/19 03:45, John Naylor wrote:
>> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> However, my second reaction was that maybe you were on to something
>>> upthread when you speculated about postponing de-escaping of
>>> Unicode literals into the grammar. If we did it like that then

> With the de-escaping postponed, I think we'd be able to move beyond the
> current odd situation where Unicode escapes can't describe non-ascii
> characters, in exactly and only the cases where you need them to.

How so?  The grammar doesn't really have any more context information than the lexer does.  (In both cases, it would be ugly but not really invalid for the transformation to depend on the database encoding, I think.)

			regards, tom lane
John Naylor <john.naylor@2ndquadrant.com> writes:
> On Sun, Jul 21, 2019 at 3:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> So I'm feeling like maybe we should experiment to see what that
>> solution looks like, before we commit to going in this direction.
>> What do you think?

> Given the above wrinkles, I thought it was worth trying. Attached is a
> rough patch (don't mind the #include mess yet :-) ) that works like
> this:

> The lexer returns UCONST from xus and UIDENT from xui. The grammar has
> rules that are effectively:

> SCONST { do nothing}
> | UCONST { esc char is backslash }
> | UCONST UESCAPE SCONST { esc char is from $3 }

> ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce
> conflicts, I added UIDENT to the %nonassoc precedence list to match
> IDENT, and for UESCAPE I added a %left precedence declaration. Maybe
> there's a more principled way. I also added an unsigned char type to
> the %union, but it worked fine on my compiler without it.

I think it might be better to drop the separate "Uescape" production and just inline that into the calling rules, exactly per your sketch above. You could avoid duplicating the escape-checking logic by moving that into the str_udeescape support function. This would avoid the need for the "uchr" union variant, but more importantly it seems likely to be more future-proof: IME, any time you can avoid or postpone shift/reduce decisions, it's better to do so.

I didn't try, but I think this might allow dropping the %left for UESCAPE. That bothers me because I don't understand why it's needed or what precedence level it ought to have.

> litbuf_udeescape() and check_uescapechar() were moved to gram.y. The
> former had to be massaged to give error messages similar to HEAD. They're
> not quite identical, but the position info is preserved. Some of the
> functions I moved around don't seem to have any test coverage, so I
> should eventually do some work in that regard.

I don't terribly like the cross-calls you have between gram.y and scan.l in this formulation. If we have to make these functions (hexval() etc) non-static anyway, maybe we should shove them all into scansup.c?

> -Binary size is very close to v6. That is to say the grammar tables
> grew by about the same amount the scanner table shrank, so the binary
> is still about 200kB smaller than HEAD.

OK.

> -Performance is very close to v6 with the information_schema and
> pgbench-like queries with standard strings, which is to say also very
> close to HEAD. When the latter was changed to use Unicode escapes,
> however, it was about 15% slower than HEAD. That's a big regression
> and I haven't tried to pinpoint why.

I don't quite follow what you changed to produce the slower test case? But that seems to be something we'd better run to ground before deciding whether to go this way.

> -The ecpg changes here are only the bare minimum from HEAD to get it
> to compile, since I'm borrowing its additional token names (although
> they mean slightly different things). After a bit of experimentation,
> it's clear there's a bit more work needed to get it functional, and
> it's not easy to debug, so I'm putting that off until we decide
> whether this is the way forward.

On the whole I like this approach, modulo the performance question. Let's try to work that out before worrying about ecpg.

			regards, tom lane
On Mon, Jul 29, 2019 at 10:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> John Naylor <john.naylor@2ndquadrant.com> writes:
>
> > The lexer returns UCONST from xus and UIDENT from xui. The grammar has
> > rules that are effectively:
>
> > SCONST { do nothing}
> > | UCONST { esc char is backslash }
> > | UCONST UESCAPE SCONST { esc char is from $3 }
>
> > ...where UESCAPE is now an unreserved keyword. To prevent shift-reduce
> > conflicts, I added UIDENT to the %nonassoc precedence list to match
> > IDENT, and for UESCAPE I added a %left precedence declaration. Maybe
> > there's a more principled way. I also added an unsigned char type to
> > the %union, but it worked fine on my compiler without it.
>
> I think it might be better to drop the separate "Uescape" production and
> just inline that into the calling rules, exactly per your sketch above.
> You could avoid duplicating the escape-checking logic by moving that into
> the str_udeescape support function. This would avoid the need for the
> "uchr" union variant, but more importantly it seems likely to be more
> future-proof: IME, any time you can avoid or postpone shift/reduce
> decisions, it's better to do so.
>
> I didn't try, but I think this might allow dropping the %left for
> UESCAPE. That bothers me because I don't understand why it's
> needed or what precedence level it ought to have.

I tried this, and removing the %left still gives me a shift/reduce conflict, so I put some effort into narrowing down what's happening. If I remove the rules with UESCAPE individually, I find that precedence is not needed for Sconst -- only for Ident. I tried reverting all the rules to use the original "IDENT" token and changed them to "Ident" one by one, and found 6 places where doing so caused a shift-reduce conflict:

createdb_opt_name
xmltable_column_option_el
ColId
type_function_name
NonReservedWord
ColLabel

Due to the number of affected places, that didn't seem like a useful avenue to pursue, so I tried the following:

-Making UESCAPE a reserved keyword or separate token type works, but other keyword types don't work. Not acceptable, but maybe useful info.

-Giving UESCAPE an %nonassoc precedence above UIDENT works, even if UIDENT is the lowest in the list. This seems the least intrusive, so I went with that for v8. One possible downside is that UIDENT now no longer has the same precedence as IDENT. Not sure if it matters, but could we fix that contextually with "%prec IDENT"?

> > litbuf_udeescape() and check_uescapechar() were moved to gram.y. The
> > former had to be massaged to give error messages similar to HEAD. They're
> > not quite identical, but the position info is preserved. Some of the
> > functions I moved around don't seem to have any test coverage, so I
> > should eventually do some work in that regard.
>
> I don't terribly like the cross-calls you have between gram.y and scan.l
> in this formulation. If we have to make these functions (hexval() etc)
> non-static anyway, maybe we should shove them all into scansup.c?

I ended up making them static inline in scansup.h since that seemed to reduce the performance impact (results below). I cribbed some of the surrogate pair queries from the jsonpath regression tests so we have some coverage here. Diff'ing from HEAD to patch, the locations are different for a couple cases (a side effect of the different error handling style from scan.l). The patch seems to consistently point at an escape sequence, so I think it's okay to use that. HEAD, on the other hand, sometimes points at the start of the whole string:

select U&'\de04\d83d';	-- surrogates in wrong order
-psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair at or near "U&'\de04\d83d'"
+psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair
 LINE 1: select U&'\de04\d83d';
-               ^
+                 ^

select U&'\de04X';	-- orphan low surrogate
-psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair at or near "U&'\de04X'"
+psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair
 LINE 1: select U&'\de04X';
-               ^
+                 ^

> > -Performance is very close to v6 with the information_schema and
> > pgbench-like queries with standard strings, which is to say also very
> > close to HEAD. When the latter was changed to use Unicode escapes,
> > however, it was about 15% slower than HEAD. That's a big regression
> > and I haven't tried to pinpoint why.
>
> I don't quite follow what you changed to produce the slower test case?
> But that seems to be something we'd better run to ground before
> deciding whether to go this way.

So "pgbench str" below refers to driving the parser with this set of queries repeated a couple hundred times in a string:

BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 'foobarbaz' WHERE aid = 'foobarbaz';
SELECT abalance FROM pgbench_accounts WHERE aid = 'foobarbaz';
UPDATE pgbench_tellers SET tbalance = tbalance + 'foobarbaz' WHERE tid = 'foobarbaz';
UPDATE pgbench_branches SET bbalance = bbalance + 'foobarbaz' WHERE bid = 'foobarbaz';
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ('foobarbaz', 'foobarbaz', 'foobarbaz', 'foobarbaz', CURRENT_TIMESTAMP);
END;

and "pgbench uesc" is the same, but the string is

U&'d!0061t!+000061' uescape '!'

Now that I think of it, the regression in v7 was largely due to the fact that the parser has to call the lexer 3 times per string in this case, and that's going to be slower no matter what we do. I added a separate test with ordinary backslash escapes ("pgbench unicode"), rebased v6-8 onto the same commit on master, and reran the performance tests. The runs are generally +/- 1%:

                 master  v6     v7     v8
info-schema      1.49s   1.48s  1.50s  1.53s
pgbench str      1.12s   1.13s  1.15s  1.17s
pgbench unicode  1.29s   1.29s  1.40s  1.36s
pgbench uesc     1.42s   1.44s  1.64s  1.58s

Inlining hexval() and friends seems to have helped somewhat for unicode escapes, but I'd have to profile to improve that further. However, v8 has regressed from v7 enough with both simple strings and the information schema that it's a noticeable regression from HEAD. I'm guessing getting rid of the "Uescape" production is to blame, but I haven't tried reverting just that one piece. Since inlining the rules didn't seem to help with the precedence hacks, it seems like the separate production was a better way. Thoughts?

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment
On Thu, Aug 1, 2019 at 8:51 PM John Naylor <john.naylor@2ndquadrant.com> wrote:
> select U&'\de04\d83d';	-- surrogates in wrong order
> -psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair at
> or near "U&'\de04\d83d'"
> +psql:test_unicode.sql:10: ERROR:  invalid Unicode surrogate pair
> LINE 1: select U&'\de04\d83d';
> -               ^
> +                 ^
> select U&'\de04X';	-- orphan low surrogate
> -psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair at
> or near "U&'\de04X'"
> +psql:test_unicode.sql:12: ERROR:  invalid Unicode surrogate pair
> LINE 1: select U&'\de04X';
> -               ^
> +                 ^

While moving this to the September CF, I noticed this failure on Windows:

+ERROR:  Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8
+LINE 1: SELECT U&'\d83d\d83d';
+                ^

https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.50382

--
Thomas Munro
https://enterprisedb.com
... it seems this patch needs attention, but I'm not sure from whom.

The tests don't pass whenever the server encoding is not UTF8, so I suppose we should either have an alternate expected output file to account for that, or the tests should be removed. But anyway the code needs to be reviewed.

--
Álvaro Herrera  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Alvaro Herrera <alvherre@2ndquadrant.com> writes:
> ... it seems this patch needs attention, but I'm not sure from whom.
> The tests don't pass whenever the server encoding is not UTF8, so I
> suppose we should either have an alternate expected output file to
> account for that, or the tests should be removed. But anyway the code
> needs to be reviewed.

Yeah, I'm overdue to review it, but other things have taken precedence. The unportable test is not a problem at this point, since the patch isn't finished anyway. I'm not sure yet whether it'd be worth preserving that test case in the final version.

			regards, tom lane
[ My apologies for being so slow to get back to this ]

John Naylor <john.naylor@2ndquadrant.com> writes:
> Now that I think of it, the regression in v7 was largely due to the
> fact that the parser has to call the lexer 3 times per string in this
> case, and that's going to be slower no matter what we do.

Ah, of course. I'm not too fussed about the performance of queries with an explicit UESCAPE clause, as that seems like a very minority use-case. What we do want to pay attention to is not regressing for plain identifiers/strings, and to a lesser extent the U& cases without UESCAPE.

> Inlining hexval() and friends seems to have helped somewhat for
> unicode escapes, but I'd have to profile to improve that further.
> However, v8 has regressed from v7 enough with both simple strings and
> the information schema that it's a noticeable regression from HEAD.
> I'm guessing getting rid of the "Uescape" production is to blame, but
> I haven't tried reverting just that one piece. Since inlining the
> rules didn't seem to help with the precedence hacks, it seems like the
> separate production was a better way. Thoughts?

I have duplicated your performance tests here, and get more or less the same results (see below). I agree that the performance of the v8 patch isn't really where we want to be --- and it also seems rather invasive to gram.y, and hence error-prone. (If we do it like that, I bet my bottom dollar that somebody would soon commit a patch that adds a production using IDENT not Ident, and it'd take a long time to notice.)

It struck me though that there's another solution we haven't discussed, and that's to make the token lookahead filter in parser.c do the work of converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the string case. I pursued that to the extent of developing the attached incomplete patch ("v9"), which looks reasonable from a performance standpoint. I get these results with tests using the drive_parser function:

information_schema
    HEAD  3447.674 ms, 3433.498 ms, 3422.407 ms
    v6    3381.851 ms, 3442.478 ms, 3402.629 ms
    v7    3525.865 ms, 3441.038 ms, 3473.488 ms
    v8    3567.640 ms, 3488.417 ms, 3556.544 ms
    v9    3456.360 ms, 3403.635 ms, 3418.787 ms

pgbench str
    HEAD  4414.046 ms, 4376.222 ms, 4356.468 ms
    v6    4304.582 ms, 4245.534 ms, 4263.562 ms
    v7    4395.815 ms, 4398.381 ms, 4460.304 ms
    v8    4475.706 ms, 4466.665 ms, 4471.048 ms
    v9    4392.473 ms, 4316.549 ms, 4318.472 ms

pgbench unicode
    HEAD  4959.000 ms, 4921.751 ms, 4945.069 ms
    v6    4856.998 ms, 4802.996 ms, 4855.486 ms
    v7    5057.199 ms, 4948.342 ms, 4956.614 ms
    v8    5008.090 ms, 4963.641 ms, 4983.576 ms
    v9    4809.227 ms, 4767.355 ms, 4741.641 ms

pgbench uesc
    HEAD  5114.401 ms, 5235.764 ms, 5200.567 ms
    v6    5030.156 ms, 5083.398 ms, 4986.974 ms
    v7    5915.508 ms, 5953.135 ms, 5929.775 ms
    v8    5678.810 ms, 5665.239 ms, 5645.696 ms
    v9    5648.965 ms, 5601.592 ms, 5600.480 ms

(A note about what we're looking at: on my machine, after using cpupower to lock down the CPU frequency, and taskset to bind everything to one CPU socket, I can get numbers that are very repeatable, to 0.1% or so ... until I restart the postmaster, and then I get different but equally repeatable numbers. The difference can be several percent, which is a lot of noise compared to what we're looking for. I believe the explanation is that kernel ASLR has loaded the backend executable at some different addresses and so there are different cache-line-boundary effects.
While I could lock that down too by disabling ASLR, the result would be to overemphasize chance effects of a particular set of cache line boundaries. So I prefer to run all the tests over again after restarting the postmaster, a few times, and then look at the overall set of results to see what things look like. Each number quoted above is median-of-three tests within a single postmaster run.)

Anyway, my conclusion is that the attached patch is at least as fast as today's HEAD; it's not as fast as v6, but on the other hand it's an even smaller postmaster executable, so there's something to be said for that:

$ size postg*
   text    data     bss     dec     hex filename
7478138   57928  203360 7739426  761822 postgres.head
7271218   57928  203360 7532506  72efda postgres.v6
7275810   57928  203360 7537098  7301ca postgres.v7
7276978   57928  203360 7538266  73065a postgres.v8
7266274   57928  203360 7527562  72dc8a postgres.v9

I based this on your v7 not v8; not sure if there's anything you want to salvage from v8.

Generally, I'm pretty happy with this approach: it touches gram.y hardly at all, and it removes just about all of the complexity from scan.l. I'm happier about dropping the support code into parser.c than the other choices we've discussed.

There's still undone work here, though:

* I did not touch psql. Probably your patch is fine for that.

* I did not do more with ecpg than get it to compile, using the same hacks as in your v7. It still fails its regression tests, but now the reason is that what we've done in parser/parser.c needs to be transposed into the identical functionality in ecpg/preproc/parser.c. Or at least some kind of functionality there. A problem with this approach is that it presumes we can reduce a UIDENT sequence to a plain IDENT, but to do so we need assumptions about the target encoding, and I'm not sure that ecpg should make any such assumptions. Maybe ecpg should just reject all cases that produce non-ASCII identifiers? (Probably it could be made to do something smarter with more work, but it's not clear to me that it's worth the trouble.)

* I haven't convinced myself either way as to whether it'd be better to factor out the code duplicated between the UIDENT and UCONST cases in base_yylex.

If this seems like a reasonable approach to you, please fill in the missing psql and ecpg bits.

			regards, tom lane

diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index c508684..1f10340 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -601,7 +601,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
  * DOT_DOT is unused in the core SQL grammar, and so will always provoke
  * parse errors. It is needed by PL/pgSQL.
*/ -%token <str> IDENT FCONST SCONST BCONST XCONST Op +%token <str> IDENT UIDENT FCONST SCONST UCONST BCONST XCONST Op %token <ival> ICONST PARAM %token TYPECAST DOT_DOT COLON_EQUALS EQUALS_GREATER %token LESS_EQUALS GREATER_EQUALS NOT_EQUALS @@ -691,7 +691,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query); TREAT TRIGGER TRIM TRUE_P TRUNCATE TRUSTED TYPE_P TYPES_P - UNBOUNDED UNCOMMITTED UNENCRYPTED UNION UNIQUE UNKNOWN UNLISTEN UNLOGGED + UESCAPE UNBOUNDED UNCOMMITTED UNENCRYPTED UNION UNIQUE UNKNOWN UNLISTEN UNLOGGED UNTIL UPDATE USER USING VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING @@ -15374,6 +15374,7 @@ unreserved_keyword: | TRUSTED | TYPE_P | TYPES_P + | UESCAPE | UNBOUNDED | UNCOMMITTED | UNENCRYPTED diff --git a/src/backend/parser/parser.c b/src/backend/parser/parser.c index 4c0c258..e64f701 100644 --- a/src/backend/parser/parser.c +++ b/src/backend/parser/parser.c @@ -23,6 +23,12 @@ #include "parser/gramparse.h" #include "parser/parser.h" +#include "parser/scansup.h" +#include "mb/pg_wchar.h" + +static bool check_uescapechar(unsigned char escape); +static char *str_udeescape(char escape, char *str, int position, + core_yyscan_t yyscanner); /* @@ -75,6 +81,10 @@ raw_parser(const char *str) * scanner backtrack, which would cost more performance than this filter * layer does. * + * We also use this filter to convert UIDENT and UCONST sequences into + * plain IDENT and SCONST tokens. While that could be handled by additional + * productions in the main grammar, it's more efficient to do it like this. + * * The filter also provides a convenient place to translate between * the core_YYSTYPE and YYSTYPE representations (which are really the * same thing anyway, but notationally they're different). @@ -104,7 +114,7 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner) * If this token isn't one that requires lookahead, just return it. If it * does, determine the token length. (We could get that via strlen(), but * since we have such a small set of possibilities, hardwiring seems - * feasible and more efficient.) + * feasible and more efficient --- at least for the fixed-length cases.) 
*/ switch (cur_token) { @@ -117,6 +127,10 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner) case WITH: cur_token_length = 4; break; + case UIDENT: + case UCONST: + cur_token_length = strlen(yyextra->core_yy_extra.scanbuf + *llocp); + break; default: return cur_token; } @@ -190,7 +204,311 @@ base_yylex(YYSTYPE *lvalp, YYLTYPE *llocp, core_yyscan_t yyscanner) break; } break; + + case UIDENT: + /* Look ahead for UESCAPE */ + if (next_token == UESCAPE) + { + /* Yup, so get third token, which had better be SCONST */ + const char *escstr; + + /* Again save and restore *llocp */ + cur_yylloc = *llocp; + + /* Get third token */ + next_token = core_yylex(&(yyextra->lookahead_yylval), + llocp, yyscanner); + + /* If we throw error here, it will point to third token */ + if (next_token != SCONST) + scanner_yyerror("UESCAPE must be followed by a simple string literal", + yyscanner); + + escstr = yyextra->lookahead_yylval.str; + if (strlen(escstr) != 1 || !check_uescapechar(escstr[0])) + scanner_yyerror("invalid Unicode escape character", + yyscanner); + + /* Now restore *llocp; errors will point to first token */ + *llocp = cur_yylloc; + + /* Apply Unicode conversion */ + lvalp->core_yystype.str = + str_udeescape(escstr[0], + lvalp->core_yystype.str, + *llocp, + yyscanner); + + /* + * We don't need to un-revert truncation of UESCAPE. What we + * do want to do is clear have_lookahead, thereby consuming + * all three tokens. + */ + yyextra->have_lookahead = false; + } + else + { + /* No UESCAPE, so convert using default escape character */ + lvalp->core_yystype.str = + str_udeescape('\\', + lvalp->core_yystype.str, + *llocp, + yyscanner); + } + /* It's an identifier, so truncate as appropriate */ + truncate_identifier(lvalp->core_yystype.str, + strlen(lvalp->core_yystype.str), + true); + cur_token = IDENT; + break; + + case UCONST: + /* Look ahead for UESCAPE */ + if (next_token == UESCAPE) + { + /* Yup, so get third token, which had better be SCONST */ + const char *escstr; + + /* Again save and restore *llocp */ + cur_yylloc = *llocp; + + /* Get third token */ + next_token = core_yylex(&(yyextra->lookahead_yylval), + llocp, yyscanner); + + /* If we throw error here, it will point to third token */ + if (next_token != SCONST) + scanner_yyerror("UESCAPE must be followed by a simple string literal", + yyscanner); + + escstr = yyextra->lookahead_yylval.str; + if (strlen(escstr) != 1 || !check_uescapechar(escstr[0])) + scanner_yyerror("invalid Unicode escape character", + yyscanner); + + /* Now restore *llocp; errors will point to first token */ + *llocp = cur_yylloc; + + /* Apply Unicode conversion */ + lvalp->core_yystype.str = + str_udeescape(escstr[0], + lvalp->core_yystype.str, + *llocp, + yyscanner); + + /* + * We don't need to un-revert truncation of UESCAPE. What we + * do want to do is clear have_lookahead, thereby consuming + * all three tokens. 
+ */ + yyextra->have_lookahead = false; + } + else + { + /* No UESCAPE, so convert using default escape character */ + lvalp->core_yystype.str = + str_udeescape('\\', + lvalp->core_yystype.str, + *llocp, + yyscanner); + } + cur_token = SCONST; + break; } return cur_token; } + +/* convert hex digit (caller should have verified that) to value */ +static unsigned int +hexval(unsigned char c) +{ + if (c >= '0' && c <= '9') + return c - '0'; + if (c >= 'a' && c <= 'f') + return c - 'a' + 0xA; + if (c >= 'A' && c <= 'F') + return c - 'A' + 0xA; + elog(ERROR, "invalid hexadecimal digit"); + return 0; /* not reached */ +} + +/* is Unicode code point acceptable in database's encoding? */ +static void +check_unicode_value(pg_wchar c, int pos, core_yyscan_t yyscanner) +{ + /* See also addunicode() in scan.l */ + if (c == 0 || c > 0x10FFFF) + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid Unicode escape value"), + scanner_errposition(pos, yyscanner))); + + if (c > 0x7F && GetDatabaseEncoding() != PG_UTF8) + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("Unicode escape values cannot be used for code point values above 007F when the server encodingis not UTF8"), + scanner_errposition(pos, yyscanner))); +} + +/* is 'escape' acceptable as Unicode escape character (UESCAPE syntax) ? */ +static bool +check_uescapechar(unsigned char escape) +{ + if (isxdigit(escape) + || escape == '+' + || escape == '\'' + || escape == '"' + || scanner_isspace(escape)) + return false; + else + return true; +} + +/* Process Unicode escapes in "str", producing a palloc'd plain string */ +static char * +str_udeescape(char escape, char *str, int position, + core_yyscan_t yyscanner) +{ + char *new, + *in, + *out; + int str_length; + pg_wchar pair_first = 0; + + str_length = strlen(str); + + /* + * This relies on the subtle assumption that a UTF-8 expansion cannot be + * longer than its escaped representation. 
+ */ + new = palloc(str_length + 1); + + in = str; + out = new; + while (*in) + { + if (in[0] == escape) + { + if (in[1] == escape) + { + if (pair_first) + goto invalid_pair; + *out++ = escape; + in += 2; + } + else if (isxdigit((unsigned char) in[1]) && + isxdigit((unsigned char) in[2]) && + isxdigit((unsigned char) in[3]) && + isxdigit((unsigned char) in[4])) + { + pg_wchar unicode; + + unicode = (hexval(in[1]) << 12) + + (hexval(in[2]) << 8) + + (hexval(in[3]) << 4) + + hexval(in[4]); + check_unicode_value(unicode, + position + in - str + 3, /* 3 for U&" */ + yyscanner); + if (pair_first) + { + if (is_utf16_surrogate_second(unicode)) + { + unicode = surrogate_pair_to_codepoint(pair_first, unicode); + pair_first = 0; + } + else + goto invalid_pair; + } + else if (is_utf16_surrogate_second(unicode)) + goto invalid_pair; + + if (is_utf16_surrogate_first(unicode)) + pair_first = unicode; + else + { + unicode_to_utf8(unicode, (unsigned char *) out); + out += pg_mblen(out); + } + in += 5; + } + else if (in[1] == '+' && + isxdigit((unsigned char) in[2]) && + isxdigit((unsigned char) in[3]) && + isxdigit((unsigned char) in[4]) && + isxdigit((unsigned char) in[5]) && + isxdigit((unsigned char) in[6]) && + isxdigit((unsigned char) in[7])) + { + pg_wchar unicode; + + unicode = (hexval(in[2]) << 20) + + (hexval(in[3]) << 16) + + (hexval(in[4]) << 12) + + (hexval(in[5]) << 8) + + (hexval(in[6]) << 4) + + hexval(in[7]); + check_unicode_value(unicode, + position + in - str + 3, /* 3 for U&" */ + yyscanner); + if (pair_first) + { + if (is_utf16_surrogate_second(unicode)) + { + unicode = surrogate_pair_to_codepoint(pair_first, unicode); + pair_first = 0; + } + else + goto invalid_pair; + } + else if (is_utf16_surrogate_second(unicode)) + goto invalid_pair; + + if (is_utf16_surrogate_first(unicode)) + pair_first = unicode; + else + { + unicode_to_utf8(unicode, (unsigned char *) out); + out += pg_mblen(out); + } + in += 8; + } + else + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid Unicode escape value"), + scanner_errposition(position + in - str + 3, /* 3 for U&" */ + yyscanner))); + } + else + { + if (pair_first) + goto invalid_pair; + + *out++ = *in++; + } + } + + /* unfinished surrogate pair? */ + if (pair_first) + goto invalid_pair; + + *out = '\0'; + + /* + * We could skip pg_verifymbstr if we didn't process any non-7-bit-ASCII + * codes; but it's probably not worth the trouble, since this isn't likely + * to be a performance-critical path. 
+ */ + pg_verifymbstr(new, out - new, false); + return new; + +invalid_pair: + ereport(ERROR, + (errcode(ERRCODE_SYNTAX_ERROR), + errmsg("invalid Unicode surrogate pair"), + scanner_errposition(position + in - str + 3, /* 3 for U&" */ + yyscanner))); + return NULL; /* keep compiler quiet */ +} diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l index e1cae85..a96af2c 100644 --- a/src/backend/parser/scan.l +++ b/src/backend/parser/scan.l @@ -110,14 +110,9 @@ const uint16 ScanKeywordTokens[] = { static void addlit(char *ytext, int yleng, core_yyscan_t yyscanner); static void addlitchar(unsigned char ychar, core_yyscan_t yyscanner); static char *litbufdup(core_yyscan_t yyscanner); -static char *litbuf_udeescape(unsigned char escape, core_yyscan_t yyscanner); static unsigned char unescape_single_char(unsigned char c, core_yyscan_t yyscanner); static int process_integer_literal(const char *token, YYSTYPE *lval); -static bool is_utf16_surrogate_first(pg_wchar c); -static bool is_utf16_surrogate_second(pg_wchar c); -static pg_wchar surrogate_pair_to_codepoint(pg_wchar first, pg_wchar second); static void addunicode(pg_wchar c, yyscan_t yyscanner); -static bool check_uescapechar(unsigned char escape); #define yyerror(msg) scanner_yyerror(msg, yyscanner) @@ -168,12 +163,11 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner); * <xd> delimited identifiers (double-quoted identifiers) * <xh> hexadecimal numeric string * <xq> standard quoted strings + * <xqs> quote stop (detect continued strings) * <xe> extended quoted strings (support backslash escape sequences) * <xdolq> $foo$ quoted strings * <xui> quoted identifier with Unicode escapes - * <xuiend> end of a quoted identifier with Unicode escapes, UESCAPE can follow * <xus> quoted string with Unicode escapes - * <xusend> end of a quoted string with Unicode escapes, UESCAPE can follow * <xeu> Unicode surrogate pair in extended quoted string * * Remember to add an <<EOF>> case whenever you add a new exclusive state! @@ -185,12 +179,11 @@ extern void core_yyset_column(int column_no, yyscan_t yyscanner); %x xd %x xh %x xq +%x xqs %x xe %x xdolq %x xui -%x xuiend %x xus -%x xusend %x xeu /* @@ -231,19 +224,18 @@ special_whitespace ({space}+|{comment}{newline}) horiz_whitespace ({horiz_space}|{comment}) whitespace_with_newline ({horiz_whitespace}*{newline}{special_whitespace}*) +quote ' +/* If we see {quote} then {quotecontinue}, the quoted string continues */ +quotecontinue {whitespace_with_newline}{quote} + /* - * To ensure that {quotecontinue} can be scanned without having to back up - * if the full pattern isn't matched, we include trailing whitespace in - * {quotestop}. This matches all cases where {quotecontinue} fails to match, - * except for {quote} followed by whitespace and just one "-" (not two, - * which would start a {comment}). To cover that we have {quotefail}. - * The actions for {quotestop} and {quotefail} must throw back characters - * beyond the quote proper. + * {quotecontinuefail} is needed to avoid lexer backup when we fail to match + * {quotecontinue}. It might seem that this could just be {whitespace}*, + * but if there's a dash after {whitespace_with_newline}, it must be consumed + * to see if there's another dash --- which would start a {comment} and thus + * allow continuation of the {quotecontinue} token. */ -quote ' -quotestop {quote}{whitespace}* -quotecontinue {quote}{whitespace_with_newline}{quote} -quotefail {quote}{whitespace}*"-" +quotecontinuefail {whitespace}*"-"? 
/* Bit string * It is tempting to scan the string for only those characters @@ -304,21 +296,12 @@ xdstop {dquote} xddouble {dquote}{dquote} xdinside [^"]+ -/* Unicode escapes */ -uescape [uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']{quote} -/* error rule to avoid backup */ -uescapefail [uU][eE][sS][cC][aA][pP][eE]{whitespace}*"-"|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}[^']|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*{quote}|[uU][eE][sS][cC][aA][pP][eE]{whitespace}*|[uU][eE][sS][cC][aA][pP]|[uU][eE][sS][cC][aA]|[uU][eE][sS][cC]|[uU][eE][sS]|[uU][eE]|[uU] - /* Quoted identifier with Unicode escapes */ xuistart [uU]&{dquote} /* Quoted string with Unicode escapes */ xusstart [uU]&{quote} -/* Optional UESCAPE after a quoted string or identifier with Unicode escapes. */ -xustop1 {uescapefail}? -xustop2 {uescape} - /* error rule to avoid backup */ xufailed [uU]& @@ -476,21 +459,10 @@ other . startlit(); addlitchar('b', yyscanner); } -<xb>{quotestop} | -<xb>{quotefail} { - yyless(1); - BEGIN(INITIAL); - yylval->str = litbufdup(yyscanner); - return BCONST; - } <xh>{xhinside} | <xb>{xbinside} { addlit(yytext, yyleng, yyscanner); } -<xh>{quotecontinue} | -<xb>{quotecontinue} { - /* ignore */ - } <xb><<EOF>> { yyerror("unterminated bit string literal"); } {xhstart} { @@ -505,13 +477,6 @@ other . startlit(); addlitchar('x', yyscanner); } -<xh>{quotestop} | -<xh>{quotefail} { - yyless(1); - BEGIN(INITIAL); - yylval->str = litbufdup(yyscanner); - return XCONST; - } <xh><<EOF>> { yyerror("unterminated hexadecimal string literal"); } {xnstart} { @@ -568,53 +533,67 @@ other . BEGIN(xus); startlit(); } -<xq,xe>{quotestop} | -<xq,xe>{quotefail} { - yyless(1); - BEGIN(INITIAL); + +<xb,xh,xq,xe,xus>{quote} { /* - * check that the data remains valid if it might have been - * made invalid by unescaping any chars. + * When we are scanning a quoted string and see an end + * quote, we must look ahead for a possible continuation. + * If we don't see one, we know the end quote was in fact + * the end of the string. To reduce the lexer table size, + * we use a single "xqs" state to do the lookahead for all + * types of strings. */ - if (yyextra->saw_non_ascii) - pg_verifymbstr(yyextra->literalbuf, - yyextra->literallen, - false); - yylval->str = litbufdup(yyscanner); - return SCONST; - } -<xus>{quotestop} | -<xus>{quotefail} { - /* throw back all but the quote */ - yyless(1); - /* xusend state looks for possible UESCAPE */ - BEGIN(xusend); + yyextra->state_before_str_stop = YYSTATE; + BEGIN(xqs); } -<xusend>{whitespace} { - /* stay in xusend state over whitespace */ +<xqs>{quotecontinue} { + /* + * Found a quote continuation, so return to the in-quote + * state and continue scanning the literal. + */ + BEGIN(yyextra->state_before_str_stop); } -<xusend><<EOF>> | -<xusend>{other} | -<xusend>{xustop1} { - /* no UESCAPE after the quote, throw back everything */ +<xqs>{quotecontinuefail} | +<xqs><<EOF>> | +<xqs>{other} { + /* + * Failed to see a quote continuation. Throw back + * everything after the end quote, and handle the string + * according to the state we were in previously. 
+ */ yyless(0); BEGIN(INITIAL); - yylval->str = litbuf_udeescape('\\', yyscanner); - return SCONST; - } -<xusend>{xustop2} { - /* found UESCAPE after the end quote */ - BEGIN(INITIAL); - if (!check_uescapechar(yytext[yyleng - 2])) + + switch (yyextra->state_before_str_stop) { - SET_YYLLOC(); - ADVANCE_YYLLOC(yyleng - 2); - yyerror("invalid Unicode escape character"); + case xb: + yylval->str = litbufdup(yyscanner); + return BCONST; + case xh: + yylval->str = litbufdup(yyscanner); + return XCONST; + case xq: + /* fallthrough */ + case xe: + /* + * Check that the data remains valid if it + * might have been made invalid by unescaping + * any chars. + */ + if (yyextra->saw_non_ascii) + pg_verifymbstr(yyextra->literalbuf, + yyextra->literallen, + false); + yylval->str = litbufdup(yyscanner); + return SCONST; + case xus: + yylval->str = litbufdup(yyscanner); + return UCONST; + default: + yyerror("unhandled previous state in xqs"); } - yylval->str = litbuf_udeescape(yytext[yyleng - 2], - yyscanner); - return SCONST; } + <xq,xe,xus>{xqdouble} { addlitchar('\'', yyscanner); } @@ -693,9 +672,6 @@ other . if (c == '\0' || IS_HIGHBIT_SET(c)) yyextra->saw_non_ascii = true; } -<xq,xe,xus>{quotecontinue} { - /* ignore */ - } <xe>. { /* This is only needed for \ just before EOF */ addlitchar(yytext[0], yyscanner); @@ -770,53 +746,14 @@ other . return IDENT; } <xui>{dquote} { - yyless(1); - /* xuiend state looks for possible UESCAPE */ - BEGIN(xuiend); - } -<xuiend>{whitespace} { - /* stay in xuiend state over whitespace */ - } -<xuiend><<EOF>> | -<xuiend>{other} | -<xuiend>{xustop1} { - /* no UESCAPE after the quote, throw back everything */ - char *ident; - int identlen; - - yyless(0); - - BEGIN(INITIAL); if (yyextra->literallen == 0) yyerror("zero-length delimited identifier"); - ident = litbuf_udeescape('\\', yyscanner); - identlen = strlen(ident); - if (identlen >= NAMEDATALEN) - truncate_identifier(ident, identlen, true); - yylval->str = ident; - return IDENT; - } -<xuiend>{xustop2} { - /* found UESCAPE after the end quote */ - char *ident; - int identlen; BEGIN(INITIAL); - if (yyextra->literallen == 0) - yyerror("zero-length delimited identifier"); - if (!check_uescapechar(yytext[yyleng - 2])) - { - SET_YYLLOC(); - ADVANCE_YYLLOC(yyleng - 2); - yyerror("invalid Unicode escape character"); - } - ident = litbuf_udeescape(yytext[yyleng - 2], yyscanner); - identlen = strlen(ident); - if (identlen >= NAMEDATALEN) - truncate_identifier(ident, identlen, true); - yylval->str = ident; - return IDENT; + yylval->str = litbufdup(yyscanner); + return UIDENT; } + <xd,xui>{xddouble} { addlitchar('"', yyscanner); } @@ -1288,55 +1225,12 @@ process_integer_literal(const char *token, YYSTYPE *lval) return ICONST; } -static unsigned int -hexval(unsigned char c) -{ - if (c >= '0' && c <= '9') - return c - '0'; - if (c >= 'a' && c <= 'f') - return c - 'a' + 0xA; - if (c >= 'A' && c <= 'F') - return c - 'A' + 0xA; - elog(ERROR, "invalid hexadecimal digit"); - return 0; /* not reached */ -} - -static void -check_unicode_value(pg_wchar c, char *loc, core_yyscan_t yyscanner) -{ - if (GetDatabaseEncoding() == PG_UTF8) - return; - - if (c > 0x7F) - { - ADVANCE_YYLLOC(loc - yyextra->literalbuf + 3); /* 3 for U&" */ - yyerror("Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8"); - } -} - -static bool -is_utf16_surrogate_first(pg_wchar c) -{ - return (c >= 0xD800 && c <= 0xDBFF); -} - -static bool -is_utf16_surrogate_second(pg_wchar c) -{ - return (c >= 0xDC00 && c <= 
0xDFFF); -} - -static pg_wchar -surrogate_pair_to_codepoint(pg_wchar first, pg_wchar second) -{ - return ((first & 0x3FF) << 10) + 0x10000 + (second & 0x3FF); -} - static void addunicode(pg_wchar c, core_yyscan_t yyscanner) { char buf[8]; + /* See also check_unicode_value() in parser.c */ if (c == 0 || c > 0x10FFFF) yyerror("invalid Unicode escape value"); if (c > 0x7F) @@ -1349,172 +1243,6 @@ addunicode(pg_wchar c, core_yyscan_t yyscanner) addlit(buf, pg_mblen(buf), yyscanner); } -/* is 'escape' acceptable as Unicode escape character (UESCAPE syntax) ? */ -static bool -check_uescapechar(unsigned char escape) -{ - if (isxdigit(escape) - || escape == '+' - || escape == '\'' - || escape == '"' - || scanner_isspace(escape)) - { - return false; - } - else - return true; -} - -/* like litbufdup, but handle unicode escapes */ -static char * -litbuf_udeescape(unsigned char escape, core_yyscan_t yyscanner) -{ - char *new; - char *litbuf, - *in, - *out; - pg_wchar pair_first = 0; - - /* Make literalbuf null-terminated to simplify the scanning loop */ - litbuf = yyextra->literalbuf; - litbuf[yyextra->literallen] = '\0'; - - /* - * This relies on the subtle assumption that a UTF-8 expansion cannot be - * longer than its escaped representation. - */ - new = palloc(yyextra->literallen + 1); - - in = litbuf; - out = new; - while (*in) - { - if (in[0] == escape) - { - if (in[1] == escape) - { - if (pair_first) - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode surrogate pair"); - } - *out++ = escape; - in += 2; - } - else if (isxdigit((unsigned char) in[1]) && - isxdigit((unsigned char) in[2]) && - isxdigit((unsigned char) in[3]) && - isxdigit((unsigned char) in[4])) - { - pg_wchar unicode; - - unicode = (hexval(in[1]) << 12) + - (hexval(in[2]) << 8) + - (hexval(in[3]) << 4) + - hexval(in[4]); - check_unicode_value(unicode, in, yyscanner); - if (pair_first) - { - if (is_utf16_surrogate_second(unicode)) - { - unicode = surrogate_pair_to_codepoint(pair_first, unicode); - pair_first = 0; - } - else - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode surrogate pair"); - } - } - else if (is_utf16_surrogate_second(unicode)) - yyerror("invalid Unicode surrogate pair"); - - if (is_utf16_surrogate_first(unicode)) - pair_first = unicode; - else - { - unicode_to_utf8(unicode, (unsigned char *) out); - out += pg_mblen(out); - } - in += 5; - } - else if (in[1] == '+' && - isxdigit((unsigned char) in[2]) && - isxdigit((unsigned char) in[3]) && - isxdigit((unsigned char) in[4]) && - isxdigit((unsigned char) in[5]) && - isxdigit((unsigned char) in[6]) && - isxdigit((unsigned char) in[7])) - { - pg_wchar unicode; - - unicode = (hexval(in[2]) << 20) + - (hexval(in[3]) << 16) + - (hexval(in[4]) << 12) + - (hexval(in[5]) << 8) + - (hexval(in[6]) << 4) + - hexval(in[7]); - check_unicode_value(unicode, in, yyscanner); - if (pair_first) - { - if (is_utf16_surrogate_second(unicode)) - { - unicode = surrogate_pair_to_codepoint(pair_first, unicode); - pair_first = 0; - } - else - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode surrogate pair"); - } - } - else if (is_utf16_surrogate_second(unicode)) - yyerror("invalid Unicode surrogate pair"); - - if (is_utf16_surrogate_first(unicode)) - pair_first = unicode; - else - { - unicode_to_utf8(unicode, (unsigned char *) out); - out += pg_mblen(out); - } - in += 8; - } - else - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode escape value"); - } - } - else - { - if 
(pair_first) - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode surrogate pair"); - } - *out++ = *in++; - } - } - - /* unfinished surrogate pair? */ - if (pair_first) - { - ADVANCE_YYLLOC(in - litbuf + 3); /* 3 for U&" */ - yyerror("invalid Unicode surrogate pair"); - } - - *out = '\0'; - - /* - * We could skip pg_verifymbstr if we didn't process any non-7-bit-ASCII - * codes; but it's probably not worth the trouble, since this isn't likely - * to be a performance-critical path. - */ - pg_verifymbstr(new, out - new, false); - return new; -} - static unsigned char unescape_single_char(unsigned char c, core_yyscan_t yyscanner) { diff --git a/src/include/mb/pg_wchar.h b/src/include/mb/pg_wchar.h index 3e3e6c4..0c4cb9c 100644 --- a/src/include/mb/pg_wchar.h +++ b/src/include/mb/pg_wchar.h @@ -509,6 +509,27 @@ typedef uint32 (*utf_local_conversion_func) (uint32 code); /* + * Some handy functions for Unicode-specific tests. + */ +static inline bool +is_utf16_surrogate_first(pg_wchar c) +{ + return (c >= 0xD800 && c <= 0xDBFF); +} + +static inline bool +is_utf16_surrogate_second(pg_wchar c) +{ + return (c >= 0xDC00 && c <= 0xDFFF); +} + +static inline pg_wchar +surrogate_pair_to_codepoint(pg_wchar first, pg_wchar second) +{ + return ((first & 0x3FF) << 10) + 0x10000 + (second & 0x3FF); +} + +/* * These functions are considered part of libpq's exported API and * are also declared in libpq-fe.h. */ diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h index 00ace84..5893d31 100644 --- a/src/include/parser/kwlist.h +++ b/src/include/parser/kwlist.h @@ -416,6 +416,7 @@ PG_KEYWORD("truncate", TRUNCATE, UNRESERVED_KEYWORD) PG_KEYWORD("trusted", TRUSTED, UNRESERVED_KEYWORD) PG_KEYWORD("type", TYPE_P, UNRESERVED_KEYWORD) PG_KEYWORD("types", TYPES_P, UNRESERVED_KEYWORD) +PG_KEYWORD("uescape", UESCAPE, UNRESERVED_KEYWORD) PG_KEYWORD("unbounded", UNBOUNDED, UNRESERVED_KEYWORD) PG_KEYWORD("uncommitted", UNCOMMITTED, UNRESERVED_KEYWORD) PG_KEYWORD("unencrypted", UNENCRYPTED, UNRESERVED_KEYWORD) diff --git a/src/include/parser/scanner.h b/src/include/parser/scanner.h index 731a2bd..571d5e2 100644 --- a/src/include/parser/scanner.h +++ b/src/include/parser/scanner.h @@ -48,7 +48,7 @@ typedef union core_YYSTYPE * However, those are not defined in this file, because bison insists on * defining them for itself. The token codes used by the core scanner are * the ASCII characters plus these: - * %token <str> IDENT FCONST SCONST BCONST XCONST Op + * %token <str> IDENT UIDENT FCONST SCONST UCONST BCONST XCONST Op * %token <ival> ICONST PARAM * %token TYPECAST DOT_DOT COLON_EQUALS EQUALS_GREATER * %token LESS_EQUALS GREATER_EQUALS NOT_EQUALS @@ -99,6 +99,7 @@ typedef struct core_yy_extra_type int literallen; /* actual current string length */ int literalalloc; /* current allocated buffer size */ + int state_before_str_stop; /* start cond. 
before end quote */ int xcdepth; /* depth of nesting in slash-star comments */ char *dolqstart; /* current $foo$ quote start string */ diff --git a/src/interfaces/ecpg/preproc/ecpg.tokens b/src/interfaces/ecpg/preproc/ecpg.tokens index 1d613af..749a914 100644 --- a/src/interfaces/ecpg/preproc/ecpg.tokens +++ b/src/interfaces/ecpg/preproc/ecpg.tokens @@ -24,4 +24,4 @@ S_TYPEDEF %token CSTRING CVARIABLE CPP_LINE IP -%token DOLCONST ECONST NCONST UCONST UIDENT +%token DOLCONST ECONST NCONST diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer index f58b41e..efad0c0 100644 --- a/src/interfaces/ecpg/preproc/ecpg.trailer +++ b/src/interfaces/ecpg/preproc/ecpg.trailer @@ -1750,7 +1750,6 @@ ecpg_sconst: $$[strlen($1)+3]='\0'; free($1); } - | UCONST { $$ = $1; } | DOLCONST { $$ = $1; } ; @@ -1758,7 +1757,6 @@ ecpg_xconst: XCONST { $$ = make_name(); } ; ecpg_ident: IDENT { $$ = make_name(); } | CSTRING { $$ = make3_str(mm_strdup("\""), $1, mm_strdup("\"")); } - | UIDENT { $$ = $1; } ; quoted_ident_stringvar: name diff --git a/src/interfaces/ecpg/preproc/parse.pl b/src/interfaces/ecpg/preproc/parse.pl index 3619706..dc40b29 100644 --- a/src/interfaces/ecpg/preproc/parse.pl +++ b/src/interfaces/ecpg/preproc/parse.pl @@ -218,8 +218,8 @@ sub main if ($a eq 'IDENT' && $prior eq '%nonassoc') { - # add two more tokens to the list - $str = $str . "\n%nonassoc CSTRING\n%nonassoc UIDENT"; + # add one more tokens to the list + $str = $str . "\n%nonassoc CSTRING"; } $prior = $a; } diff --git a/src/pl/plpgsql/src/pl_gram.y b/src/pl/plpgsql/src/pl_gram.y index 454071a..3cdf928 100644 --- a/src/pl/plpgsql/src/pl_gram.y +++ b/src/pl/plpgsql/src/pl_gram.y @@ -232,7 +232,7 @@ static void check_raise_parameters(PLpgSQL_stmt_raise *stmt); * Some of these are not directly referenced in this file, but they must be * here anyway. */ -%token <str> IDENT FCONST SCONST BCONST XCONST Op +%token <str> IDENT UIDENT FCONST SCONST UCONST BCONST XCONST Op %token <ival> ICONST PARAM %token TYPECAST DOT_DOT COLON_EQUALS EQUALS_GREATER %token LESS_EQUALS GREATER_EQUALS NOT_EQUALS diff --git a/src/test/regress/expected/strings.out b/src/test/regress/expected/strings.out index 6d96843..0716e4f 100644 --- a/src/test/regress/expected/strings.out +++ b/src/test/regress/expected/strings.out @@ -48,17 +48,17 @@ SELECT 'tricky' AS U&"\" UESCAPE '!'; (1 row) SELECT U&'wrong: \061'; -ERROR: invalid Unicode escape value at or near "\061'" +ERROR: invalid Unicode escape value LINE 1: SELECT U&'wrong: \061'; ^ SELECT U&'wrong: \+0061'; -ERROR: invalid Unicode escape value at or near "\+0061'" +ERROR: invalid Unicode escape value LINE 1: SELECT U&'wrong: \+0061'; ^ SELECT U&'wrong: +0061' UESCAPE '+'; -ERROR: invalid Unicode escape character at or near "+'" +ERROR: invalid Unicode escape character at or near "'+'" LINE 1: SELECT U&'wrong: +0061' UESCAPE '+'; - ^ + ^ SET standard_conforming_strings TO off; SELECT U&'d\0061t\+000061' AS U&"d\0061t\+000061"; ERROR: unsafe use of string constant with Unicode escapes
On Tue, Nov 26, 2019 at 5:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> [ My apologies for being so slow to get back to this ]

No worries -- it's a nice-to-have, not something our users are excited about.

> It struck me though that there's another solution we haven't discussed,
> and that's to make the token lookahead filter in parser.c do the work
> of converting UIDENT [UESCAPE SCONST] to IDENT, and similarly for the
> string case.

I recently tried again to get gram.y to handle it without precedence hacks (or at least hacks with less mystery) and came to the conclusion that maybe it just doesn't belong in the grammar after all. I hadn't thought of any alternatives, so thanks for working on that!

It seems something is not quite right in v9 with the error position reporting:

SELECT U&'wrong: +0061' UESCAPE '+';
ERROR: invalid Unicode escape character at or near "'+'"
LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';
- ^
+ ^

The caret is not pointing to the third token, or the second for that matter. What worked for me was un-truncating the current token before calling yylex again. To see if I'm on the right track, I've included this in the attached, which applies on top of your v9.

> Generally, I'm pretty happy with this approach: it touches gram.y
> hardly at all, and it removes just about all of the complexity from
> scan.l. I'm happier about dropping the support code into parser.c
> than the other choices we've discussed.

Seems like the best of both worlds. If we ever wanted to ditch the whole token filter and use Bison's %glr mode, we'd have extra work to do, but there doesn't seem to be a rush to do so anyway.

> There's still undone work here, though:
>
> * I did not touch psql. Probably your patch is fine for that.
>
> * I did not do more with ecpg than get it to compile, using the
> same hacks as in your v7. It still fails its regression tests,
> but now the reason is that what we've done in parser/parser.c
> needs to be transposed into the identical functionality in
> ecpg/preproc/parser.c. Or at least some kind of functionality
> there. A problem with this approach is that it presumes we can
> reduce a UIDENT sequence to a plain IDENT, but to do so we need
> assumptions about the target encoding, and I'm not sure that
> ecpg should make any such assumptions. Maybe ecpg should just
> reject all cases that produce non-ASCII identifiers? (Probably
> it could be made to do something smarter with more work, but
> it's not clear to me that it's worth the trouble.)

Hmm, I thought we only allowed Unicode escapes in the first place if the server encoding was UTF-8. Or did you mean something else?

> If this seems like a reasonable approach to you, please fill in
> the missing psql and ecpg bits.

Will do.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
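To spell out the "un-truncating" bit: during lookahead, base_yylex saves the byte just past the current token's text in the scan buffer and overwrites it with '\0', so the token can be used as a C string; before asking the scanner for a further token, that byte has to be put back, or the next token's text is cut short. A self-contained toy of the pattern (the variable names mirror the lookahead fields in the real scanner extra-data struct, but this is an illustration, not the backend code):

#include <stdio.h>

int
main(void)
{
	char		scanbuf[] = "U&'...' UESCAPE '+'";
	char	   *lookahead_end = scanbuf + 7;	/* just past the first token */
	char		lookahead_hold_char;

	/* zero-terminate so the first token's text is usable as a C string */
	lookahead_hold_char = *lookahead_end;
	*lookahead_end = '\0';
	printf("token text: %s\n", scanbuf);

	/* un-truncate before scanning onward, as in the fix */
	*lookahead_end = lookahead_hold_char;
	printf("restored:   %s\n", scanbuf);

	return 0;
}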
John Naylor <john.naylor@2ndquadrant.com> writes:
> It seems something is not quite right in v9 with the error position reporting:
> SELECT U&'wrong: +0061' UESCAPE '+';
> ERROR: invalid Unicode escape character at or near "'+'"
> LINE 1: SELECT U&'wrong: +0061' UESCAPE '+';
> - ^
> + ^
> The caret is not pointing to the third token, or the second for that
> matter.

Interesting. For me it points at the third token with or without your fix ... some flex version discrepancy maybe? Anyway, I have no objection to your fix; it's probably cleaner than what I had.

>> * I did not do more with ecpg than get it to compile, using the
>> same hacks as in your v7. It still fails its regression tests,
>> but now the reason is that what we've done in parser/parser.c
>> needs to be transposed into the identical functionality in
>> ecpg/preproc/parser.c. Or at least some kind of functionality
>> there. A problem with this approach is that it presumes we can
>> reduce a UIDENT sequence to a plain IDENT, but to do so we need
>> assumptions about the target encoding, and I'm not sure that
>> ecpg should make any such assumptions. Maybe ecpg should just
>> reject all cases that produce non-ASCII identifiers? (Probably
>> it could be made to do something smarter with more work, but
>> it's not clear to me that it's worth the trouble.)

> Hmm, I thought we only allowed Unicode escapes in the first place if
> the server encoding was UTF-8. Or did you mean something else?

Well, yeah, but the problem here is that ecpg would have to assume that the client encoding that its output program will be executed with is UTF-8. That seems pretty action-at-a-distance-y.

I haven't looked closely at what ecpg does with the processed identifiers. If it just spits them out as-is, a possible solution is to not do anything about de-escaping, but pass the sequence U&"..." (plus UESCAPE ... if any), just like that, on to the grammar as the value of the IDENT token.

BTW, in the back of my mind here is Chapman's point that it'd be a large step forward in usability if we allowed Unicode escapes when the backend encoding is *not* UTF-8. I think I see how to get there once this patch is done, so I definitely would not like to introduce some comparable restriction in ecpg.

			regards, tom lane
On Tue, Nov 26, 2019 at 10:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I haven't looked closely at what ecpg does with the processed
> identifiers. If it just spits them out as-is, a possible solution
> is to not do anything about de-escaping, but pass the sequence
> U&"..." (plus UESCAPE ... if any), just like that, on to the grammar
> as the value of the IDENT token.

It does pass them along as-is, so I did it that way. In the attached v10, I've synced both ECPG and psql.

> * I haven't convinced myself either way as to whether it'd be
> better to factor out the code duplicated between the UIDENT
> and UCONST cases in base_yylex.

I chose to factor it out, since we have 2 versions of parser.c, and this way was much easier to work with.

Some notes:

I arranged for the ECPG grammar to only see SCONST and IDENT. With UCONST and UIDENT out of the way, it was a small additional step to put all string reconstruction into the lexer, which has the advantage of allowing removal of the other special-case ECPG string tokens as well. The fewer special cases involved in pasting the grammar together, the better. In doing so, I've probably introduced memory leaks, but I wanted to get your opinion on the overall approach before investigating.

In ECPG's parser.c, I simply copied check_uescapechar() and ecpg_isspace(), but we could find a common place if desired. During development, I found that this file replicates the location-tracking logic in the backend, but doesn't seem to make use of it. I also would have had to replicate the backend's datatype for YYLTYPE. Fixing that might be worthwhile some day, but to get this working, I just ripped out the extra location tracking.

I no longer use state variables to track scanner state, and in fact I removed the existing "state_before" variable in ECPG. Instead, I used the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state(). These have been a feature for a long time, it seems, so I think we're okay as far as portability. I think it's cleaner this way, and possibly faster. I also used this to reunite the xcc and xcsql states. This whole part could be split out into a separate refactoring patch to be applied first, if desired.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I wrote:
> I no longer use state variables to track scanner state, and in fact I
> removed the existing "state_before" variable in ECPG. Instead, I used
> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().
> These have been a feature for a long time, it seems, so I think we're
> okay as far as portability. I think it's cleaner this way, and
> possibly faster.

I thought I should get some actual numbers to test, and the results are encouraging:

          master   v10
info      1.56s    1.51s
str       1.18s    1.14s
unicode   1.33s    1.34s
uescape   1.44s    1.58s

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
John Naylor <john.naylor@2ndquadrant.com> writes:
>> I no longer use state variables to track scanner state, and in fact I
>> removed the existing "state_before" variable in ECPG. Instead, I used
>> the Flex builtins yy_push_state(), yy_pop_state(), and yy_top_state().
>> These have been a feature for a long time, it seems, so I think we're
>> okay as far as portability. I think it's cleaner this way, and
>> possibly faster.

Hmm ... after a bit of research I agree that these functions are not a portability hazard. They are present at least as far back as flex 2.5.33, which is as old as we've got in the buildfarm.

However, I'm less excited about them from a performance standpoint. The BEGIN() macro expands to (ordinarily)

	yyg->yy_start = integer-constant

which is surely pretty cheap. However, yy_push_state is substantially more expensive than that, not least because the first invocation in a parse cycle will involve a malloc() or palloc(). Likewise yy_pop_state is multiple times more expensive than plain BEGIN().

Now, I agree that this is negligible for ECPG's usage, so if pushing/popping state is helpful there, let's go for it. But I am not convinced it's negligible for the backend, and I also don't see that we actually need to track any nested scanner states there. So I'd rather stick to using BEGIN in the backend. Not sure about psql.

BTW, while looking through the latest patch it struck me that "UCONST" is an underspecified and potentially confusing name. It doesn't indicate what kind of constant we're talking about; for instance, a C programmer could be forgiven for thinking it means something like "123U". What do you think of "USCONST", following UIDENT's lead of prefixing U onto whatever the underlying token type is?

			regards, tom lane
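To make the cost difference concrete, here is a toy model of roughly what flex generates (paraphrased; the real output varies across flex versions, so treat this as an illustration rather than flex's literal code). BEGIN() is a single store into the scanner struct, while yy_push_state() has to maintain a heap-allocated stack, including a malloc/realloc the first time it is used or whenever the stack grows:

#include <stdlib.h>

#define YY_START_STACK_INCR 25

typedef struct
{
	int			yy_start;		/* current start condition */
	int		   *yy_start_stack;
	int			yy_start_stack_ptr;
	int			yy_start_stack_depth;
} scanner_guts;

/* what BEGIN(s) amounts to: one assignment */
static void
scanner_begin(scanner_guts *yyg, int s)
{
	yyg->yy_start = 1 + 2 * s;
}

/* roughly what yy_push_state() has to do */
static void
scanner_push_state(scanner_guts *yyg, int new_state)
{
	if (yyg->yy_start_stack_ptr >= yyg->yy_start_stack_depth)
	{
		/* first call allocates; later calls may reallocate */
		yyg->yy_start_stack_depth += YY_START_STACK_INCR;
		yyg->yy_start_stack = realloc(yyg->yy_start_stack,
									  yyg->yy_start_stack_depth * sizeof(int));
		if (yyg->yy_start_stack == NULL)
			abort();
	}
	yyg->yy_start_stack[yyg->yy_start_stack_ptr++] = yyg->yy_start;
	scanner_begin(yyg, new_state);
}

int
main(void)
{
	scanner_guts g = {0, NULL, 0, 0};

	scanner_begin(&g, 1);		/* cost: one store */
	scanner_push_state(&g, 2);	/* cost: allocation plus two stores */
	free(g.yy_start_stack);
	return 0;
}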
On Mon, Jan 13, 2020 at 7:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Hmm ... after a bit of research I agree that these functions are not
> a portability hazard. They are present at least as far back as flex
> 2.5.33 which is as old as we've got in the buildfarm.
>
> However, I'm less excited about them from a performance standpoint.
> The BEGIN() macro expands to (ordinarily)
>
> yyg->yy_start = integer-constant
>
> which is surely pretty cheap. However, yy_push_state is substantially
> more expensive than that, not least because the first invocation in
> a parse cycle will involve a malloc() or palloc(). Likewise yy_pop_state
> is multiple times more expensive than plain BEGIN().
>
> Now, I agree that this is negligible for ECPG's usage, so if
> pushing/popping state is helpful there, let's go for it. But I am
> not convinced it's negligible for the backend, and I also don't
> see that we actually need to track any nested scanner states there.
> So I'd rather stick to using BEGIN in the backend. Not sure about
> psql.

Okay, removed in v11. The advantage of the stack functions in ECPG was to avoid having the two variables state_before_str_start and state_before_str_stop. But if we don't use stack functions in the backend, then consistency wins in my mind. Plus, it was easier for me to revert the stack functions for all 3 scanners.

> BTW, while looking through the latest patch it struck me that
> "UCONST" is an underspecified and potentially confusing name.
> It doesn't indicate what kind of constant we're talking about,
> for instance a C programmer could be forgiven for thinking
> it means something like "123U". What do you think of "USCONST",
> following UIDENT's lead of prefixing U onto whatever the
> underlying token type is?

Makes perfect sense. Grepping through the source tree, indeed it seems the replication command scanner is using UCONST for digits.

Some other cosmetic adjustments in ECPG parser.c:

- Previously I had a WIP comment about 2 functions that are copies from elsewhere. In v11 I just noted that they are copied.
- I thought it'd be nicer if ECPG spelled UESCAPE in caps when reconstructing the string.
- Corrected a copy-paste-o in a comment.

Also:

- reverted some spurious whitespace changes
- revised the scan.l comment about the performance benefits of no backtracking
- split the ECPG C-comment scanning cleanup into a separate patch, as I did for v6. I include it here since it's related (merging scanner states), but not relevant to making the core scanner smaller.
- wrote draft commit messages

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
John Naylor <john.naylor@2ndquadrant.com> writes:
> [ v11 patch ]

I pushed this with some small cosmetic adjustments.

One non-cosmetic adjustment I experimented with was to change str_udeescape() to overwrite the source string in-place, since we know that's modifiable storage and de-escaping can't make the string longer. I reasoned that saving a palloc() might help reduce the extra cost of UESCAPE processing. It didn't seem to move the needle much though, so I didn't commit it that way. A positive reason to keep the API as it stands is that if we do something about the idea of allowing Unicode strings in non-UTF8 backend encodings, that'd likely break the assumption about how the string can't get longer.

I'm about to go off and look at the non-UTF8 idea, btw.

			regards, tom lane
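The reason the in-place idea is safe today is that every escape form is at least as long as the UTF-8 bytes it produces: \XXXX is 5 input bytes yielding at most 3 bytes of UTF-8, and \+XXXXXX is 8 yielding at most 4, so the write pointer can never overtake the read pointer. A standalone sketch of that invariant (illustration only, not the committed code; it handles just the 4-digit form and doubled escape characters):

#include <ctype.h>
#include <stdio.h>

static unsigned int
hexval(unsigned char c)
{
	if (c >= '0' && c <= '9')
		return c - '0';
	if (c >= 'a' && c <= 'f')
		return c - 'a' + 0xA;
	return c - 'A' + 0xA;		/* caller already checked isxdigit() */
}

/* minimal UTF-8 encoder, enough for code points below 0x10000 */
static int
to_utf8(unsigned int cp, unsigned char *out)
{
	if (cp < 0x80)
	{
		out[0] = (unsigned char) cp;
		return 1;
	}
	if (cp < 0x800)
	{
		out[0] = (unsigned char) (0xC0 | (cp >> 6));
		out[1] = (unsigned char) (0x80 | (cp & 0x3F));
		return 2;
	}
	out[0] = (unsigned char) (0xE0 | (cp >> 12));
	out[1] = (unsigned char) (0x80 | ((cp >> 6) & 0x3F));
	out[2] = (unsigned char) (0x80 | (cp & 0x3F));
	return 3;					/* still shorter than the 5-byte \XXXX form */
}

int
main(void)
{
	char		str[] = "d\\0061t\\0061";	/* body of U&'d\0061t\0061' */
	char	   *in = str;
	char	   *out = str;

	while (*in)
	{
		if (in[0] == '\\' && in[1] == '\\')
		{
			*out++ = '\\';		/* 2 input bytes become 1 output byte */
			in += 2;
		}
		else if (in[0] == '\\' &&
				 isxdigit((unsigned char) in[1]) &&
				 isxdigit((unsigned char) in[2]) &&
				 isxdigit((unsigned char) in[3]) &&
				 isxdigit((unsigned char) in[4]))
		{
			unsigned int cp = (hexval(in[1]) << 12) | (hexval(in[2]) << 8) |
				(hexval(in[3]) << 4) | hexval(in[4]);

			/* "out" advances by at most 3 while "in" advances by 5 */
			out += to_utf8(cp, (unsigned char *) out);
			in += 5;
		}
		else
			*out++ = *in++;
	}
	*out = '\0';

	printf("%s\n", str);		/* prints "data" */
	return 0;
}

With escapes translated into some other multibyte server encoding instead of UTF-8, that guarantee would no longer hold, which is the concern noted above.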
On Tue, Jan 14, 2020 at 4:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> John Naylor <john.naylor@2ndquadrant.com> writes:
> > [ v11 patch ]
>
> I pushed this with some small cosmetic adjustments.

Thanks for your help hacking on the token filter.

--
John Naylor  https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services