Thread: [rfc] unicode escapes for extended strings
Seems I'm bad at communicating in English, so here is the C variant of my proposal to bring \u escaping into extended strings.

Reasons:

- More people are familiar with \u escaping, as it's standard
  in Java/C#/Python, and probably more.
- U& strings will not work when stdstr=off.

Syntax:

  \uXXXX     - 16-bit value
  \UXXXXXXXX - 32-bit value

Additionally, both \u and \U can be used to specify UTF-16 surrogate pairs to encode characters with value > 0xFFFF. This is the exact behaviour used by Java/C#/Python (except that Java does not have \U).

I'm ok with this patch being left for 8.5.

-- marko
Attachment
On Thu, Apr 16, 2009 at 08:48:58PM +0300, Marko Kreen wrote:
> Seems I'm bad at communicating in english,

I hope you're not saying this because of my misunderstandings!

> so here is C variant of
> my proposal to bring \u escaping into extended strings.  Reasons:
>
> - More people are familiar with \u escaping, as it's standard
>   in Java/C#/Python, probably more..
> - U& strings will not work when stdstr=off.
>
> Syntax:
>
>   \uXXXX     - 16-bit value
>   \UXXXXXXXX - 32-bit value
>
> Additionally, both \u and \U can be used to specify UTF-16 surrogate
> pairs to encode characters with value > 0xFFFF.  This is exact behaviour
> used by Java/C#/Python.  (except that Java does not have \U)

Are you sure that this handling of surrogates is correct? The best answer I've managed to find on the Unicode consortium's site is:

  http://unicode.org/faq/utf_bom.html#utf16-7

It says:

  They are invalid in interchange, but may be freely used internal to an
  implementation.

I think this means they consider the handling of them you noted above, in other languages, to be an error.

-- Sam  http://samason.me.uk/
Sam Mason wrote:
> Are you sure that this handling of surrogates is correct?  The best
> answer I've managed to find on the Unicode consortium's site is:
>
>   http://unicode.org/faq/utf_bom.html#utf16-7
>
> It says:
>
>   They are invalid in interchange, but may be freely used internal to an
>   implementation.

It says that about non-characters, not about the use of surrogate pairs, unless I am misreading it.

cheers

andrew
On Thu, Apr 16, 2009 at 03:04:37PM -0400, Andrew Dunstan wrote:
> Sam Mason wrote:
> > Are you sure that this handling of surrogates is correct?  The best
> > answer I've managed to find on the Unicode consortium's site is:
> >
> >   http://unicode.org/faq/utf_bom.html#utf16-7
> >
> > It says:
> >
> >   They are invalid in interchange, but may be freely used internal to an
> >   implementation.
>
> It says that about non-characters, not about the use of surrogate pairs,
> unless I am misreading it.

No, I think you're probably right and I was misreading it. I went back and forth several times to explicitly check I was interpreting this correctly and still failed to get it right. Not sure what I was thinking; sorry for the hassle, Marko!

I've already asked on the Unicode list about this (no response yet), but I have a feeling I'm getting worked up over nothing.

-- Sam  http://samason.me.uk/
On 4/16/09, Sam Mason <sam@samason.me.uk> wrote:
> On Thu, Apr 16, 2009 at 08:48:58PM +0300, Marko Kreen wrote:
> > Seems I'm bad at communicating in english,
>
> I hope you're not saying this because of my misunderstandings!
>
> > so here is C variant of
> > my proposal to bring \u escaping into extended strings.  Reasons:
> >
> > - More people are familiar with \u escaping, as it's standard
> >   in Java/C#/Python, probably more..
> > - U& strings will not work when stdstr=off.
> >
> > Syntax:
> >
> >   \uXXXX     - 16-bit value
> >   \UXXXXXXXX - 32-bit value
> >
> > Additionally, both \u and \U can be used to specify UTF-16 surrogate
> > pairs to encode characters with value > 0xFFFF.  This is exact behaviour
> > used by Java/C#/Python.  (except that Java does not have \U)
>
> Are you sure that this handling of surrogates is correct?  The best
> answer I've managed to find on the Unicode consortium's site is:
>
>   http://unicode.org/faq/utf_bom.html#utf16-7
>
> it says:
>
>   They are invalid in interchange, but may be freely used internal to an
>   implementation.
>
> I think this means they consider the handling of them you noted above,
> in other languages, to be an error.

It's up to the UTF8 validator whether to consider non-characters an error.

-- marko
On 4/16/09, Marko Kreen <markokr@gmail.com> wrote:
> It's up to the UTF8 validator whether to consider non-characters an error.

I checked, and it did not work well: addunicode() did not set the saw_high_bit variable when outputting UTF8. Attached patch fixes it.

Currently it would be a NOP, as pg_verifymbstr() only checks for invalid UTF8 and addunicode() cannot output that, but in the future we may want to reject some codes, so now it can.

Btw, is there any good reason why we don't reject \000, \x00 in text strings? Currently I made addunicode() do it, because it seems sensible.

-- marko
Attachment
On Fri, Apr 17, 2009 at 07:07:31PM +0300, Marko Kreen wrote:
> Btw, is there any good reason why we don't reject \000, \x00
> in text strings?

Why forbid nulls in text strings?

Have a nice day,

-- Martijn van Oosterhout <kleptog@svana.org>  http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while
> boarding.  Thank you for flying nlogn airlines.
On Fri, Apr 17, 2009 at 07:01:47PM +0200, Martijn van Oosterhout wrote:
> On Fri, Apr 17, 2009 at 07:07:31PM +0300, Marko Kreen wrote:
> > Btw, is there any good reason why we don't reject \000, \x00
> > in text strings?
>
> Why forbid nulls in text strings?

As far as I know, PG assumes, like most C code, that strings don't contain embedded NUL characters. The manual [1] has this to say:

  The character with the code zero cannot be in a string constant.

I believe you're supposed to use values of type "bytea" when you're expecting to deal with NUL characters.

-- Sam  http://samason.me.uk/

[1] http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS
Marko Kreen wrote:
> +    if (c > 0x7F)
> +    {
> +        if (GetDatabaseEncoding() != PG_UTF8)
> +            yyerror("Unicode escape values cannot be used for code point values above 007F when the server encoding is not UTF8");
> +        saw_high_bit = true;
> +    }

Is that really what we want to do? ISTM that one of the uses of this is to say "store the character that corresponds to this Unicode code point in whatever the database encoding is", so that \u00a9 would become an encoding-independent way of designating the copyright symbol, for instance.

cheers

andrew
Andrew Dunstan <andrew@dunslane.net> wrote:
> ISTM that one of the uses of this is to say "store the character
> that corresponds to this Unicode code point in whatever the database
> encoding is"

I would think you're right. As long as the given character is in the user's character set, we should allow it. Presumably we've already confirmed that they have an encoding scheme which allows them to store everything in their character set.

-Kevin
On 4/17/09, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:
> Andrew Dunstan <andrew@dunslane.net> wrote:
> > ISTM that one of the uses of this is to say "store the character
> > that corresponds to this Unicode code point in whatever the database
> > encoding is"
>
> I would think you're right.  As long as the given character is in the
> user's character set, we should allow it.  Presumably we've already
> confirmed that they have an encoding scheme which allows them to store
> everything in their character set.

It is probably a good idea, but currently I just followed what the U& strings do.

I can change my patch to do it, but it is probably more urgent to decide in the U& case whether they should work in other encodings too.

-- marko
Marko Kreen wrote:
> On 4/17/09, Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:
> > Andrew Dunstan <andrew@dunslane.net> wrote:
> > > ISTM that one of the uses of this is to say "store the character
> > > that corresponds to this Unicode code point in whatever the database
> > > encoding is"
> >
> > I would think you're right.  As long as the given character is in the
> > user's character set, we should allow it.  Presumably we've already
> > confirmed that they have an encoding scheme which allows them to store
> > everything in their character set.
>
> It is probably good idea, but currently I just followed what the U&
> strings do.
>
> I can change my patch to do it, but it is probably more urgent in U&
> case to decide whether they should work in other encodings too.

Indeed. What does the standard say about the behaviour of U&'' ?

cheers

andrew
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: > Andrew Dunstan <andrew@dunslane.net> wrote: >> ISTM that one of the uses of this is to say "store the character >> that corresponds to this Unicode code point in whatever the database >> encoding is" > I would think you're right. As long as the given character is in the > user's character set, we should allow it. Presumably we've already > confirmed that they have an encoding scheme which allows them to store > everything in their character set. This is a good way to get your patch rejected altogether. The lexer is *not* allowed to invoke any database operations (such as pg_conversion lookups) so it cannot perform arbitrary encoding conversions. If this sort of facility is what you want, the previously suggested approach via a decode-like runtime function is a better fit. regards, tom lane
Sam Mason <sam@samason.me.uk> writes:
> On Fri, Apr 17, 2009 at 07:01:47PM +0200, Martijn van Oosterhout wrote:
> > On Fri, Apr 17, 2009 at 07:07:31PM +0300, Marko Kreen wrote:
> > > Btw, is there any good reason why we don't reject \000, \x00
> > > in text strings?
> >
> > Why forbid nulls in text strings?
>
> As far as I know, PG assumes, like most C code, that strings don't
> contain embedded NUL characters.

Yeah; we should reject them because nothing will behave very sensibly with them, e.g.

regression=# select E'abc\000xyz';
 ?column?
----------
 abc
(1 row)

The point has come up before, and I kinda thought we *had* changed the lexer to reject \000. I see we haven't, though. Curiously, this does fail:

regression=# select U&'abc\0000xyz';
ERROR:  invalid byte sequence for encoding "SQL_ASCII": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

though that's not quite the message I'd have expected to see.

regards, tom lane
On 4/18/09, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> > I would think you're right.  As long as the given character is in the
> > user's character set, we should allow it.  Presumably we've already
> > confirmed that they have an encoding scheme which allows them to store
> > everything in their character set.
>
> This is a good way to get your patch rejected altogether.  The lexer
> is *not* allowed to invoke any database operations (such as
> pg_conversion lookups) so it cannot perform arbitrary encoding
> conversions.

Ok. I was just thinking that if such a conversion can be provided easily, it should be done. But if not, then there is no need to make things complex.

Seems the proper way to look at it is that Unicode escapes have a straightforward meaning only in UTF8 encoding, so it should be fine to limit them to ASCII in other encodings.

> If this sort of facility is what you want, the previously suggested
> approach via a decode-like runtime function is a better fit.

I'm a UTF8-only kind of guy, so people who actually have experience of using other encodings must comment on that one.

-- marko
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> The lexer is *not* allowed to invoke any database operations
> (such as pg_conversion lookups)

I certainly hope it's not!

> so it cannot perform arbitrary encoding conversions.

I was more questioning whether we should be looking at character encodings at all at that point, rather than suggesting conversions between different ones. If committing the escape sequence to a particular encoding is unavoidable at that point, then I suppose the code in question is about as good as it gets.

-Kevin
On 4/18/09, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Yeah; we should reject them because nothing will behave very sensibly
> with them, eg
>
> regression=# select E'abc\000xyz';
>  ?column?
> ----------
>  abc
> (1 row)
>
> The point has come up before, and I kinda thought we *had* changed the
> lexer to reject \000.  I see we haven't though.  Curiously, this
> does fail:
>
> regression=# select U&'abc\0000xyz';
> ERROR:  invalid byte sequence for encoding "SQL_ASCII": 0x00
> HINT:  This error can also happen if the byte sequence does not match the
> encoding expected by the server, which is controlled by "client_encoding".
>
> though that's not quite the message I'd have expected to see.

I think that's because our verifier actually *does* reject \0; the only problem is that \0 does not set the saw_high_bit flag, so the verifier simply does not get executed. But U& executes it always.

unicode=# SELECT e'\xc3\xa4';
 ?column?
----------
 ä
(1 row)

unicode=# SELECT e'\xc3\xa4\x00';
ERROR:  invalid byte sequence for encoding "UTF8": 0x00
HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".

Heh.

-- marko
Marko Kreen <markokr@gmail.com> writes:
> On 4/18/09, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > The point has come up before, and I kinda thought we *had* changed the
> > lexer to reject \000.  I see we haven't though.  Curiously, this
> > does fail:
> >
> > regression=# select U&'abc\0000xyz';
> > ERROR:  invalid byte sequence for encoding "SQL_ASCII": 0x00
>
> I think that's because our verifier actually *does* reject \0,
> only problem is that \0 does not set saw_high_bit flag,
> so the verifier simply does not get executed.
> But U& executes it always.

I fixed this in HEAD.

regards, tom lane
Unicode escapes for extended strings.

On 4/16/09, Marko Kreen <markokr@gmail.com> wrote:
> Reasons:
>
> - More people are familiar with \u escaping, as it's standard
>   in Java/C#/Python, probably more..
> - U& strings will not work when stdstr=off.
>
> Syntax:
>
>   \uXXXX     - 16-bit value
>   \UXXXXXXXX - 32-bit value
>
> Additionally, both \u and \U can be used to specify UTF-16 surrogate
> pairs to encode characters with value > 0xFFFF.  This is exact behaviour
> used by Java/C#/Python.  (except that Java does not have \U)

v3 of the patch:

- convert to the new reentrant lexer API
- add lexer targets to avoid fallback to the default rule
- completely disallow \u/\U without the proper number of hex digits
- fix a logic bug in surrogate pair handling

-- marko
Attachment
On Wed, 2009-09-09 at 18:26 +0300, Marko Kreen wrote:
> Unicode escapes for extended strings.
>
> v3 of the patch:
>
> - convert to new reentrant lexer API
> - add lexer targets to avoid fallback to default
> - completely disallow \U\u without proper number of hex values
> - fix logic bug in surrogate pair handling

This looks good to me. I'm implementing the surrogate pair handling for the U& syntax for consistency. Then I'll apply this.
On Wed, 2009-09-09 at 18:26 +0300, Marko Kreen wrote:
> Unicode escapes for extended strings.
>
> v3 of the patch:
>
> - convert to new reentrant lexer API
> - add lexer targets to avoid fallback to default
> - completely disallow \U\u without proper number of hex values
> - fix logic bug in surrogate pair handling

Committed.
On 9/23/09, Peter Eisentraut <peter_e@gmx.net> wrote:
> On Wed, 2009-09-09 at 18:26 +0300, Marko Kreen wrote:
> > Unicode escapes for extended strings.
>
> Committed.

Thank you for handling the patch.

I looked at your code for U& and saw that you allow a standalone second half of a surrogate pair there, although you error out on the first half. Was that deliberate?

Standalone surrogate halves cause headaches for anything that wants to handle data in UTF16. The range 0xD800-0xDFFF is explicitly reserved for UTF16 encoding and does not contain any valid Unicode codepoints.

Perhaps pg_verifymbstr() should be made to check for such values, because even if we fix the escaping code, such data can still be inserted via plain utf8 or \x escapes?

-- marko
On Wed, 2009-09-23 at 22:46 +0300, Marko Kreen wrote:
> I looked at your code for U& and saw that you allow standalone
> second half of the surrogate pair there, although you error
> out on first half.  Was that deliberate?

No.

> Perhaps pg_verifymbstr() should be made to check for such values,
> because even if we fix the escaping code, such data can still be
> inserted via plain utf8 or \x escapes?

Good idea. This could also check for other invalid things like byte-order marks in UTF-8.
On Thu, Sep 24, 2009 at 09:42:32PM +0300, Peter Eisentraut wrote:
> On Wed, 2009-09-23 at 22:46 +0300, Marko Kreen wrote:
[...]
> Good idea.  This could also check for other invalid things like
> byte-order marks in UTF-8.

But watch out. Microsoft apps do like to insert a BOM at the beginning of the text. Not that I think it's a good idea, but the Unicode folks seem to think it's OK [1] :-(

[1] <http://unicode.org/faq/utf_bom.html#bom5>

Regards
-- tomás
On 9/25/09, tomas@tuxteam.de <tomas@tuxteam.de> wrote:
> On Thu, Sep 24, 2009 at 09:42:32PM +0300, Peter Eisentraut wrote:
> > Good idea.  This could also check for other invalid things like
> > byte-order marks in UTF-8.
>
> But watch out.  Microsoft apps do like to insert a BOM at the beginning
> of the text.  Not that I think it's a good idea, but the Unicode folks
> seem to think it's OK [1] :-(

As a BOM does not actively break transport layers, it's less clear-cut whether to reject it. It could be said that a BOM at the start of a string is OK; a BOM in the middle of a string is more rejectable. But it will only confuse some high-level character counters, not low-level encoders.

-- marko
Marko Kreen wrote:
> On 9/25/09, tomas@tuxteam.de <tomas@tuxteam.de> wrote:
> > But watch out.  Microsoft apps do like to insert a BOM at the beginning
> > of the text.  Not that I think it's a good idea, but the Unicode folks
> > seem to think its OK [1] :-(
>
> As BOM does not actively break transport layers, it's less clear-cut
> whether to reject it.  It could be said that BOM at the start of string
> is OK.  BOM at the middle of string is more rejectable.  But it will
> only confuse some high-level character counters, not low-level encoders.

It seems pretty clear from the URL that Tomas posted that we should not treat a BOM specially at all, and just treat it as another Unicode char.

cheers

andrew