Heikki Linnakangas <hlinnakangas@vmware.com> writes:
> He *is* using UTF-8. Or trying to, anyway :-). The downcasing in the
> backend is supposed to leave bytes with the high-bit set alone, ie. in
> UTF-8 encoding, it's supposed to leave ä and ß alone.
Well, actually, downcase_truncate_identifier() is doing this:
    unsigned char ch = (unsigned char) ident[i];

    if (ch >= 'A' && ch <= 'Z')
        ch += 'a' - 'A';
    else if (IS_HIGHBIT_SET(ch) && isupper(ch))
        ch = tolower(ch);
There's basically no way that that second case can give pleasant results
in a multibyte encoding, other than by not doing anything. I suspect
that Windows' libc has fewer defenses than other implementations and
performs some transformation that we don't get elsewhere. This may also
explain the gripe yesterday in -general about funny results in OS X.
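To illustrate the failure mode (a hypothetical standalone test, not backend code): feeding the bytes of a UTF-8 character one at a time through isupper()/tolower() under a single-byte locale can mangle the sequence, because e.g. 0xC3, the lead byte of UTF-8 "Ä", is itself an uppercase letter ('Ã') in Latin-1:

    #include <ctype.h>
    #include <locale.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* UTF-8 "Ä" is the two bytes 0xC3 0x84 */
        unsigned char s[] = {0xC3, 0x84, 0};
        int         i;

        /* pick up whatever single-byte locale the environment provides */
        setlocale(LC_CTYPE, "");

        for (i = 0; s[i]; i++)
        {
            unsigned char ch = s[i];

            /* same byte-at-a-time test the backend code performs */
            if (isupper(ch))
                ch = tolower(ch);
            printf("byte %d: 0x%02X -> 0x%02X\n", i, s[i], ch);
        }
        return 0;
    }

In a Latin-1 locale this maps the first byte 0xC3 to 0xE3 and leaves 0x84 alone, so the identifier comes out as invalid UTF-8.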
We talked about this before and went off into the weeds about whether
it was sensible to try to use towlower() and whether that wouldn't
create undesirably platform-sensitive results. I wonder though if we
couldn't just fix this code to not do anything to high-bit-set bytes
in multibyte encodings.
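A minimal sketch of that idea (untested; it assumes pg_database_encoding_max_length() is the appropriate test for a single-byte encoding):

    unsigned char ch = (unsigned char) ident[i];

    if (ch >= 'A' && ch <= 'Z')
        ch += 'a' - 'A';
    else if (IS_HIGHBIT_SET(ch) &&
             pg_database_encoding_max_length() == 1 &&
             isupper(ch))
        ch = tolower(ch);

That is, apply the ctype.h transformation only when every byte is a complete character; in real code the encoding check would of course be hoisted out of the loop.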
			regards, tom lane