Re: [HACKERS] UNICODE characters above 0x10000 - Mailing list pgsql-patches

From Tatsuo Ishii
Subject Re: [HACKERS] UNICODE characters above 0x10000
Date
Msg-id 20040808.111759.02304017.t-ishii@sra.co.jp
In response to Re: [HACKERS] UNICODE characters above 0x10000  (Oliver Jowett <oliver@opencloud.com>)
Responses Re: [HACKERS] UNICODE characters above 0x10000  (Oliver Jowett <oliver@opencloud.com>)
List pgsql-patches
> Tom Lane wrote:
>
> > If I understood what I was reading, this would take several things:
> > * Remove the "special UTF-8 check" in pg_verifymbstr;
> > * Extend pg_utf2wchar_with_len and pg_utf_mblen to handle the 4-byte case;
> > * Set maxmblen to 4 in the pg_wchar_table[] entry for UTF-8.
> >
> > Are there any other places that would have to change?  Would this break
> > anything?  The testing aspect is what's bothering me at the moment.
>
> Does this change what client_encoding = UNICODE might produce? The JDBC
> driver will need some tweaking to handle this -- Java uses UTF-16
> internally and I think some supplementary character (?) scheme for
> values above 0xffff as of JDK 1.5.
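
Concretely, the supplementary-character scheme means a code point above 0xffff occupies two UTF-16 code units (a surrogate pair) in a Java String, while client_encoding = UNICODE would send it as a single 4-byte UTF-8 sequence. A minimal sketch of this (class name is mine, not from the driver; `Character.toChars` is the JDK 1.5 API):

```java
import java.nio.charset.StandardCharsets;

public class SupplementaryDemo {
    public static void main(String[] args) {
        // U+1D11C (a musical symbol) lies above the BMP.
        int codePoint = 0x1D11C;
        String s = new String(Character.toChars(codePoint));

        // In UTF-16 it is a surrogate pair: two chars, one code point.
        System.out.println(s.length());                        // 2
        System.out.println(s.codePointCount(0, s.length()));   // 1

        // In UTF-8 the same code point is a single 4-byte sequence.
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length);                       // 4
    }
}
```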

Java doesn't handle UCS above 0xffff? I didn't know that. As long as
data goes in and out only through JDBC, it shouldn't be a problem.
However, if other APIs put in such data, you will get into trouble...
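For reference, the 4-byte case that extending pg_utf_mblen implies is determined by the leading byte alone. A sketch of that test, written here in Java rather than the actual C of pg_utf_mblen (the class and method names are made up for illustration):

```java
public class Utf8Len {
    // Length of a UTF-8 sequence from its first byte, in the spirit of
    // pg_utf_mblen; the 11110xxx pattern is the new 4-byte case.
    static int utf8SequenceLength(int firstByte) {
        int b = firstByte & 0xff;
        if ((b & 0x80) == 0)    return 1;  // 0xxxxxxx: ASCII
        if ((b & 0xe0) == 0xc0) return 2;  // 110xxxxx
        if ((b & 0xf0) == 0xe0) return 3;  // 1110xxxx
        if ((b & 0xf8) == 0xf0) return 4;  // 11110xxx: code points >= 0x10000
        return -1;                         // invalid leading byte
    }

    public static void main(String[] args) {
        System.out.println(utf8SequenceLength(0xF0)); // 4
    }
}
```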
--
Tatsuo Ishii
