Thread: Character Encoding problem

Character Encoding problem

From
"antony baxter"
Date:
Hi,

I'm having a character set problem, and I wonder if anyone here could
sanity check what I'm doing. It might well be that the problem lies
elsewhere.

My database was created with -E UNICODE, and when I do a \l in psql it
is listed as UTF8.

My Java application is receiving data over a socket which is encoded
in UTF-8. I'm logging this and it displays e.g. Cyrillic or Greek
correctly (using OS X Terminal.app, which supports UTF-8, tailing the
log with 'less' and the environment variable LESSCHARSET=utf-8).

I'm persisting this data using the latest 8.3 JDBC drivers into
PostgreSQL 8.3.0. I'm not changing the client_encoding (I tried, but I
understand that the JDBC drivers set it to UNICODE anyway, and throw
an error if I attempt to change it to anything else). The data writes
fine, and if I then do a SELECT and a resultSet.getString(x) and write
the output to the log, everything still looks fine. I'm therefore
satisfied that Java + JDBC drivers + PostgreSQL are able to write &
read the data fine.  So far so good.

However, if I try to look at the data using psql, it is mangled. If I
try a manual UPDATE via psql using the data cut'n'pasted from my log,
and then look at the data, it reads correctly. Therefore I know that
psql is capable of reading and writing UTF8 data correctly. Also, the
client application that reads from my database is Perl, and this also
retrieves mangled data; we've tried writing and reading directly from
Perl, and in this case reviewing the data in psql looks normal.

The conclusion I've reached is that Java + JDBC is not actually
persisting the data in UTF-8; is that correct or am I wildly off base,
and if it is correct then is there anything I can do about it?!

Many thanks,

Ant.

Re: Character Encoding problem

From
"antony baxter"
Date:
One thing I forgot to add; I also tried e.g.:

  ps.setString(1,
      new String(Charset.forName("UTF-8").encode(myString).array(), "UTF-8"));

to be absolutely certain that I was passing UTF-8 to the database; this threw a

22047 [Thread-2] DEBUG com.test.database.postgresql.Dao  - PSQL
Exception State: 22021
22047 [Thread-2] DEBUG com.test.database.postgresql.Dao  - PSQL
Exception Message: invalid byte sequence for encoding "UTF8": 0x00
22051 [Thread-2] ERROR com.test.database.postgresql.Dao  - Error Storing Data:
org.postgresql.util.PSQLException: ERROR: invalid byte sequence for
encoding "UTF8": 0x00
   at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
   at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
   at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
   at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
   at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
   at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:343)
   at com.test.database.postgresql.Dao.store(Dao.java:197)
   ...

I presume that this is because the JDBC driver is expecting the JVM's
internal UTF16 String representation?

Ant



On Mon, Apr 7, 2008 at 8:29 AM, antony baxter <antony.baxter@gmail.com> wrote:
> [quoted text snipped]

Re: Character Encoding problem

From
Craig Ringer
Date:
antony baxter wrote:
> However, if I try to look at the data using psql, it is mangled.
What's your terminal encoding? The easiest way to find out is to look at
the output of the `locale' command.

What is the output of the psql '\encoding' command when you've just made
a new connection, and without touching the client_encoding setting?

If your terminal encoding is not utf-8 then setting client_encoding to
utf-8 in psql is incorrect. You should set it to your local 8 bit
encoding (say, latin-1) so psql knows to convert text between the DB
encoding and local encoding.

I'm a little surprised that psql doesn't set client_encoding by default
based on the LC_ALL / LANG environment variables, but it doesn't seem
to. I'm in a UTF-8 locale (en_AU.UTF-8) and even if I run psql as:

LANG=en_AU LC_ALL=en_AU psql

(en_AU is a LATIN-1 locale by default) it still selects a UTF-8 client
encoding. If I spawn a terminal in the en_AU locale and do this I get
mangled text from psql until I set client_encoding manually.

Shouldn't psql set client_encoding based on the user's locale?
>  If I
> try a manual UPDATE via psql using the data cut'n'pasted from my log,
> and then look at the data, it reads correctly.
That's not necessarily correct. If your terminal encoding is not UTF-8
then `less' is converting the UTF-8 encoded data in the log to your
locale encoding. Copying and pasting that from a terminal displaying it
with less will copy the /translated/ data, not the original UTF-8 data.

To perform a valid test you'd need to extract the raw byte sequence from
the log, encode it as an escaped octal/hex string in psql, and send that.
>  Therefore I know that
> psql is capable of reading and writing UTF8 data correctly.
It is, but I'm not sure that your tests show that. I think you're
silently inserting data in a different (non-UTF-8) encoding into the
database.

If you can compare the byte sequences of the value when you've just set
it from Java to when you've just set it from psql I could tell you for
sure. Casting the text to a bytea should do the job, eg running:

SELECT problem_column::bytea FROM ...

after you insert/update from java, and again after you insert/update
from psql.


Alternately, just insert the UTF-8 string you want as a bytea. Here's an
example string for you (UTF-8 encoded in my mail client):

SELECT 'äëïöüáéíóú©讓您搜尋和瀏覽最新資訊。'::bytea;

the string is aeiou (all with umlaut), aeiou (all with acute), copyright
symbol, then a bunch of simplified chinese characters from google news.
If it doesn't appear that way in your mail client then it's not
honouring the content encoding set in the message, and you need to
manually tell it to handle the message as utf-8.

The resulting string of UTF-8 encoded bytes is:


E'\303\244\303\253\303\257\303\266\303\274\303\241\303\251\303\255\303\263\303\272\302\251\350\256\223\346\202\250\346\220\234\345\260\213\345\222\214\347\200\217\350\246\275\346\234\200\346\226\260\350\263\207\350\250\212\343\200\202'

If you INSERT that string into a text/varchar column in the DB as an
escaped string as above, then SELECT it back as bytea,  you should get
the original sequence of octal values back. If you select it as text,
you should get the same value as the string shown above in my email. If
you insert the non-escaped string as written in my email and select that
back out as bytea, you should get the same sequence of octal values as
shown in my mail.
> Also, the
> client application that reads from my database is Perl, and this also
> retrieves mangled data; we've tried writing and reading directly from
> Perl, and in this case reviewing the data in psql looks normal.
>
What locale is your perl app running in?

Are you explicitly handling conversions from utf-8 byte strings from
psql to the local 8 bit encoding, or setting client_encoding to your
local 8 bit encoding to get the pg libraries to do it for you?
> The conclusion I've reached is that Java + JDBC is not actually
> persisting the data in UTF-8; is that correct or am I wildly off base,
> and if it is correct then is there anything I can do about it?!
>
It's actually rather a lot more likely that the rest of your tests are
inserting mangled data and then retrieving it by unmangling it the same
way, and the Java code is getting it right. I see this sort of thing a
lot when people are working with 8-bit text of different encodings, or
converting between "Unicode" datatypes and 8-bit strings that don't have
an inherent encoding.

--
Craig Ringer

Re: Character Encoding problem

From
Craig Ringer
Date:
antony baxter wrote:
> One thing I forgot to add; I also tried e.g.:
>
>   ps.setString(1, new
> String(Charset.forName("UTF-8").encode(myString).array(), "UTF-8"));
>
That should just be a no-op in encoding terms. Could the ByteBuffer
returned by encode(...) contain trailing NULL values that CharsetDecoder
retains in the decoded string?
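That trailing padding is easy to demonstrate. A minimal sketch (the
class name and the string value 'Брэнт' are illustrative, not from the
original code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class EncodeRoundTrip {
    public static void main(String[] args) throws Exception {
        String myString = "Брэнт"; // illustrative value

        // Charset.encode() returns a flipped ByteBuffer: the encoded bytes
        // sit between position and limit, but array() exposes the whole
        // backing array, which may be longer and padded with 0x00 bytes,
        // the same 0x00 that PostgreSQL rejects.
        ByteBuffer buf = Charset.forName("UTF-8").encode(myString);
        byte[] padded = buf.array();
        System.out.println("backing array " + padded.length
                + " bytes, encoded data " + buf.remaining() + " bytes");

        // Copying only the encoded bytes gives a clean round trip.
        byte[] exact = new byte[buf.remaining()];
        buf.get(exact);
        System.out.println(new String(exact, "UTF-8").equals(myString)); // true
    }
}
```

The safe pattern is to respect the buffer's remaining() rather than
trusting array().length.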

Are the lengths of `myString' and the converted string the same?

If you print them out character by character what're the character values?

--
Craig Ringer

Re: Character Encoding problem

From
Craig Ringer
Date:
antony baxter wrote:

> Displaying 'input' character by character:
> Character 0 = '8211'
> Character 1 = '235'
> Character 2 = '8212'
> Character 3 = '196'
> Character 4 = '8212'
> Character 5 = '231'
> Character 6 = '8211'
> Character 7 = '937'
> Character 8 = '8212'
> Character 9 = '199'

There's your problem. Your *input* is mangled.

The above decodes to:

--e"---A"---c,--?---C,

So at some point you or some library you're using has done something
like read a utf-8 byte sequence from a file and shoved it character by
character into a String. Another possible culprit is a wrong (implicit?)
encoding conversion or cast from a byte array type to a unicode string type.

The JDBC is storing exactly what you tell it to, and the good 'ol GIGO
rule is being applied.
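For what it's worth, those character values are exactly what you get
when UTF-8 bytes are decoded with the MacRoman charset (the JVM default
on OS X at the time). A sketch reproducing the mangling; the value
'Брэнт' is taken from later in the thread, and MacRoman is a guess at
the culprit charset:

```java
public class MangleDemo {
    public static void main(String[] args) throws Exception {
        // 'Брэнт' is the value seen later in the thread; MacRoman is an
        // assumption, chosen because it matches the character values above.
        String original = "Брэнт";
        byte[] utf8 = original.getBytes("UTF-8");

        // Decoding UTF-8 bytes with the wrong 8-bit charset yields the
        // en dash / e-umlaut / em dash pattern quoted above.
        String mangled = new String(utf8, "MacRoman");
        for (int i = 0; i < mangled.length(); i++) {
            System.out.println("Character " + i + " = '"
                    + (int) mangled.charAt(i) + "'");
        }
    }
}
```

Run against 'Брэнт' this prints the exact sequence quoted above: 8211,
235, 8212, 196, 8212, 231, 8211, 937, 8212, 199.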

--
Craig Ringer

Re: Character Encoding problem

From
Craig Ringer
Date:
antony baxter wrote:
> Hi Craig - thanks for replying.
>
>>  What's your terminal encoding? The easiest way to find out is to look at
>> the output of the `locale' command.
>
> ant@home (/Users/ant) % locale
> LANG="en_GB.UTF-8"

Hmm, ok. I was expecting a non-utf-8 locale.

>
>>  What is the output of the psql '\encoding' command when you've just made a
>> new connection, and without touching the client_encoding setting?
>
> testdb=# \encoding
> UTF8

OK, so your psql client encoding matches your db locale and your local 8
bit text codec for your terminal etc.

>>  To perform a valid test you'd need to extract the raw byte sequence from
>> the log, encode it as an escaped octal/hex string in psql, and send that.
>
> Ok. I tried cutting and pasting just some cyrillic from my log file
> into 'test.txt' and then did:
>
> ant@home (/Users/ant) % file test.txt
> test.txt: UTF-8 Unicode text
>
> Which I assumed was enough?

Only because all your locales are the same. Copy and paste via a
terminal is potentially subject to text encoding conversions by tools
like `less' and `vim'.

A reliable test would be to use something like Perl or Python to open
the file (in plain old 8-bit binary mode, no encoding conversions etc),
seek to the appropriate part, and read the sequence of bytes of interest.

> Ok. Directly after the Java INSERT:
>
> testdb=# select name::bytea from testTable where id = '1';
>                                               first_name
> ------------------------------------------------------------------------------------------------------
>  \342\200\223\303\253\342\200\224\303\204\342\200\224\303\247\342\200\223\316\251\342\200\224\303\207
> (1 row)

That decodes as:

–ë—Ä—ç–Ω—Ç

ie emdash, e-umlaut, emdash, capital-a-umlaut, emdash, c-cedilla,
emdash, omega, emdash, capital-c-cedilla.

... which isn't exactly cyrillic ;-)

More to the point, the regularly inserted emdashes are very informative,
as they suggest that something is mangling the UTF-8 escape bytes.

> Then I manually UPDATE and try again:
>
> testdb=# select name::bytea from testTable where id = '1';
>                 first_name
> ------------------------------------------
>  \320\221\321\200\321\215\320\275\321\202

... which decodes as a much more sensible 'Брэнт'. At least it's cyrillic.


>>  If you INSERT that string into a text/varchar column in the DB as an
>> escaped string as above, then SELECT it back as bytea,  you should get the
>> original sequence of octal values back. If you select it as text, you should
>> get the same value as the string shown above in my email. If you insert the
>> non-escaped string as written in my email and select that back out as bytea,
>> you should get the same sequence of octal values as shown in my mail.
>
> Both appear correct - i.e. SELECT name shows me correctly accented +
> Chinese text, and SELECT name::bytea shows me the same bytes.

Yep, it's now clear that you're getting the right behaviour from psql.
Probably perl, too, since it's the same 8-bit encoding all the way through.

>>  Are you explicitly handling conversions from utf-8 byte strings from psql
>> to the local 8 bit encoding, or setting client_encoding to your local 8 bit
>> encoding to get the pg libraries to do it for you?
>
> Neither, as far as I'm aware. I'm running everything (I think) in UTF-8.

It's probably a good idea to set the client encoding explicitly to your
local encoding anyway, or do explicit utf-8 <> local8bit conversions on
data from the DB. Testing your perl code in a non-utf-8 locale would be
a worthwhile check, too.

>>  It's actually rather a lot more likely that the rest of your tests are
>> inserting mangled data and then retrieving it by unmangling it the same way,
>> and the Java code is getting it right. I see this sort of thing a lot when
>> people are working with 8-bit text of different encodings, or converting
>> between "Unicode" datatypes and 8-bit strings that don't have an inherent
>> encoding.
>
> Yes - I think that's probably right, except that everything else
> *seems* to be functioning correctly...

Yep, my bad. So now we look at your strings in Java. Are you certain
that the input you're feeding to the JDBC is correctly encoded utf-8
text? I've frequently seen a "unicode" string like java.lang.String or
Qt's QString being filled character by character from 8-bit input,
resulting in a hopelessly mangled string for non-ascii input, especially
in locales where one character isn't necessarily one byte. Treating
utf-8 input as ascii or latin-1 (maybe due to an implicit conversion
somewhere) is another culprit.
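One common way that happens, offered here only as a guess, is reading
the socket through an InputStreamReader without an explicit charset, so
the platform default silently decodes the bytes. A sketch, using a
ByteArrayInputStream to stand in for the socket stream:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;

public class SocketDecodeSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for socket.getInputStream(); the wire carries UTF-8.
        byte[] wire = "Брэнт\n".getBytes("UTF-8");

        // Suspect pattern: no charset argument, so the platform default
        // (MacRoman on OS X JVMs of that era) decodes the bytes.
        BufferedReader suspect = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(wire)));

        // Safe pattern: name the charset explicitly.
        BufferedReader explicit = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(wire), "UTF-8"));
        System.out.println(explicit.readLine()); // Брэнт
    }
}
```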

--
Craig Ringer


Re: Character Encoding problem

From
Craig Ringer
Date:
Craig Ringer wrote:

> The above decodes to:
>
> --e"---A"---c,--?---C,

Argh, mail clients are evil.

Manually setting the encoding to utf-8 again, the output decodes to:

–ë—Ä—ç–Ω—Ç

Sorry.

The rest of the mail stands.

--
Craig Ringer