Okay. I have NO IDEA why this works. If someone could enlighten me as to the math involved, I'd appreciate it. First,
a little background:
The Euro symbol is Unicode code point 0x20AC. As I understood it, UTF-8 is a way of representing most Unicode
characters in two bytes, and most Latin characters in one byte.
The only way I have found to insert a euro symbol into the database from the command line psql client is this:
INSERT INTO mytable VALUES('\342\202\254');
I don't know why this works. In hex, those octal values are (octal 342 = 3*64 + 4*8 + 2 = 226 = 0xE2, and likewise
for the other two):
E2 82 AC
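For comparison, here's a quick Java sketch (just the standard library, nothing PostgreSQL-specific) that prints the
bytes Java produces when it UTF-8-encodes U+20AC, and it gives those same three bytes:

import java.io.UnsupportedEncodingException;

public class EuroBytes {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] utf8 = "\u20AC".getBytes("UTF-8");   // encode the Euro sign as UTF-8
        for (byte b : utf8) {
            System.out.printf("%02X ", b & 0xFF);   // mask to avoid sign extension
        }
        // Prints: E2 82 AC
    }
}

So whatever psql is doing with those escapes, the bytes line up with what Java produces.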
I don't know why my "20" byte turned into two bytes of E2 and 82. Furthermore, I was under the impression that a UTF-8
encoding of the Euro sign only took two bytes. Corroborating this assumption, upon dumping that table with pg_dump and
examining the resultant file in a hex editor, I see this in that character position: AC 20
Furthermore, according to the psql online documentation and man page:
"Anything contained in single quotes is furthermore subject to C-like substitutions for \n (new line), \t (tab),
\digits, \0digits, and \0xdigits (the character with the given decimal, octal, or hexadecimal code)."
Those digits *should* be interpreted as decimal digits by that wording, but they aren't; they appear to be interpreted
as octal, which is the C convention for \digits in a string literal. The man page for psql is either incorrect, or
the implementation is buggy.
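Java inherits that same C convention, which is easy to demonstrate with a minimal sketch:

public class OctalEscape {
    public static void main(String[] args) {
        // In Java (as in C), \342 in a string literal is an octal escape.
        char c = "\342".charAt(0);
        System.out.println((int) c);    // prints 226, i.e. 0xE2
    }
}

That matches the octal reading of \342, not the decimal one the documentation describes.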
It's worth noting that the field I'm inserting into lives in an SQL_ASCII database, and I'm reading my UTF-8 string
out of it like this, via JDBC:
String value = new String(resultset.getBytes(1), "UTF-8");
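Spelled out, the read path looks roughly like this (the connection URL and query are placeholders for my setup; the
getBytes/new String part is the bit that matters):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadEuro {
    public static void main(String[] args) throws Exception {
        // Placeholder connection URL; adjust for the actual server and database.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/mydb");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM mytable")) {
            while (rs.next()) {
                // Fetch the raw bytes and decode them as UTF-8 by hand, since the
                // SQL_ASCII database stores whatever bytes were inserted, verbatim.
                String value = new String(rs.getBytes(1), "UTF-8");
                System.out.println(value);
            }
        }
    }
}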
Can anyone help me make sense of this mumbo jumbo?
-Roland