On 01/19/2012 04:12 PM, Robert Haas wrote:
> On Thu, Jan 19, 2012 at 4:07 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
>> On 01/19/2012 03:49 PM, Robert Haas wrote:
>>> In other words, let's decree that when the database encoding isn't
>>> UTF-8, *escaping* of non-ASCII characters doesn't work. But
>>> *unescaped* non-ASCII characters should still work just fine.
>> The spec only allows unescaped Unicode chars (and for our purposes that
>> means UTF8). An unescaped non-ASCII character in, say, ISO-8859-1 will
>> result in something that's not legal JSON. See
>> <http://www.ietf.org/rfc/rfc4627.txt?number=4627> section 3.
> I understand. I'm proposing that we not care. In other words, if the
> server encoding is UTF-8, it'll really be JSON. But if the server
> encoding is something else, it'll be almost-JSON. And specifically,
> the \uXXXX syntax won't work, and there might be some non-Unicode
> characters in there. If that's not the behavior you want, then use
> UTF-8.
>
> It seems pretty clear that we're going to have to make some trade-off
> to handle non-UTF8 encodings, and I think what I'm suggesting is a lot
> less painful than disabling high-bit characters altogether. If we do
> that, then what happens if a user runs EXPLAIN (FORMAT JSON) and his
> column label has a non-Unicode character in there? Should we say, oh,
> sorry, you can't explain that in JSON format? That is mighty
> unfriendly, and probably mighty complicated and expensive to figure
> out, too. We *do not support* mixing encodings in the same database,
> and if we make it the job of this patch to fix that problem, we're
> going to be in the same place for 9.2 that we have been for the last
> several releases: nowhere.
OK, then we need to say that very clearly and up front (including in the
EXPLAIN docs).
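
To illustrate why escaping can't work outside UTF-8 (in Python rather than
SQL, just to show the Unicode mechanics): a \uXXXX escape denotes a Unicode
code point, and the server encoding may simply have no representation for it.

```python
import json

# \u20ac denotes U+20AC (the euro sign) no matter what byte encoding
# the surrounding JSON text travels in.
s = json.loads('"\\u20ac"')
print(s)  # €

# ISO-8859-1 (LATIN1) has no code point for the euro sign, so a LATIN1
# database could not store the unescaped result -- hence "escaping of
# non-ASCII characters doesn't work" under such an encoding.
try:
    s.encode("iso-8859-1")
except UnicodeEncodeError as exc:
    print("not representable in LATIN1:", exc)
```

(LATIN1 and U+20AC are just a convenient example pair; the same applies to
any server encoding and any code point it lacks.)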
Of course, for data going to the client, if the client encoding is UTF8,
the client should get legal JSON regardless of what the database encoding
is, and conversely too, no?
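
A sketch of that point (again in Python, standing in for the server's
ordinary encoding conversion): the raw LATIN1 bytes are "almost-JSON",
since RFC 4627 section 3 requires JSON text to be Unicode, but the same
text transcoded for a UTF-8 client is legal JSON again.

```python
import json

# What a LATIN1 server would hold: an unescaped 0xE9 byte for 'é'.
# A conforming parser rejects it, because it is not valid Unicode text.
server_bytes = b'{"label": "caf\xe9"}'
try:
    json.loads(server_bytes)
except ValueError as exc:
    print("rejected:", exc)

# The normal server->client conversion to a UTF-8 client fixes it.
client_bytes = server_bytes.decode("latin-1").encode("utf-8")
print(json.loads(client_bytes))  # {'label': 'café'}
```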
cheers
andrew