Tatsuo Ishii writes:
> However, I think we should focus on more fundamental issues
> than those trivial ones. Recently Thomas gave an idea how to deal with
> the internationalization (I18N) of PostgreSQL: create character set
> etc.

I haven't actually seen any real implementation proposal yet. We all know
(I suppose) what the requirements of this project are, and the interface is
mostly specified by SQL, but how this would work internally is still an
open question.

If we encode the character set into the header of the text datum, then each
and every function will have to make sure that its output value carries the
right character set.
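
Just to make that concrete, here's a rough standalone sketch of what the
datum-header route might look like. Nothing like this exists in the tree;
the struct and field names are made up for illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical datum layout: the usual length word plus a charset tag. */
typedef struct CharsetText
{
    int32_t  vl_len;    /* total length in bytes, header included */
    int16_t  charset;   /* e.g. an index into a charset catalog */
    char     data[];    /* the string bytes themselves */
} CharsetText;

static CharsetText *
make_text(const char *s, int16_t charset)
{
    size_t       len = strlen(s);
    CharsetText *t = malloc(offsetof(CharsetText, data) + len);

    t->vl_len = (int32_t) (offsetof(CharsetText, data) + len);
    t->charset = charset;
    memcpy(t->data, s, len);
    return t;
}

/*
 * Every function that returns a text value has to decide what to stamp
 * into 'charset'.  This toy concatenation simply refuses to mix charsets;
 * a real implementation would need conversion or promotion rules here.
 */
static CharsetText *
charset_textcat(const CharsetText *a, const CharsetText *b)
{
    size_t       alen = a->vl_len - offsetof(CharsetText, data);
    size_t       blen = b->vl_len - offsetof(CharsetText, data);
    CharsetText *result;

    if (a->charset != b->charset)
        return NULL;    /* or convert one side, or raise an error */

    result = malloc(offsetof(CharsetText, data) + alen + blen);
    result->vl_len = (int32_t) (offsetof(CharsetText, data) + alen + blen);
    result->charset = a->charset;
    memcpy(result->data, a->data, alen);
    memcpy(result->data + alen, b->data, blen);
    return result;
}

int
main(void)
{
    CharsetText *a = make_text("hello, ", 1);   /* pretend 1 = LATIN1 */
    CharsetText *b = make_text("world", 1);
    CharsetText *c = charset_textcat(a, b);

    printf("%d bytes, charset %d\n", c->vl_len, c->charset);
    return 0;
}

And this is only concatenation; every other text-returning function would
need the same kind of decision built into it.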

If we use the type system and create a new text type for each character
set, then we'll probably have to implement on the order of N^X functions,
operators, casts, etc., where N is the number of character sets and X is
not known yet, but greater than 1 (and that's not even thinking about
user-pluggable character sets), and we'll really uglify all the psql \d
and pg_dump work. It's not at all clear.
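
To show where the multiplication comes from, here is the same kind of
made-up sketch for the type-per-charset route: with a distinct type per
character set, even something as trivial as length() has to exist once per
character set, and that repeats for every text function, operator, and
cast in the system.

#include <stdint.h>

/* One struct type per character set ... */
typedef struct { int32_t len; char data[]; } text_latin1;
typedef struct { int32_t len; char data[]; } text_utf8;

/* ... and therefore one copy of every text function per character set. */
static int32_t
text_latin1_length(const text_latin1 *t)
{
    return t->len;                          /* one byte per character */
}

static int32_t
text_utf8_length(const text_utf8 *t)
{
    int32_t chars = 0;

    for (int32_t i = 0; i < t->len; i++)
        if ((t->data[i] & 0xC0) != 0x80)    /* skip continuation bytes */
            chars++;
    return chars;
}

/* Repeat for substr, upper, lower, ||, LIKE, and the casts between every
   pair of character sets ... */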

What I'm thinking these days is that we'd need something completely new
and unprecedented -- a separate charset mix-and-match subsystem, similar
to the type system, but different. Not a pretty outlook, of course.
--
Peter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter