Thread: String encoding during connection "handshake"
Hi all. I have read the documentation, searched the mailing lists and inspected the JDBC driver code. I now need to address this question to the actual developers.

Simply put, what is the client encoding that the server assumes BEFORE the client connection is established, that is, during the authentication phase? I know there's a "client_encoding" setting on the server side that indicates the encoding used in the communication stream, but its default value is the database's encoding, which is not known before the user is authenticated and the "logical" connection is actually made.

I'm asking this so that I can fix the JDBC driver, which wrongly assumes that the user name, password and database name are made up of ASCII characters only. This issue has come up before, but no action was taken. See http://archives.postgresql.org/pgsql-jdbc/2007-10/msg00128.php

I also need a vital piece of information regarding the MD5 hash that is computed in some authentication scenarios. This hash is based on char[] (String) values that must be converted into byte[] before being handed over to the hash algorithm. What encoding does the server use for this conversion during the authentication phase (in order to verify the submitted password)?

I also saw that the JDBC driver sends this pair to the server right away: {"client_encoding", "UNICODE"}. Does that mean that the client is requesting the server to interpret the communication stream as encoded in "UTF8"?

Thanks.
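For reference, a minimal Java sketch of the MD5 exchange the question refers to: the server stores md5hex(password || username), and the client answers the salt challenge with "md5" + md5hex(storedHash || salt). The charset passed to getBytes() below is precisely the ambiguity being asked about; UTF-8 there is an assumption, not something the protocol specifies.

    import java.security.MessageDigest;

    // Sketch of the MD5 scheme: the server stores md5hex(password || user);
    // during auth the client replies with "md5" + md5hex(storedHash || salt).
    // The charset used for getBytes() is exactly the open question here;
    // "UTF-8" is an assumption, not something the protocol specifies.
    public class Md5AuthSketch {

        static String md5hex(byte[] input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input);
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < digest.length; i++) {
                // two lowercase hex digits per byte
                sb.append(Integer.toString((digest[i] & 0xff) + 0x100, 16).substring(1));
            }
            return sb.toString();
        }

        static byte[] concat(byte[] a, byte[] b) {
            byte[] r = new byte[a.length + b.length];
            System.arraycopy(a, 0, r, 0, a.length);
            System.arraycopy(b, 0, r, a.length, b.length);
            return r;
        }

        static String md5Response(String user, String password, byte[] salt)
                throws Exception {
            String charset = "UTF-8"; // <-- the open question of this thread
            byte[] inner = concat(password.getBytes(charset), user.getBytes(charset));
            String stored = md5hex(inner);        // what the server keeps
            byte[] outer = concat(stored.getBytes("US-ASCII"), salt);
            return "md5" + md5hex(outer);         // what goes over the wire
        }
    }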
On Tue, Nov 27, 2007 at 02:51:32PM +0200, sulfinu@gmail.com wrote:
> Simply put, what is the client encoding that the server assumes BEFORE the
> client connection is established, that is, during the authentication phase?
> I know there's a "client_encoding" setting on the server side that
> indicates the encoding used in the communication stream, but its default
> value is the database's encoding, which is not known before the user is
> authenticated and the "logical" connection is actually made.

I was under the impression that the username/password have no encoding; they are Just a Bunch of Bits, i.e. byte[]. Hence it is not relevant what encoding the database uses; it depends on what encoding the DB admin was using when the user was created. That solves your MD5 problem.

Looking at it another way, the encoding is part of the password. The correctly entered password in the wrong encoding is also wrong, because the matching is done at the byte level.

So I suppose the answer is: whatever encoding you would like it to be / whatever the DB admin uses.

This is all AIUI. Have a nice day,
-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution
> inevitable. -- John F Kennedy
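To make the byte-level point concrete, here is a small Java demonstration (the password string is made up; the charset names are standard JDK ones): the same String yields different byte arrays, and therefore different hashes, under LATIN1 and UTF-8.

    import java.util.Arrays;

    // The same password String becomes different byte arrays under different
    // encodings, so byte-level matching (and any hash over those bytes) fails.
    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            String password = "stra\u00dfe"; // "straße", an arbitrary example
            byte[] latin1 = password.getBytes("ISO-8859-1"); // 0xDF for 'ß'
            byte[] utf8   = password.getBytes("UTF-8");      // 0xC3 0x9F for 'ß'
            System.out.println(latin1.length);               // 6
            System.out.println(utf8.length);                 // 7
            System.out.println(Arrays.equals(latin1, utf8)); // false
        }
    }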
On Tuesday 27 November 2007, Martijn van Oosterhout wrote:
> I was under the impression that the username/password have no encoding;
> they are Just a Bunch of Bits, i.e. byte[].

I cannot agree with that, simply because Postgres supports (or at least claims to support) multi-byte characters. And user names, passwords and database names are character strings.

> Looking at it another way, the encoding is part of the password. The
> correctly entered password in the wrong encoding is also wrong, because
> the matching is done at the byte level.

I'm afraid that is true to some extent; that's why I'm asking in the first place. A user should be able to authenticate as long as he/she is able to write the password, regardless of the OS's locale setting.

> This is all AIUI,

Thanks for the input. I'm waiting for others, too. Or point me to the relevant source files.
Ok, that's bad. I've also read crypt.c and md5.c. And what a nightmare C is compared to Java (granted, there's a difference in age of more than 20 years).

My guess is that since the "char" type is one byte long, all "char *" expressions are actually pointers to arrays of bytes which are transmitted over the wire and/or stored in the database. When these strings arrive from, or leave for, a client whose declared encoding differs from the database's, the string of bytes is replaced with an equivalent one from a Unicode perspective. Am I right?

During the authentication phase, no such conversion takes place - you were right and I couldn't believe it! If your database name, your user name or your password contains non-ASCII characters, you're out of luck if the stored values were submitted in another encoding by the administrator.

I assume that no name conversion takes place between client and cluster metadata when a role is created (CREATE ROLE... PASSWORD...) or when a database is created (CREATE DATABASE...). Or does it? In that case, the names are encoded in the encoding of the database that the administrator was connected to.

Thank you both.

On Tuesday 27 November 2007, Usama Munir wrote:
> Martijn is actually right. No assumption is made about the encoding of the
> password. The password is received as a set of bytes over the wire-level
> protocol and then processed according to your pg_hba settings. Please
> refer to the auth.c method recv_password_packet(Port *port). The comment
> on the last line of the method might be of interest to you, and I quote:
>
> "Return the received string. Note we do not attempt to do any character
> set conversion on it; since we don't know the client's encoding, there
> wouldn't be much point."
>
> / Usama
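Given that the server converts nothing before authentication, the only fix available on the driver side is to stop hard-coding ASCII and let the caller choose the charset for the startup-packet strings. The following is a hypothetical Java sketch of that idea, not the actual patch discussed later in this thread; "user", "database" and "client_encoding" are real protocol parameter names, while the loginCharset knob is invented for illustration.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    // Hypothetical sketch, NOT the actual JDBC patch: let the caller pick
    // the charset used to encode the startup-packet strings. The parameter
    // keys are real protocol names; loginCharset is an invented knob.
    public class StartupParams {

        // Protocol strings travel as NUL-terminated byte sequences.
        static void writeParam(ByteArrayOutputStream out, String key,
                               String value, String loginCharset) throws IOException {
            out.write(key.getBytes("US-ASCII")); // parameter keys are plain ASCII
            out.write(0);
            out.write(value.getBytes(loginCharset)); // previously ASCII-only
            out.write(0);
        }

        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            // Must match whatever bytes the admin used in CREATE ROLE/DATABASE:
            String loginCharset = "UTF-8";
            writeParam(out, "user", "m\u00fcller", loginCharset);
            writeParam(out, "database", "beispieldb", loginCharset);
            writeParam(out, "client_encoding", "UNICODE", loginCharset);
            System.out.println(out.size() + " bytes of startup parameters");
        }
    }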
On Wed, Nov 28, 2007 at 11:39:33AM +0200, sulfinu@gmail.com wrote:
> During the authentication phase, no such conversion takes place - you were
> right and I couldn't believe it! If your database name, your user name or
> your password contains non-ASCII characters, you're out of luck if the
> stored values were submitted in another encoding by the administrator.

The problem is: what conversion? You don't know the encoding of the server yet (because you haven't selected a DB) and you don't know the encoding of the client. The only real possibility is to declare One True Encoding and decree that every username/password be in that. But you're never going to get people to agree on that.

> I assume that no name conversion takes place between client and cluster
> metadata when a role is created (CREATE ROLE... PASSWORD...) or when a
> database is created (CREATE DATABASE...). Or does it? In that case, the
> names are encoded in the encoding of the database that the administrator
> was connected to.

Honestly, UNIX usernames/passwords have always worked like this, so we're not really doing anything weird by doing it this way. Users need to type the password in the same encoding it was added in. It's not usually a big deal because people set their own passwords...

Have a nice day,
-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
Martijn,

:) don't take it personally, I am just trying to get confirmation that I understood the problem well. After all, it's just that C has a very outdated notion of "char"s (and no notion of Unicode). I was naively under the impression that "char"s had evolved in present-day C.

Regarding the problem of "One True Encoding", the answer seems obvious to me: use only one encoding per database cluster, either UTF-8 or UTF-16 or another Unicode-aware scheme, whichever yields a statistically smaller database for the languages employed by the users in their data. This encoding should be a one-time choice! De facto, this is already happening now, because one cannot change collation rules after a cluster has been created. During the handshake, all clients should be assumed to send data in the cluster's encoding.

Have a nice day, too.

On Wednesday 28 November 2007, Martijn van Oosterhout wrote:
> The problem is: what conversion? You don't know the encoding of the server
> yet (because you haven't selected a DB) and you don't know the encoding of
> the client. The only real possibility is to declare One True Encoding and
> decree that every username/password be in that. But you're never going to
> get people to agree on that.
>
> Honestly, UNIX usernames/passwords have always worked like this, so we're
> not really doing anything weird by doing it this way. Users need to type
> the password in the same encoding it was added in. It's not usually a big
> deal because people set their own passwords...
>
> Have a nice day,
sulfinu@gmail.com wrote:
> Martijn,
>
> :) don't take it personally, I am just trying to get confirmation that I
> understood the problem well. After all, it's just that C has a very
> outdated notion of "char"s (and no notion of Unicode). I was naively under
> the impression that "char"s had evolved in present-day C.

This is not the language's fault in any way. We support plenty of encodings beyond UTF-8.

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"Greatness is a transitory experience. It is never consistent. It depends in
large part on the myth-making human imagination" (Irulan)
On Wed, Nov 28, 2007 at 05:54:05PM +0200, sulfinu@gmail.com wrote:
> Regarding the problem of "One True Encoding", the answer seems obvious to
> me: use only one encoding per database cluster, either UTF-8 or UTF-16 or
> another Unicode-aware scheme, whichever yields a statistically smaller
> database for the languages employed by the users in their data. This
> encoding should be a one-time choice! De facto, this is already happening
> now, because one cannot change collation rules after a cluster has been
> created.

Umm, each database in a cluster can have a different encoding, so there is no such thing as the "cluster's encoding". You can certainly argue that it should be a one-time choice, but I doubt you'll get people to remove the possibilities we have now. In fact, if anything we'd probably go the other way and allow you to select the collation at a per database/table/column level (SQL compliance requires this).

This has nothing to do with C, by the way. C has many features that allow you to work with different encodings. It just doesn't force you to use any particular one.

Have a nice day,
-- 
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
On 11/28/07, Martijn van Oosterhout <kleptog@svana.org> wrote:
> Umm, each database in a cluster can have a different encoding, so there
> is no such thing as the "cluster's encoding". You can certainly argue
> that it should be a one-time choice, but I doubt you'll get people to
> remove the possibilities we have now. In fact, if anything we'd probably
> go the other way and allow you to select the collation at a per
> database/table/column level (SQL compliance requires this).

To be clear, what sulfinu is really advocating is convergence on Unicode, period, which is the direction most international projects are moving when they can. PostgreSQL's problem is that it (and AFAICT POSIX) conflates encoding with locale, when the two are entirely separate concepts.

I'm not entirely sure how that's supposed to solve the client authentication issue, though. Demanding that clients present auth data in UTF-8 is no different from demanding they present it in the encoding it was entered in originally...
On Wednesday 28 November 2007, Trevor Talbot wrote:
> I'm not entirely sure how that's supposed to solve the client
> authentication issue, though. Demanding that clients present auth data
> in UTF-8 is no different from demanding they present it in the
> encoding it was entered in originally...

Oh no, there's a big difference: PREDICTABILITY! Why must I guess the encoding used by the administrator? What if he's Chinese? Instead, I would know the cluster's encoding, just as I know the server name and the TCP port. And the connection handshake would carry on without misunderstandings (read: wrong encoding).
On Wednesday 28 November 2007, Alvaro Herrera wrote:
> This is not the language's fault in any way. We support plenty of
> encodings beyond UTF-8.

Yes, you support (and worry about) encodings simply because of a C limitation dating from 1974, if I recall correctly... In Java, for example, a "char" is a very well defined datum, namely a Unicode point. In C it can be one char or another (or an error!) depending on what encoding was used. The only definition that stands up is that a "char" is a byte. Its interpretation is unsure and unsafe (see my original problem).

On Wednesday 28 November 2007, Martijn van Oosterhout wrote:
> Umm, each database in a cluster can have a different encoding, so there
> is no such thing as the "cluster's encoding".

I implied that a cluster should have a single encoding that covers the whole Unicode set. That would certainly satisfy everybody.

> You can certainly argue that it should be a one-time choice, but I doubt
> you'll get people to remove the possibilities we have now. In fact, if
> anything we'd probably go the other way and allow you to select the
> collation at a per database/table/column level (SQL compliance requires
> this).

The collation order is implemented in close relationship with the byte representation of strings, but conceptually it depends solely on the locale and has nothing to do with the encoding.

> This has nothing to do with C, by the way. C has many features that allow
> you to work with different encodings. It just doesn't force you to use
> any particular one.

Yes, my point exactly! C forces you to worry about encoding. I mean, if you're not an ASCII-only user ;)

Think of it this way: if I give you a Java String, you will know perfectly well what I meant; if I send you a C char*, you don't know what it is in the absence of extra information - you can even use it as a uint8*, as is actually done in md5.c.

I consider this matter closed from my point of view, and I have modified the JDBC driver according to my needs. Thank you all for the help.
On 11/28/07, sulfinu@gmail.com <sulfinu@gmail.com> wrote:
> Oh no, there's a big difference: PREDICTABILITY! Why must I guess the
> encoding used by the administrator? What if he's Chinese? Instead, I
> would know the cluster's encoding, just as I know the server name and
> the TCP port. And the connection handshake would carry on without
> misunderstandings (read: wrong encoding).

What if the user and the client program are Chinese too? Not everything is developed in an environment where UTF-8 support is easily available. Either way, it is a demand on the client, and not necessarily a simple one.
<sulfinu@gmail.com> writes:
> Yes, you support (and worry about) encodings simply because of a C
> limitation dating from 1974, if I recall correctly... In Java, for
> example, a "char" is a very well defined datum, namely a Unicode point.
> In C it can be one char or another (or an error!) depending on what
> encoding was used.

No, you're being confused by C's idiosyncratic terminology. "char" in C just means a 1-byte integral data type. If you want to store a Unicode code point, you use a different data type.

Incidentally, I'm not sure, but I don't think it's true that "char" in Java stores a Unicode code point. I thought Java used UTF-16 internally for strings, and strings are stored as arrays of chars. In that case a "char" in Java stores two bytes of a UTF-16 encoded string, which is pretty analogous to storing UTF-8 encoded strings in C, where each "char" stores one byte of a UTF-8 encoded string.

> Think of it this way: if I give you a Java String, you will know
> perfectly well what I meant; if I send you a C char*, you don't know what
> it is in the absence of extra information - you can even use it as a
> uint8*, as is actually done in md5.c.

That's because you're comparing apples to oranges. In C you don't even know if a char* is a string at all. It's a pointer to some bytes, and those could contain anything. And think about what happens in Java if you have to deal with UTF-8 encoded strings or Big5 encoded strings. They aren't "strings" in the Java object hierarchy, so when someone passes you a "MyString" you have the same problem of needing to know what encoding was used. Presumably you would put that in a member variable of the MyString class, but that just goes to how the data structures in C are laid out and what you're considering "extra information".

-- 
Gregory Stark EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
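Gregory's recollection is right: a Java char is a UTF-16 code unit, not a code point. A small demonstration makes this visible for a character outside the Basic Multilingual Plane (the string literal below is just an arbitrary supplementary character):

    // A Java char is a UTF-16 code unit, not a Unicode code point: characters
    // outside the Basic Multilingual Plane occupy two chars (a surrogate pair).
    public class CodeUnitDemo {
        public static void main(String[] args) {
            // U+1D50A MATHEMATICAL FRAKTUR CAPITAL G, as a surrogate pair
            String s = "\uD835\uDD0A";
            System.out.println(s.length());                       // 2 code units
            System.out.println(s.codePointCount(0, s.length()));  // 1 code point
            System.out.println(Integer.toHexString(s.codePointAt(0))); // 1d50a
        }
    }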
On 11/28/07, sulfinu@gmail.com <sulfinu@gmail.com> wrote:
> Yes, you support (and worry about) encodings simply because of a C
> limitation dating from 1974, if I recall correctly... In Java, for
> example, a "char" is a very well defined datum, namely a Unicode point.
> The only definition that stands up is that a "char" is a byte. Its
> interpretation is unsure and unsafe (see my original problem).

It's not really that simple. Java, for instance, does not actually support Unicode characters / code points at the base level; it merely deals in UTF-16 code units. (The critical difference is in surrogate pairs.) You're still stuck dealing with a specific encoding even in many modern languages.

PostgreSQL's encoding support is not just about languages, though; it's also about client convenience. It could simply choose a single encoding and parrot data to and from the client, but it also does on-the-fly conversion when a client requests it. It's a very useful feature, and many mature networked applications support similar things. An easy example is the World Wide Web itself.

> I implied that a cluster should have a single encoding that covers the
> whole Unicode set. That would certainly satisfy everybody.

Note that it might not. Unicode does not encode *every* character, and in some cases there is no round-trip mapping between it and other character sets. The result could be a loss of semantic data. I suspect it actually would satisfy everyone in PostgreSQL's case, but it's not something you can assume without checking.

> > This has nothing to do with C, by the way. C has many features that
> > allow you to work with different encodings. It just doesn't force you
> > to use any particular one.
> Yes, my point exactly! C forces you to worry about encoding. I mean, if
> you're not an ASCII-only user ;)

For a networked application, you're stuck worrying about the encoding regardless of language. UTF-8 is the most common Internet transport, for instance, but it is not the native internal encoding used by Java and most other Unicode processing platforms to date. That's fairly simple since it's still only a single character set, but if your application domain predates Unicode, you can't avoid dealing with the legacy encodings at some level anyway.

As I implied earlier, I do think it would be worthwhile for PostgreSQL to move toward handling this better, so I'm not saying it's a bad idea. It's just that it's a much more complex topic than it might seem at first glance. I'm glad you got something working for you.
On Wed, 28 Nov 2007, sulfinu@gmail.com wrote:
> I consider this matter closed from my point of view, and I have modified
> the JDBC driver according to my needs.

Could you explain in more detail what you've done to the JDBC driver, in case it is generally useful or other people have the same problem?

Kris Jurka
On Wed, 2007-11-28 at 09:38 -0800, Trevor Talbot wrote:
> PostgreSQL's problem is that it (and AFAICT POSIX) conflates encoding
> with locale, when the two are entirely separate concepts.

In what way does PostgreSQL conflate encoding with locale?

-Neil