Thread: Re: libpq, PQdescribePrepared -> PQftype, PQfmod, no PQnullable
Florian Pflug wrote:

> Coming up with a reasonable algorithm isn't *that* hard.

Agreed. Our shop has used a home-grown framework for over a decade
where we parse queries using ANTLR ( http://www.antlr.org/ ) and we
tracked this through all expressions. There really weren't that many
situations where we had to punt.

> D) All others are nullable

I think you meant "All others are not nullable."

> As I see it, the hardest part of this feature is getting the
> information to the client.

Ay, there's the rub.

> I don't think the reply to a DESCRIBE message is currently
> extensible, so we'd probably need to add a new version of the
> message.

Or a new protocol version. I've been thinking that the next *big*
project I look at here might be a new version of the protocol, since I
see mentions of protocol limitations preventing things people want with
some regularity. We should be keeping a list, and this should be on it.

> That might be a rather tough sell, at least as long as there
> isn't a clear use-case for this. Which, unfortunately, nobody has
> provided so far.

Yeah. It would be nice to see at least one use case. The only comment
I recall is a vague suggestion that people might want to select data
from a table and infer table attributes from the result set metadata.
That seems marginal.

> the question is simply whether one values the feature enough to put
> in the word.

... or fund the work. There are people for hire in the community.

> I certainly won't, because I don't really see the benefit.

Yeah, it wouldn't be hard to produce a long list of things which would
take about the same effort and which seem more beneficial to me. It's
a matter of whether this is causing someone enough bother to want to
devote the resources to changing it. I really think it's that simple.

-Kevin

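(For context, a minimal sketch of what the Describe round trip already
exposes through libpq; the statement name and query below are
placeholders and error handling is omitted. Type OID and type modifier
come back per column, but nothing says whether a column can be NULL.)

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Describe a prepared statement and print the per-column metadata
     * that the current protocol makes available to the client. */
    static void describe_columns(PGconn *conn)
    {
        PQclear(PQprepare(conn, "q", "SELECT oid, relname FROM pg_class",
                          0, NULL));

        PGresult *res = PQdescribePrepared(conn, "q");
        for (int col = 0; col < PQnfields(res); col++)
        {
            printf("%-16s type oid %u, typmod %d\n",
                   PQfname(res, col),   /* column name          */
                   PQftype(res, col),   /* data type OID        */
                   PQfmod(res, col));   /* type modifier, or -1 */
            /* ...but there is no PQnullable(res, col) to ask here. */
        }
        PQclear(res);
    }
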
On Oct9, 2011, at 14:20 , Kevin Grittner wrote:
> Florian Pflug wrote:
>
>> Coming up with a reasonable algorithm isn't *that* hard.
>
> Agreed. Our shop has used a home-grown framework for over a decade
> where we parse queries using ANTLR ( http://www.antlr.org/ ) and we
> tracked this through all expressions. There really weren't that many
> situations where we had to punt.

Sounds cool. What was your use-case for doing that?

>> D) All others are nullable
>
> I think you meant "All others are not nullable."

Oops, yeah, right, that was supposed to read *non*-nullable.

>> That might be a rather tough sell, at least as long as there
>> isn't a clear use-case for this. Which, unfortunately, nobody has
>> provided so far.
>
> Yeah. It would be nice to see at least one use case. The only
> comment I recall is a vague suggestion that people might want to
> select data from a table and infer table attributes from the result
> set metadata. That seems marginal.

Well, there is one other, namely SQL standards compliance. It does
mandate that "CREATE TABLE ... AS SELECT" creates NOT NULL constraints
on non-nullable columns, I think (I didn't re-check, though). I'm not
sure I see the value in that either, but, hey, standards compliance
ought to be a value in itself, right?

>> the question is simply whether one values the feature enough to put
>> in the word.
>
> ... or fund the work. There are people for hire in the community.

And that was, of course, supposed to read "put in the *work*". Alas,
just putting in the *word* is probably not going to be enough ;-)

best regards,
Florian Pflug

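(A rough client-side approximation of that deduction, covering only the
simplest case and with error handling omitted: PQftable() and
PQftablecol() identify the originating table column, and a pg_attribute
lookup recovers its NOT NULL flag. Expressions, literals, and columns
from the nullable side of an outer join still have to be assumed
nullable, which is exactly where a real deduction pass would be needed.)

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Best-effort nullability check for one result column: returns 1
     * only when the column is a direct reference to a table column that
     * carries a NOT NULL constraint; everything else is conservatively
     * treated as nullable. */
    static int column_is_not_null(PGconn *conn, const PGresult *res, int col)
    {
        Oid relid  = PQftable(res, col);     /* source table, or InvalidOid */
        int attnum = PQftablecol(res, col);  /* attribute number, or 0      */

        if (relid == InvalidOid || attnum <= 0)
            return 0;                        /* expression, literal, ...    */

        char relid_s[16], attnum_s[16];
        snprintf(relid_s, sizeof relid_s, "%u", relid);
        snprintf(attnum_s, sizeof attnum_s, "%d", attnum);
        const char *const params[2] = { relid_s, attnum_s };

        PGresult *att = PQexecParams(conn,
                "SELECT attnotnull FROM pg_catalog.pg_attribute"
                " WHERE attrelid = $1 AND attnum = $2",
                2, NULL, params, NULL, NULL, 0);
        int notnull = (PQntuples(att) == 1 && *PQgetvalue(att, 0, 0) == 't');
        PQclear(att);
        return notnull;
    }
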
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes: > Florian Pflug wrote: >> I don't think the reply to a DESCRIBE message is currently >> extensible, so we'd probably need to add a new version of the >> message. > Or a new protocol version. Exactly --- this *would* require a protocol version bump. >> That might be a rather tough sell, as least as long as there's >> isn't a clear use-case for this. Which, unfortunately, nobody has >> provided so far. > Yeah. It would be nice to see at least one use case. The only > comment I recall is a vague suggestion that that people might want to > select data from a table and infer table attributes from the result > set metadata. That seems marginal. Yes. We need a pretty convincing use-case to seriously consider such a thing. > Yeah, it wouldn't be hard to produce a long list of things which > would take about the same effort which seem more beneficial to me. > It's a matter of whether this is causing someone enough bother to > want to devote the resources to changing it. The problem with something like a protocol bump is that the coding required to make it happen (in the backend and libpq, that is) is only a small part of the total distributed cost. So even if someone stepped up with a patch, it'd likely get rejected outright, unless there's significant community buy-in to the need for it. I agree with Kevin's comment that the right thing to be doing now would be to be keeping a list of things we might want to change the protocol for. It's just about certain that no single element on that list will be sufficient reason to change, but once there are enough of them maybe we'll have critical mass to do them all together. (Actually, isn't there such a page on the wiki already? Or a subsection of the TODO list?) regards, tom lane
Florian Pflug <fgp@phlo.org> writes:
> On Oct9, 2011, at 14:20 , Kevin Grittner wrote:
>> Yeah. It would be nice to see at least one use case. The only
>> comment I recall is a vague suggestion that people might want to
>> select data from a table and infer table attributes from the result
>> set metadata. That seems marginal.

> Well, there is one other, namely SQL standards compliance. It does
> mandate that "CREATE TABLE ... AS SELECT" creates NOT NULL constraints
> on non-nullable columns, I think (I didn't re-check, though). I'm not
> sure I see the value in that either, but, hey, standards compliance
> ought to be a value in itself, right?

Um ... but that case has nothing to do with protocol changes.

			regards, tom lane

On Oct9, 2011, at 17:56 , Tom Lane wrote:
> Florian Pflug <fgp@phlo.org> writes:
>> On Oct9, 2011, at 14:20 , Kevin Grittner wrote:
>>> Yeah. It would be nice to see at least one use case. The only
>>> comment I recall is a vague suggestion that people might want to
>>> select data from a table and infer table attributes from the result
>>> set metadata. That seems marginal.
>
>> Well, there is one other, namely SQL standards compliance. It does
>> mandate that "CREATE TABLE ... AS SELECT" creates NOT NULL constraints
>> on non-nullable columns, I think (I didn't re-check, though). I'm not
>> sure I see the value in that either, but, hey, standards compliance
>> ought to be a value in itself, right?
>
> Um ... but that case has nothing to do with protocol changes.

No, that was meant as a use-case for the deduction of nullability, not
for its transmission to the client. While those are obviously two
distinct things, I figured we'd probably tackle them at the same time
(if ever). It'd be strange to infer NOT NULL constraints for CREATE
TABLE ... AS SELECT, yet provide no way for clients to obtain that
information for simple SELECT statements.

And you're right, the Wiki already contains a wish list for the next
protocol version, and that wish list includes an entry for extending
Describe to report the nullability of columns. The entry, BTW, was
added by one Tom Lane ;-)

The wish list can be found on http://wiki.postgresql.org/wiki/Todo
under "Wire Protocol Changes". The referenced thread on -hackers
includes a rather interesting use-case. The idea presented there is to
infer the type of a statement's result columns at application
compile-time, and inject the result into the compiler's type checking
and deduction algorithm. Since most statically typed languages don't
have a general concept of "undefined" (i.e., there's no special
"undefined" value included in the domain of every type), there's a lot
of value in knowing that a column cannot be null. It allows you to map
the column directly to a string, int or whatever on the client side,
instead of going through some intermediate type which adds "undefined"
to the list of possible values. (That "intermediate type" is the
"Maybe" monad in Haskell; in C++ it'd be boost::optional or something
similar.)

best regards,
Florian Pflug

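(To make that concrete, a minimal sketch of what a binding layer has to
do today; the connection parameters, table, and column names are made
up and error handling is omitted. Every fetch goes through
PQgetisnull(), even for a primary-key column that can never be NULL,
because the result metadata gives the client no way to know that.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("");  /* connection settings assumed */
        PGresult *res  = PQexec(conn, "SELECT id, nickname FROM users");

        for (int row = 0; row < PQntuples(res); row++)
        {
            /* "id" is the primary key and can never be NULL, but nothing
             * in the metadata says so, so the check cannot be dropped. */
            long id = PQgetisnull(res, row, 0)
                          ? -1
                          : atol(PQgetvalue(res, row, 0));

            /* With nullability in the RowDescription data, a generated
             * binding could map "id" straight to a plain long and keep
             * the Maybe/optional wrapper for columns like this one. */
            const char *nick = PQgetisnull(res, row, 1)
                                   ? "(null)"
                                   : PQgetvalue(res, row, 1);

            printf("%ld %s\n", id, nick);
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }
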
On Sun, Oct 9, 2011 at 17:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> "Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
>> Yeah, it wouldn't be hard to produce a long list of things which
>> would take about the same effort and which seem more beneficial to me.
>> It's a matter of whether this is causing someone enough bother to
>> want to devote the resources to changing it.
>
> The problem with something like a protocol bump is that the coding
> required to make it happen (in the backend and libpq, that is) is only a
> small part of the total distributed cost. So even if someone stepped up
> with a patch, it'd likely get rejected outright, unless there's
> significant community buy-in to the need for it.
>
> I agree with Kevin's comment that the right thing to be doing now would
> be to be keeping a list of things we might want to change the protocol
> for. It's just about certain that no single element on that list will
> be sufficient reason to change, but once there are enough of them maybe
> we'll have critical mass to do them all together.
>
> (Actually, isn't there such a page on the wiki already? Or a subsection
> of the TODO list?)

There is. Currently section 27.3 (it seems not to have an anchor to
link to, and the number might change when other things change, but
that's what it's called now), under the heading "Wire Protocol
Changes". And I think this is on there already?

-- 
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/

On sön, 2011-10-09 at 11:51 -0400, Tom Lane wrote:
> The problem with something like a protocol bump is that the coding
> required to make it happen (in the backend and libpq, that is) is only
> a small part of the total distributed cost.

Why do we have major and minor protocol version numbers, which are
supposed to allow incremental addition of features to the protocol?
What other costs do you have in mind?

Peter Eisentraut <peter_e@gmx.net> writes:
> On sön, 2011-10-09 at 11:51 -0400, Tom Lane wrote:
>> The problem with something like a protocol bump is that the coding
>> required to make it happen (in the backend and libpq, that is) is only
>> a small part of the total distributed cost.

> Why do we have major and minor protocol version numbers, which are
> supposed to allow incremental addition of features to the protocol?

Well, that's a good question. I seem to recall that the last time it
was discussed, questions were raised about whether a minor-number
version bump would really work as desired. In particular, if the client
connects asking for 3.1 and the server doesn't know anything later than
3.0, you end up having to do another connection cycle, which is rather
inefficient and has got unpleasant failure cases too. This could be
avoided if there were a way to have the server allow the connection but
only at 3.0 level, but (1) there is no way to report that in 3.0
protocol, and (2) requiring clients to support 3.0 as well as 3.1 could
be burdensome.

Basically, it's uncharted territory, because we've never actually done
it before. It wouldn't be a bad idea to put "make sure upgrading to a
4.1 protocol version will actually work smoothly" into our list of
goals for protocol 4.0 ...

			regards, tom lane

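(For reference, the requested version travels in the startup packet as
a single 32-bit integer, major version in the high 16 bits and minor in
the low 16. The helper below is made up; libpq builds the value
internally.)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper mirroring how the startup packet encodes the
     * requested protocol version. */
    static uint32_t proto_version(uint16_t major, uint16_t minor)
    {
        return ((uint32_t) major << 16) | minor;
    }

    int main(void)
    {
        printf("3.0 -> %u\n", (unsigned) proto_version(3, 0));  /* 196608 */
        printf("3.1 -> %u\n", (unsigned) proto_version(3, 1));  /* 196609 */

        /* A server that only understands 3.0 has no 3.0-compatible way
         * to answer "accepted, but only at 3.0", so all it can do with a
         * 3.1 request is reject it -- hence the extra connection cycle
         * described above. */
        return 0;
    }
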