Re: [HACKERS] libpq - Mailing list pgsql-hackers
From | Tom Lane |
---|---|
Subject | Re: [HACKERS] libpq |
Date | |
Msg-id | 18587.950249454@sss.pgh.pa.us |
In response to | Re: [HACKERS] libpq (Chris Bitmead <chrisb@nimrod.itg.telstra.com.au>) |
List | pgsql-hackers |
Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> Tom Lane wrote:
>> OK, but how does this interact with asynchronous retrieval?  It
>> should be possible to run it in a nonblocking (select-waiting) mode.

> I didn't know that was a requirement.

Well, there may not be anyone holding a gun to your head about it...
but there have been a number of people sweating to make the existing
facilities of libpq usable in a non-blocking fashion.  Seems to me
that that sort of app would be particularly likely to want to make use
of a streaming API --- so if you don't think about it, there is going
to be someone else coming along to clean up after you pretty soon.
Better to get it right the first time.

> to wait for, so the only way is to have PQfileDescriptor or something,
> but I don't think that affects these decisions does it? If they want
> async, they are given the fd and select. When ready they call
> nexttuple.

Not really.  The app can and does wait for select() to show read ready
on libpq's input socket --- but that only indicates that there is a
TCP packet's worth of data available, *not* that a whole tuple is
available.  libpq must provide the ability to consume data from the
kernel (to clear the select-read-ready condition) and then either hand
back a completed tuple (or several) or say "sorry, no complete data
yet".  I'd suggest understanding the existing facilities more
carefully before you set out to improve on them.

>> to a variant of PQexec: the limit says "return no more than N tuples
>> per PQresult".

> As in changing the interface to PQexec?

I did say "variant", no?  We don't get to break existing callers of
PQexec.

> I can't see the benefit of specifically asking for N tuples. Presumably
> behind the scenes it will read from the socket in a respectably
> large chunk (8k for example). Beyond that I can't see any more reason
> for customisation.

Well, that's true from one point of view, but I think it's just
libpq's point of view.  The application programmer is fairly likely to
have specific knowledge of the size of tuple he's fetching, and maybe
even to have a global perspective that lets him decide he doesn't
really *want* to deal with retrieved tuples on a packet-by-packet
basis.  Maybe waiting till he's got 100K of data is just right for his
app.  But I can also believe that the app programmer doesn't want to
commit to a particular tuple size any more than libpq does.  Do you
have a better proposal for an API that doesn't commit any decisions
about how many tuples to fetch at once?
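For concreteness, here is a minimal sketch of the consume-and-poll
pattern described above, using the async calls libpq already provides
(PQsendQuery, PQsocket, PQconsumeInput, PQisBusy, PQgetResult).  Error
handling is mostly elided; treat it as illustrative, not definitive:

```c
/* Minimal sketch: nonblocking result retrieval with libpq's
 * existing async facilities.  Error handling is elided. */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

static void
drain_query(PGconn *conn, const char *query)
{
    PGresult   *res;
    int         sock = PQsocket(conn);

    if (!PQsendQuery(conn, query))
        return;                 /* failed to dispatch the query */

    for (;;)
    {
        /* Wait for read-ready, then consume whatever the kernel has.
         * Consuming clears the select-read-ready condition but does
         * NOT guarantee that a complete tuple has arrived. */
        while (PQisBusy(conn))
        {
            fd_set      rfds;

            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);
            select(sock + 1, &rfds, NULL, NULL, NULL);

            if (!PQconsumeInput(conn))
                return;         /* connection trouble */
        }

        /* PQisBusy says a whole result is ready; this won't block. */
        res = PQgetResult(conn);
        if (res == NULL)
            break;              /* query fully processed */
        printf("%d tuples in this result\n", PQntuples(res));
        PQclear(res);
    }
}
```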
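And as for the N-tuples-per-PQresult variant itself, one possible
shape --- PQsendQueryLimited is a made-up name, nothing like it exists
in libpq today --- might compose with PQgetResult like so:

```c
/* HYPOTHETICAL: PQsendQueryLimited does not exist in libpq; it only
 * illustrates the "no more than N tuples per PQresult" idea under
 * discussion. */
extern int PQsendQueryLimited(PGconn *conn, const char *query,
                              int maxTuples);

void
example(PGconn *conn)
{
    PGresult   *res;

    if (!PQsendQueryLimited(conn, "SELECT * FROM bigtable", 100))
        return;

    /* PQgetResult would hand back successive PGresults of at most
     * 100 tuples each, and NULL once the query is complete. */
    while ((res = PQgetResult(conn)) != NULL)
    {
        /* ... process up to 100 tuples ... */
        PQclear(res);
    }
}
```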
>> not clear that it's worth creating cross-version compatibility problems
>> to fix it. I'm inclined to leave it alone until such time as we
>> undertake a really massive protocol change (moving to CORBA, say).

> I'll look at that situation further later. Is there a policy on
> protocol compatibility? If so, one way or both ways?

The general policy so far has been that backends should be able to
talk to any vintage of frontend, but frontend clients need only be
able to talk to backends of same or later version.  (The idea is to be
able to upgrade your server without breaking existing clients, and
then you can go around and update client apps at your convenience.)
The last time we actually changed the protocol was in 6.4 (at my
instigation BTW) --- and while we didn't get a tidal wave of "hey my
new psql won't talk to an old server" complaints, we got a pretty fair
number of 'em.  So I'm very hesitant to break either forwards or
backwards compatibility in new releases.  I certainly don't want to do
it just for code beautification; we need a reason that is compelling
to the end users who will be inconvenienced.

			regards, tom lane