On Sun, Nov 20, 2005 at 11:29:39AM -0500, Tom Lane wrote:
> Martijn van Oosterhout <kleptog@svana.org> writes:
> > libpq supports it just fine. You do a PQsendQuery() and then as many
> > PQgetResult()s as it takes to get back the results. This worked for a
> > while AFAIK.
>
> That only works if the caller is prepared to read each result serially,
> and not (say) a row at a time in parallel. There are a bunch of
> ease-of-use problems as well, such as knowing which resultset is which,
> coping with errors detected after the first resultset(s) are sent, etc.

Urk! I don't think anyone is suggesting that resultsets can be
interleaved. Apart from being extremely unlike the current model in
PostgreSQL, I can't think of a use for it that isn't served just as
well by sending them sequentially.
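
(For reference, the sequential pattern looks something like the sketch
below -- untested, error handling mostly elided, and it assumes an
already-connected PGconn *conn.)

```c
/* Untested sketch: consume multiple resultsets sequentially over one
 * PQsendQuery.  Assumes conn is an already-established PGconn. */
#include <stdio.h>
#include <libpq-fe.h>

static void drain_results(PGconn *conn)
{
    PGresult *res;

    if (!PQsendQuery(conn, "SELECT 1; SELECT 2, 3;"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return;
    }

    /* One PGresult per statement, in the order the server sends them;
     * PQgetResult returns NULL once the whole command string is done. */
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("resultset: %d row(s), %d column(s)\n",
                   PQntuples(res), PQnfields(res));
        else if (PQresultStatus(res) == PGRES_FATAL_ERROR)
            fprintf(stderr, "error: %s", PQresultErrorMessage(res));
        PQclear(res);
    }
}
```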
> A more realistic way of dealing with multiple resultsets is to deliver
> them as named cursor references and allow the client to FETCH
> reasonable-sized chunks. We can sort of handle this today, but it's
> notationally painful at both the stored-procedure and client ends.

But if you run a function, it can only return one row at a time.
Fiddling with cursors means the results would have to be queued up
somewhere, since the function is going to return its tuples in a fixed
order that is independent of when the client asks for them.

At the end of the day, the client can only accept data in the order the
server sends it. Having to request each row seems somewhat inefficient.

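
(The cursor approach would look roughly like the sketch below --
untested, and the tables t1 and t2 are made up for illustration.)

```sql
-- A function hands back named cursors; the client FETCHes in chunks.
CREATE FUNCTION multi_results() RETURNS SETOF refcursor AS $$
DECLARE
    c1 refcursor := 'c1';
    c2 refcursor := 'c2';
BEGIN
    OPEN c1 FOR SELECT * FROM t1;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM t2;
    RETURN NEXT c2;
END;
$$ LANGUAGE plpgsql;

-- Client side, inside a transaction (cursors die at COMMIT):
BEGIN;
SELECT multi_results();   -- returns the cursor names 'c1' and 'c2'
FETCH 100 FROM c1;
FETCH 100 FROM c2;
COMMIT;
```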
Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.