Re: [HACKERS] Re: [INTERFACES] retrieving varchar size - Mailing list pgsql-interfaces
From | Tom Lane |
---|---|
Subject | Re: [HACKERS] Re: [INTERFACES] retrieving varchar size |
Date | |
Msg-id | 1360.893632408@sss.pgh.pa.us |
In response to | Re: [HACKERS] Re: [INTERFACES] retrieving varchar size (Bruce Momjian <maillist@candle.pha.pa.us>) |
Responses | Re: [HACKERS] Re: [INTERFACES] retrieving varchar size; Re: [HACKERS] Re: [INTERFACES] retrieving varchar size |
List | pgsql-interfaces |
Bruce Momjian <maillist@candle.pha.pa.us> writes:
> My idea is to make a PQexecv() just like PQexec, except it returns an
> array of results, with the end of the array terminated with a NULL,
> [ as opposed to my idea of returning PGresults one at a time ]

Hmm. I think the one-at-a-time approach is probably better, mainly because it doesn't require libpq to have generated all the PGresult objects before it can return the first one.

Here is an example in which the array approach doesn't work very well:

    QUERY: copy stdin to relation ; select * from relation

What we want is for the application to receive a PGRES_COPY_IN result, perform the data transfer, call PQendcopy, and then receive a PGresult for the select. I don't see any way to make this work if the library has to give back an array of results right off the bat.

With the other method, PQendcopy will read the select command's output and stuff it into the (hidden) result queue. Then when the application calls PQnextResult, presto, there it is. Correct logic for an application that submits multi-command query strings would be something like

    result = PQexec(conn, query);
    while (result) {
        switch (PQresultStatus(result)) {
            ...
            case PGRES_COPY_IN:
                // ... copy data here ...
                if (PQendcopy(conn))
                    reportError();
                break;
            ...
        }
        PQclear(result);
        result = PQnextResult(conn);
    }

Another thought: we might consider making PQexec return as soon as it's received the first query result, thereby allowing the frontend to overlap its processing of this result with the backend's processing of the rest of the query string. Then, PQnextResult would actually read a new result (or the "I'm done" message), rather than just return a result that had already been stored. I wasn't originally thinking of implementing it that way, but it seems like a mighty attractive idea. No way to do it if we return results as an array.

>> What I'd really like to see is PQendcopy returning a PGresult that indicates
>> success or failure of the copy, and then additional results could be
>> queued up behind that for retrieval with PQnextResult.

> Not sure on this one. If we change the API, we have to have a good
> reason to do it. API additions are OK.

Well, we can settle for having PQendcopy return 0 or 1 as it does now. It's not quite as clean as having it return a real PGresult, but it's probably not worth breaking existing apps just to improve the consistency of the API. It'd still be possible to queue up subsequent commands' results (if any) in the result queue.

>> 2. Copy In and Copy Out data ought to be part of the protocol, that
>> is, every line of copy in/out data ought to be prefixed with a message
>> type code. Fixing this might be more trouble than it's worth, however,
>> if there are any applications that don't go through PQgetline/PQputline.

> Again, if we clearly document the change, we are far enough from 6.4
> that perl and other people will handle the change by the time 6.4 is
> released. Changes that affect user apps are more difficult.

I have mixed feelings about this particular item. It would make the protocol more robust, but it's not clear that the gain is worth the risk of breaking any existing apps. I'm willing to drop it if no one else is excited about it.

            regards, tom lane
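For concreteness, here is a fuller, self-contained version of the loop sketched in the message above, written against the interface it proposes. This is a sketch only: PQnextResult() is the name floated in this thread, not a function guaranteed to exist in any released libpq, and the connection string, the table name "relation", and the sample COPY data are placeholders. The other calls (PQexec, PQresultStatus, PQputline, PQendcopy, PQclear, etc.) are ordinary libpq.

```c
/*
 * Sketch of the proposed multi-result flow: PQexec returns the first
 * result of a multi-command string, and PQnextResult (proposed in this
 * thread, hypothetical here) hands back the rest one at a time.
 */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    PGresult *result;
    int       i;

    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    /* Two commands in one query string, as in the example above. */
    result = PQexec(conn, "COPY relation FROM stdin; SELECT * FROM relation");

    while (result != NULL)
    {
        switch (PQresultStatus(result))
        {
            case PGRES_COPY_IN:
                /* Feed the COPY, then let PQendcopy queue up the
                 * SELECT's result behind it in the hidden result queue. */
                PQputline(conn, "1\tone\n");   /* placeholder data */
                PQputline(conn, "\\.\n");      /* end-of-copy marker */
                if (PQendcopy(conn))
                    fprintf(stderr, "copy failed: %s", PQerrorMessage(conn));
                break;

            case PGRES_TUPLES_OK:
                /* The SELECT's result, picked up on a later iteration. */
                for (i = 0; i < PQntuples(result); i++)
                    printf("%s\n", PQgetvalue(result, i, 0));
                break;

            case PGRES_FATAL_ERROR:
                fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
                break;

            default:
                break;
        }
        PQclear(result);
        result = PQnextResult(conn);   /* proposed call; NULL when done */
    }

    PQfinish(conn);
    return 0;
}
```

The structure illustrates the argument made above: because results are handed back one at a time, PQendcopy can quietly queue the SELECT's output, and the same loop retrieves it on the next pass.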