Async processing of rows - Mailing list pgsql-interfaces

From: Nat!
Subject: Async processing of rows
Date:
Msg-id: 228B07DD-DE01-40CA-9385-D5D94DFAFF4A@mulle-kybernetik.com
List: pgsql-interfaces
Hi

I will be writing an EOF (http://en.wikipedia.org/wiki/Enterprise_Objects_Framework) adaptor for Postgres. Due to the way these are structured, I want to process the result data row by row and not in one big tuple array. I looked into libpq and it seems that this is possible, albeit not without adding something to the API.

PQgetResult seems to loop as long as PGASYNC_BUSY is set, and that  
appears to be set as long as there are rows being sent from the  
server. Correct?
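
For reference, this is the non-blocking pattern I currently see, a minimal sketch (connection string and table name are made up); the point being that PQgetResult still hands back the complete tuple array once the server is done, which is exactly what I'd like to avoid:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* connection string is only an example */
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* queue the query without waiting for the answer */
        PQsendQuery(conn, "SELECT * FROM big_table");

        /* read from the socket until a complete result has arrived;
         * real code would select()/poll() on PQsocket(conn) first */
        while (PQisBusy(conn))
        {
            if (!PQconsumeInput(conn))
            {
                fprintf(stderr, "%s", PQerrorMessage(conn));
                break;
            }
        }

        /* only now does PQgetResult return -- with all rows at once */
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL)
        {
            printf("%d rows\n", PQntuples(res));
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }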

So what I think I need to do is write a function PQgetNextResult that only blocks if there is not enough data available for reading in one row.
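
Roughly, the calling pattern I have in mind would look like this (PQgetNextResult is the proposed addition, not something libpq has today, and processRow() just stands in for handing the row to the EOF layer):

    /* hypothetical: one PGresult per row, blocking only while
     * the current row has not yet been fully received */
    PQsendQuery(conn, "SELECT * FROM big_table");

    PGresult *row;
    while ((row = PQgetNextResult(conn)) != NULL)
    {
        processRow(row);
        PQclear(row);
    }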

A cursory glance at pqParseInput3 shows that I can't call it with incomplete input, as data is discarded even if the parse is incomplete. In particular, this piece of code discards 'id' if msgLength cannot be completely read, which makes me wary:
    conn->inCursor = conn->inStart;
    if (pqGetc(&id, conn))
        return;
    if (pqGetInt(&msgLength, 4, conn))
    {
        /* (nat) expected to see: pqUngetc( id, conn); */
        return;
    }

So am I missing something, or is this basically correct?

Ciao
   Nat!
----------------------------------------------
I'd like to fly
But my wings have been so denied   -- Cantrell


