Re: libpq custom row processing - Mailing list psycopg

From Marko Kreen
Subject Re: libpq custom row processing
Date
Msg-id CACMqXCJK=858d49YwDZJ2naKwRMLRyBRTOmSr5-5+uee2gLnrQ@mail.gmail.com
In response to Re: libpq custom row processing  (Federico Di Gregorio <fog@dndg.it>)
List psycopg
On Tue, Aug 7, 2012 at 4:25 PM, Federico Di Gregorio <fog@dndg.it> wrote:
> On 07/08/12 15:14, Marko Kreen wrote:
>> My point is that the behavior is not something completely new,
>> that no-one has seen before.
>>
>> But it's different indeed from libpq default, so it's not something
>> psycopg can convert to using unconditionally.  But as optional feature
>> it should be quite useful.
>
> I agree. As an opt-in feature it would be quite useful for large datasets,
> but then, named cursors already cover that ground. Not that I am against
> it; I'd just like to see why:
>
> curs = conn.cursor(row_by_row=True)
>
> would be better than:
>
> curs = conn.cursor("row_by_row")
>
> Is row by row faster than fetching from a named cursor? Does it add less
> overhead? If that's the case then it would be nice to have it as a feature
> for optimizing queries returning large datasets.
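
For reference, the named-cursor baseline mentioned above looks roughly like
this in psycopg2; a minimal sketch, where the DSN and table name are
placeholders:

    import psycopg2

    conn = psycopg2.connect("dbname=test")      # placeholder DSN

    # Server-side (named) cursor: rows stay on the server and are pulled
    # in batches of `itersize` rows; every batch costs one FETCH round trip.
    cur = conn.cursor(name="big_read")
    cur.itersize = 2000                         # rows buffered per round trip
    cur.execute("SELECT * FROM big_table")      # placeholder table

    for row in cur:                             # a FETCH is issued every 2000 rows
        print(row)

    cur.close()
    conn.close()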

Row-by-row mode avoids network roundtrips while buffering only a minimal
amount of data.  Roundtrips may not be noticeable within a single colo,
but they are definitely troublesome when working between different colos.

It also saves CPU and memory on both server and client (less cache usage,
fewer context switches), but that gets into micro-optimization territory,
so it is harder to measure.
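
For the memory side, with psycopg2's default client-side cursor the whole
result set is downloaded and buffered before the first row is returned; a
minimal sketch, again with placeholder DSN and table name:

    import psycopg2

    conn = psycopg2.connect("dbname=test")      # placeholder DSN
    cur = conn.cursor()                         # regular client-side cursor

    # execute() retrieves the complete result set into client memory
    # before returning, so a large table costs a lot of RAM up front ...
    cur.execute("SELECT * FROM big_table")      # placeholder table

    # ... and fetchone()/fetchmany() only walk rows already buffered.
    first = cur.fetchone()

    cur.close()
    conn.close()

A row-by-row mode would keep the single query round trip but hand each row
to the application as soon as libpq has parsed it, so the client never has
to hold the full result set in memory.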

--
marko
