On Tue, Aug 7, 2012 at 2:46 PM, Federico Di Gregorio <fog@dndg.it> wrote:
> On 07/08/12 13:41, Marko Kreen wrote:
>> Same thing happens when you fetch the result in transaction
>> and later query fails with error thus invalidating earlier
>> processing. So nothing new.
>>
>> Or how about FETCH 100 from cursor in transaction,
>> and first few succeed and later one fails.
>>
>> It's up to user code to handle such cases correctly
>> and "correct" here depends on actual business logic
>> of the transaction.
>>
>> The warning is there because there is now a new
>> failure scenario, not because the failure
>> needs any kind of special handling.
>
> I don't agree. Simple code like:
>
> curs.execute("SELECT * FROM xs")
> for x in curs.fetchall():
>     # do something like writing to the file system with x
>
> will have a different effect if row-by-row processing is enabled. Before,
> nothing would be changed on the file system in case of error: the
> fetchall() is "atomic". Now you write to the file system until the
> row that causes the error is reached.
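To make the difference concrete, here is a minimal, database-free sketch: a plain generator (the name stream_rows is made up for illustration) stands in for a cursor that streams rows and hits a server error partway through.

```python
# A generator standing in for a cursor that streams rows and then fails.
def stream_rows():
    yield 1
    yield 2
    raise RuntimeError("server error while producing row 3")

# fetchall()-style: materialize everything first, then act on the rows.
written = []
try:
    rows = list(stream_rows())   # the error surfaces here, before any write
    for x in rows:
        written.append(x)
except RuntimeError:
    pass
assert written == []             # nothing was "written to the file system"

# Row-by-row style: act on each row as it arrives.
written = []
try:
    for x in stream_rows():
        written.append(x)        # side effects happen before the error hits
except RuntimeError:
    pass
assert written == [1, 2]         # partial side effects remain
```

The fetchall()-style loop leaves no trace on failure; the row-by-row loop leaves whatever was done for the rows that arrived before the error.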
When in a transaction, the best analogy is reading
from a cursor with FETCH.
When outside a transaction, in autocommit mode, the best analogy is
COPY (SELECT ..), which you then process line by line.
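The FETCH analogy can be sketched the same way, without a database: a generator (fetch_batches is a made-up name) plays the role of repeated "FETCH 100 FROM cur" calls, where the first batches succeed and a later one fails mid-transaction.

```python
# Batches standing in for repeated "FETCH 100 FROM cur" calls.
def fetch_batches():
    yield [1, 2, 3]              # first FETCH succeeds
    yield [4, 5]                 # second FETCH succeeds
    raise RuntimeError("error during a later FETCH")

processed = []
try:
    for batch in fetch_batches():
        processed.extend(batch)  # earlier batches are already processed...
except RuntimeError:
    pass                         # ...by the time a later FETCH fails
assert processed == [1, 2, 3, 4, 5]
```

Exactly as with row-by-row mode, the rows from the successful fetches have already been handled when the error arrives; "correct" recovery depends on the transaction's business logic.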
My point is that the behavior is not something completely new
that no one has seen before.
But it is indeed different from the libpq default, so it is not something
psycopg can switch to unconditionally. As an optional feature, though,
it should be quite useful.
Note: we are talking about the libpq world here; Npgsql uses
such a mode by default, and maybe pgjdbc does too.
--
marko