Re: savepoint - Mailing list pgsql-odbc

From Marko Ristola
Subject Re: savepoint
Msg-id 431DC997.7080307@kolumbus.fi
In response to Re: savepoint  ("Hiroshi Saito" <saito@inetrt.skcapi.co.jp>)
List pgsql-odbc
Hiroshi Saito wrote:

>
>The above works with psqlodbc. However, Updating cursor is not considered carefully.
>
>Regards,
>Hiroshi Saito
It's good that it works in your scenario.
My own feeling was that the exact case I described in my example
wasn't a very good one.

I found the cursor problem this summer, while playing with
savepoints. That was before the libpq code was put into CVS.
I tried hard to understand the whole problem, not just some cases
of it, and to make a robust implementation.

The problem case was, and is, as follows:
1. An ODBC statement handle holds an internal cursor.
2. That cursor has been rolled back by the backend via ROLLBACK TO
   SAVEPOINT, and the transaction is in the READY state.
   Or the transaction is in the FAILED state, and the backend has
   already removed the cursor, or will remove it during ROLLBACK TO
   SAVEPOINT.
3. Statement closing tries to close the nonexistent cursor.
   This puts the current transaction into the FAILED state, if it is
   not already in that state, thus possibly making things worse.
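To make the failure mode concrete, here is a small Python toy model of the transaction states involved. This is an illustration only, not psqlodbc internals; the class and method names are my own invention:

```python
# Toy model of the backend transaction states (READY / FAILED).
# All names here are invented for illustration, not psqlodbc code.

class Backend:
    def __init__(self):
        self.state = "READY"          # transaction state
        self.cursors = set()          # currently open cursors
        self.savepoint_cursors = {}   # cursors recorded at each savepoint

    def declare(self, cur):
        self.cursors.add(cur)

    def savepoint(self, name):
        self.savepoint_cursors[name] = set(self.cursors)

    def rollback_to(self, name):
        # ROLLBACK TO SAVEPOINT recovers a FAILED transaction, but also
        # removes cursors opened after the savepoint was set.
        self.cursors = set(self.savepoint_cursors[name])
        self.state = "READY"

    def close(self, cur):
        # CLOSE on a nonexistent cursor is an error: the whole
        # transaction goes into the FAILED state.
        if cur not in self.cursors:
            self.state = "FAILED"
        else:
            self.cursors.discard(cur)

be = Backend()
be.savepoint("s1")
be.declare("odbc_cursor")   # the statement handle's internal cursor
be.rollback_to("s1")        # backend removes the cursor, state is READY
be.close("odbc_cursor")     # driver closes the now-nonexistent cursor
print(be.state)             # prints FAILED: the close made things worse
```

The point of the model is step 3: even after ROLLBACK TO SAVEPOINT has restored a usable READY state, closing the statement handle's vanished cursor pushes the transaction back into FAILED.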

This occurred to me when I was testing savepoints with
UseDeclareFetch=1. There the ODBC driver did try to roll back the
nonexistent cursor; maybe the transaction was already in the FAILED
state at that point.

As you may have read in my previous email, UseDeclareFetch=1 doesn't
work with the current libpq ODBC driver, so I can't test its
behaviour now.

Here is one scenario with UseDeclareFetch=1 and Fetch=2, but I don't
know whether it works, because I can't test it. The backend fetches
at most 2 rows per fetch.

NEW_STMT_HANDLE=SELECT * from test1 LIMIT 20000000;
foreach row in CC ; do
    SAVEPOINT S1;
    INSERT each fetched row INTO TABLE test2;
    RELEASE S1 (or ROLLBACK TO S1 on failure).
done
DROP_STMT

With the above pseudocode, psqlodbc FETCHes twenty million rows
from the backend, two rows per internal psqlodbc fetch. Low
memory consumption can be guaranteed on the client side, even on
32-bit computers.
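The loop above can be sketched in Python. This is a sketch under stated assumptions: sqlite3 from the standard library stands in for a PostgreSQL connection (it supports SAVEPOINT/RELEASE/ROLLBACK TO with the same syntax), the table contents are invented, and fetchmany(2) mimics Fetch=2:

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; tables and data are invented.
con = sqlite3.connect(":memory:")
con.isolation_level = None               # manage transactions explicitly
con.execute("CREATE TABLE test1 (id INTEGER, val TEXT)")
con.execute("CREATE TABLE test2 (id INTEGER PRIMARY KEY, val TEXT)")
# ids repeat, so half of the inserts below will violate the primary key
con.executemany("INSERT INTO test1 VALUES (?, ?)",
                [(i % 5, "row%d" % i) for i in range(10)])

con.execute("BEGIN")
cur = con.cursor()
cur.execute("SELECT id, val FROM test1")     # the statement's cursor
while True:
    rows = cur.fetchmany(2)                  # like Fetch=2: two rows per fetch
    if not rows:
        break
    for row in rows:
        con.execute("SAVEPOINT s1")
        try:
            con.execute("INSERT INTO test2 VALUES (?, ?)", row)
            con.execute("RELEASE s1")
        except sqlite3.IntegrityError:
            con.execute("ROLLBACK TO s1")    # undo just this one row
            con.execute("RELEASE s1")        # and discard the savepoint
con.execute("COMMIT")
print(con.execute("SELECT COUNT(*) FROM test2").fetchone()[0])  # 5
```

The per-row savepoint means a failed INSERT poisons only its own savepoint, not the surrounding transaction, which is exactly what makes the pattern attractive with a long-lived fetch cursor.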

Unfortunately, PostgreSQL chooses index scans here, because it assumes
that a cursor will fetch only a fraction of the rows.
Without UseDeclareFetch, the program would crash, because the whole
result set would have to fit into client memory.

Conclusion:
These problem scenarios occur only with cursors maintained by an ODBC
statement handle.
Do you know whether there are any other such uses besides
UseDeclareFetch=1?

If the internal cursor is created and closed within a single ODBC
call, this problem scenario doesn't occur, or it can be fixed with
simple code.

When the program using ODBC has the responsibility for tracking and
closing its open cursors, savepoints work nicely without any
savepoint maintenance code inside psqlodbc. The application
programmer knows when its cursors exist and when they do not.

So maybe I have treated this problem as bigger than it is, and put
too much work into solving it after all.

Regards,
Marko

