Maybe those with tables of more than 8 million rows could move to 64-bit
operating systems. Memory hogging would no longer be a problem with a
big enough swap space.
So making sure the feature is not active would fix it.
PostgreSQL performs poorly with UseDeclareFetch by design: with
UseDeclareFetch, the backend assumes that only a few rows will be
fetched.
Maybe users are not prepared to move to 64 bit so quickly.
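
For reference, UseDeclareFetch roughly amounts to the driver declaring a
cursor and pulling rows from the backend in small batches with FETCH,
repeating the FETCH until the backend returns zero rows. Below is a
minimal libpq sketch of that loop; the connection string, cursor name
and table are only examples:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string and table name are made up for illustration. */
    PGconn   *conn = PQconnectdb("dbname=marko");
    PGresult *res;
    int       batch;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE mycur CURSOR FOR SELECT * FROM bigtable"));

    /* Keep issuing FETCH until the backend returns zero rows.
       Stopping after the first FETCH is the behaviour complained
       about below. */
    do
    {
        res = PQexec(conn, "FETCH 2 FROM mycur");
        batch = PQntuples(res);
        /* ... process the rows of this batch here ... */
        PQclear(res);
    } while (batch > 0);

    PQclear(PQexec(conn, "CLOSE mycur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}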
Now to the analysis of the problem:
The problem seems to be that with UseDeclareFetch=1 and Fetch=2,
the libpq psqlODBC driver issues the FETCH only once to the
PostgreSQL backend.
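
A minimal way to see the symptom from an application, assuming a DSN
named "marko" set up as in the .odbc.ini quoted below and some table
with more than two rows (the table name is made up):

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    long     count = 0;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR *) "marko", SQL_NTS,
               (SQLCHAR *) "", SQL_NTS, (SQLCHAR *) "", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    SQLExecDirect(stmt, (SQLCHAR *) "SELECT * FROM bigtable", SQL_NTS);

    /* With UseDeclareFetch=1 and Fetch=2 this loop should still walk
       the whole table; the reported behaviour is that it stops after
       the first two rows because only one FETCH is sent to the backend. */
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        count++;

    printf("fetched %ld rows\n", count);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}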
It would be nice if PGAPI_ExtendedFetch() could fetch more tuples
from the backend with FETCH once the first two tuples have been
processed. Right now it just sees that the FETCH returned two rows,
and after those two rows it does not fetch any more.
So I tracked the problem down with a debugger into PGAPI_ExtendedFetch.
It seems that in the earlier implementation SQLFetch somehow called
QR_fetch_tuples() to fetch more rows from the backend.
If QR_fetch_tuples() didn't return more rows, fetching from the backend
would stop.
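
In other words, the intended control flow is: hand out rows from the
cached batch, and when the batch is used up, issue another FETCH
instead of giving up; only an empty FETCH means end of data. The toy
program below models just that control flow. It is not the actual
psqlODBC source; all names here are made up, and in the real driver
the corresponding work happens around PGAPI_ExtendedFetch() and
QR_fetch_tuples():

#include <stdio.h>

struct cache
{
    int rows_in_cache;   /* rows returned by the last FETCH */
    int current_row;     /* next cached row to hand to the application */
    int fetched_total;   /* state for the fake backend below */
};

/* Fake backend: pretends the table has 7 rows and serves them in
   batches of 2, like "FETCH 2 FROM cursor". */
static int fetch_next_batch(struct cache *c)
{
    int remaining = 7 - c->fetched_total;
    int batch = remaining > 2 ? 2 : remaining;

    c->fetched_total += batch;
    return batch;
}

/* Returns 1 while a row is available, 0 at end of data.  The point:
   when the cached batch is used up, issue another FETCH and only
   stop once the backend returns zero rows. */
static int next_row(struct cache *c)
{
    if (c->current_row >= c->rows_in_cache)
    {
        c->rows_in_cache = fetch_next_batch(c);
        c->current_row = 0;
        if (c->rows_in_cache == 0)
            return 0;            /* backend really has no more rows */
    }
    c->current_row++;
    return 1;
}

int main(void)
{
    struct cache c = {0, 0, 0};
    int n = 0;

    while (next_row(&c))
        n++;

    printf("%d rows\n", n);      /* prints 7, not 2 */
    return 0;
}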
If the user application asks for the number of rows, the ODBC driver is
forced to read everything into memory.
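
One common way an application asks for that number is SQLRowCount.
ODBC does not promise a row count for a SELECT, so a cursor-based
driver that wants to report one has to FETCH the whole result first.
Dropped into the statement handling of the sketch above (stmt as
before), it would look like this:

    SQLLEN nrows = 0;

    SQLExecDirect(stmt, (SQLCHAR *) "SELECT * FROM bigtable", SQL_NTS);

    /* To answer this while UseDeclareFetch is on, the driver would have
       to FETCH everything from the cursor first, i.e. read the whole
       result set into memory. */
    SQLRowCount(stmt, &nrows);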
Regards,
Marko Ristola
Hiroshi Saito wrote:
>Hi Marko.
>
>It is strange...
>
>
>
>>So I get the above result by configuring .odbc.ini:
>>[marko]
>>Fetch = 2
>>UseDeclareFetch = 1
>>
>>
>
>I do not see any problems with the driver for Windows.
>Probably it is a portion peculiar to the Linux version??
>On Windows, though, CACHE is used as FETCH.
>
>Although I want to see the log, Anoop or Dave may be able to
>tell immediately. :-)
>
>Regards,
>Hiroshi Saito