Hi.
I haven't read all of your emails.
If you use psqlodbc, you need the following: on 32-bit operating
systems, the psqlodbc driver must be configured to handle
several million records.
The ODBC data source must have:
UseDeclareFetch=1
Fetch=32
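For reference, these options usually go into the DSN definition. A minimal odbc.ini sketch (the DSN name, database, and host below are made-up placeholders; the driver name must match your odbcinst.ini):

```ini
[PostgreSQL-bigdata]            ; hypothetical DSN name
Driver          = PostgreSQL    ; must match the entry in odbcinst.ini
Database        = mydb          ; placeholder database
Servername      = localhost
Port            = 5432
; Use a server-side cursor (DECLARE ... / FETCH ...) instead of
; buffering the whole result set in client memory:
UseDeclareFetch = 1
; Number of rows fetched per FETCH round trip:
Fetch           = 32
```

The same keywords can also be passed in the ODBC connection string, e.g. `UseDeclareFetch=1;Fetch=32`.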
This enables a feature that uses a roughly constant amount of memory
during a SELECT, so you can query tens of millions of rows without a
memory allocation failure. That seems to be your problem: a memory
allocation failure.
Without the above configuration, the crash might come at around 8
million rows. The exact crash point depends on the operating system and
its version, as well as the average in-memory size of one query result row.
The above configuration option also affects query plans,
but at least there is no crash.
I don't know whether the application itself supports so many rows,
but psqlodbc should be fine with the correct options. It would be nice
if you could verify that your psqlodbc driver version
works with a constant amount of memory.
Regards, Marko Ristola
Greg Campbell wrote:
> I hope using the driver version Dave P. suggests solves your problem.
>
> It sounds difficult to troubleshoot. I would say use care when turning
> logging on at the server. Not so much because of resources, but you
> need to configure PostgreSQL for what you want to log. That is edit
> the postgresql.conf file. Note the log_statement parameter. You could
> start with just logging connections to see if any fail. It seems like
> it would be difficult to log millions of transactions to find one
> error. That's a heck of a log file to look through. And turning on ODBC
> logging (client side via the ODBC Administrator) could (would) be even
> more taxing.
>