Hi,
I am not using "UseDeclareFetch", and I do have "UseServerSidePrepare=1"
set, but we still see over 40 minutes to fetch over a million rows
between a couple of powerful machines. Debug/trace logging is not
enabled either. Both the server and the client are Red Hat Linux
machines, and I am using prepared statements. I am testing with a small
Perl utility, but the results are the same with isql.
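For reference, the Perl test is essentially the following (just a
sketch; the DSN name "pgtest", the table name "big_table" and the
credentials are placeholders, and UseServerSidePrepare=1 with no
UseDeclareFetch is what is set in the DSN definition in odbc.ini):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # "pgtest" is a placeholder DSN; UseServerSidePrepare=1 is set there
    # and UseDeclareFetch is left at its default (off).
    my $dbh = DBI->connect("dbi:ODBC:pgtest", "user", "password",
                           { RaiseError => 1, AutoCommit => 1 });

    # Server-side prepared statement; "big_table" stands in for our
    # ~1M-row table.
    my $sth = $dbh->prepare("SELECT * FROM big_table");
    $sth->execute();

    my $rows = 0;
    while (my $row = $sth->fetchrow_arrayref) {
        $rows++;             # nothing else is done per row in the test
    }
    print "fetched $rows rows\n";

    $sth->finish;
    $dbh->disconnect;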
I have read through the ODBC driver code, and a couple of things I observed were:
1) All the tuples from the server are pre-fetched by the driver before
a single row is returned to the user, so the million rows are pulled
into the driver first (I was not sure whether they are all kept memory
resident or written to temporary disk space). Can somebody comment on
the memory implications? This phase completes fairly quickly; the PG
server is responsive in returning the data and we see no network
delays. We also ran independent tests to make sure the link is running
full duplex.
2) From the debug logs, we see significant time spent in the "copy and
convert" operation, where the text response from the PG server is
converted to the respective data types. Is there any way to configure
the PG database and the ODBC driver to return the data in binary
format, to minimize this step? CPU utilization is pegged during this
phase.
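To see roughly where the time goes, I also split the timing between
execute() and the fetch loop in the same Perl script (rough sketch,
reusing $dbh/$sth from above; since the driver retrieves the whole
result set at execute time, the fetch loop should be mostly the copy
and convert work):

    use Time::HiRes qw(gettimeofday tv_interval);

    my $t0 = [gettimeofday];
    $sth->execute();                  # full result set pulled from server
    my $t1 = [gettimeofday];

    my $n = 0;
    $n++ while $sth->fetchrow_arrayref;   # per-row copy and convert
    my $t2 = [gettimeofday];

    printf "retrieve from server: %.1f s\n", tv_interval($t0, $t1);
    printf "fetch/convert loop:   %.1f s\n", tv_interval($t1, $t2);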
Is there perhaps a way to re-compile the driver so that 1 and 2 above
happen in parallel, to reduce the total time? Any other techniques
anyone can suggest would be really appreciated.
Thank you in advance for any advice you can share.
Ramesh..