I am trying to optimize performance on larger SELECT result sets:
If I connect with *...UseDeclareFetch=0...*, SQLGetDiagField(...
SQL_DIAG_CURSOR_ROW_COUNT...) or SQLRowCount delivers the number of rows
found by the SELECT. My ODBC wrapper needs to read the row count.
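For reference, this is roughly how my wrapper reads the row count after the
SELECT has been executed (a minimal sketch; hstmt is assumed to be an
already-executed statement handle and error handling is reduced to simple
return-code checks):

#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

/* Minimal sketch: read the row count of an already executed SELECT.
   hstmt must be a valid statement handle on which the SELECT was run. */
void print_row_count(SQLHSTMT hstmt)
{
    SQLLEN rows = -1;

    /* Variant 1: SQLRowCount (only delivers the count here with
       UseDeclareFetch=0). */
    if (SQL_SUCCEEDED(SQLRowCount(hstmt, &rows)))
        printf("SQLRowCount: %ld\n", (long) rows);

    /* Variant 2: SQL_DIAG_CURSOR_ROW_COUNT from the statement
       diagnostics header (record number 0). */
    SQLLEN diag_rows = -1;
    if (SQL_SUCCEEDED(SQLGetDiagField(SQL_HANDLE_STMT, hstmt, 0,
                                      SQL_DIAG_CURSOR_ROW_COUNT,
                                      &diag_rows, 0, NULL)))
        printf("SQL_DIAG_CURSOR_ROW_COUNT: %ld\n", (long) diag_rows);
}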
BUT: The SELECT needs MUCH more time (vs. UseDeclareFetch=1) and a lot of
memory (~100 MByte) is eaten up until the cursor is closed. As far as I
understand, the whole result set is read by the client, which may also be a
problem on slow connections to the server. The number of cached rows should
therefore be controllable via *Fetch=XXX* in the connection string. But
whatever I assign to "Fetch", the return time of the SELECT and the memory
usage stay nearly the same.
Now I have tried *...UseDeclareFetch=1...*. The return time is MUCH better.
BUT: Now SQLRowCount returns -1.
One idea: executing SELECT ..., (SELECT count(*) WHERE {MyConditions}) AS
__ROWCOUNT WHERE {MyConditions}, as sketched below. I suppose that would
roughly double the time for a complicated evaluation.
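A rough sketch of that idea, with placeholder table, column and condition
names; the count would be read from the extra __ROWCOUNT column of the
first fetched row:

#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

/* Sketch of the __ROWCOUNT idea: the total count travels along as an
   extra column. Table, columns and condition are placeholders. */
void select_with_rowcount(SQLHSTMT hstmt)
{
    SQLCHAR query[] =
        "SELECT col1, col2, "
        "(SELECT count(*) FROM mytable WHERE mycondition) AS __ROWCOUNT "
        "FROM mytable WHERE mycondition";

    if (!SQL_SUCCEEDED(SQLExecDirect(hstmt, query, SQL_NTS)))
        return;

    if (SQL_SUCCEEDED(SQLFetch(hstmt))) {
        SQLBIGINT rowcount = 0;
        SQLLEN ind = 0;
        /* __ROWCOUNT is the 3rd column in this example. */
        SQLGetData(hstmt, 3, SQL_C_SBIGINT, &rowcount,
                   sizeof(rowcount), &ind);
        printf("__ROWCOUNT: %lld\n", (long long) rowcount);
        /* ...process the first row and keep fetching as usual... */
    }
}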
Any ideas how to get the row count?
Or how to optimize SELECTs with UseDeclareFetch=0?
I'm quite a newbie to PostgreSQL and I'm fairly sure that I have overlooked
something.
Any help would be greatly welcome!