Thread: Compression on ODBC?
Hi,

I have an application which dials a modem and connects to Postgres using ODBC. When I call the SQLExecDirect function to get the (quite large: 10000 rows) result set, it takes a long time to execute. The query itself is indexed and runs very quickly from psql. I presume the delay is because ODBC is downloading all the rows. Is there any way to either:

1) Use compression on the recordset to make the call faster?
2) Download each row one at a time, when SQLFetch is called?

Also, I don't seem to be able to find an ODBC function to return the number of records in a result set. Is there one?

TIA,
Mark.
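For reference, the call pattern being described looks roughly like this (a sketch only: error checking is omitted and the table/column names "bigtable"/"val" are invented for illustration):

    /* sketch of the pattern described above: one SQLExecDirect, then a
       SQLFetch loop over the bound columns */
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    void run_query(SQLHDBC hdbc)
    {
        SQLHSTMT   hstmt;
        SQLINTEGER val;
        SQLLEN     ind, nrows;

        SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

        /* this is the call that takes a long time over the modem link */
        SQLExecDirect(hstmt,
            (SQLCHAR *) "SELECT val FROM bigtable WHERE key = 42", SQL_NTS);

        /* SQLRowCount is only guaranteed for INSERT/UPDATE/DELETE; for a
           SELECT the value is driver-dependent, which is likely why no
           obvious "number of records" call turns up */
        SQLRowCount(hstmt, &nrows);

        SQLBindCol(hstmt, 1, SQL_C_SLONG, &val, 0, &ind);
        while (SQLFetch(hstmt) != SQL_NO_DATA)
            ;   /* process each row here */

        SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
    }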
Hi, I tried to install PostgreSQL on my Win98 machine using Cygwin, but it failed. Can anyone help?

<Message when running ./configure in sh>

loading cache ./config.cache
checking host system type... i486-pc-cygwin
checking echo setting...
checking setting template to... cygwin32
checking whether to support locale... disabled
checking whether to support cyrillic recode... disabled
checking whether to support multibyte... disabled
checking setting DEF_PGPORT... 5432
checking setting DEF_MAXBACKENDS... 32
checking setting USE_TCL... disabled
checking setting USE_PERL... disabled
checking setting USE_ODBC... enabled
checking setproctitle... disabled
checking setting ODBCINST...
checking setting ASSERT CHECKING... disabled
checking for gcc... gcc
checking whether the C compiler (gcc -O2 ) works... no
configure: error: installation or configuration problem: c compiler cannot create executables.
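That "cannot create executables" error means configure's very first compiler check failed, so nothing PostgreSQL-specific has gone wrong yet; config.log in the build directory records the compiler's actual output. A quick way to reproduce the failure by hand is to compile a trivial program with the same flags (a sketch; the file name is arbitrary):

    /* hello.c -- build with:  gcc -O2 hello.c -o hello
       If this fails under Cygwin, configure's "C compiler works" test
       fails the same way, and the error printed here is the real cause
       (often an incomplete Cygwin gcc/binutils install or a PATH problem). */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from cygwin\n");
        return 0;
    }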
> ... ODBC is downloading all the rows. Is there any way to either:
> 1) Use compression on the recordset to make the call faster?

You can run through an ssh tunnel, which (I'm guessing) may compress the result.

> 2) Download each row one at a time, when SQLFetch is called?

Use cursors. AFAIK, the ODBC driver *does* allow an app to retrieve rows as they are available on the wire (it implements its own wire interface to allow this, for historical reasons). But I'll guess that the app is waiting for the complete set.

Good luck!

- Thomas
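A rough sketch of the cursor approach through ODBC, for anyone who wants to try it by hand. It assumes the driver passes these statements through to the backend unchanged, that autocommit is not fighting the manual BEGIN, and that the result columns were already bound with SQLBindCol; the cursor and table names are invented, and the headers are the same as in the sketch further up the thread.

    /* sketch: explicit cursor, fetched in batches of 100 rows */
    void fetch_in_batches(SQLHSTMT hstmt)
    {
        SQLExecDirect(hstmt, (SQLCHAR *) "BEGIN", SQL_NTS);
        SQLExecDirect(hstmt,
            (SQLCHAR *) "DECLARE c CURSOR FOR SELECT val FROM bigtable",
            SQL_NTS);

        for (;;)
        {
            int got_rows = 0;

            SQLExecDirect(hstmt, (SQLCHAR *) "FETCH 100 FROM c", SQL_NTS);
            while (SQLFetch(hstmt) != SQL_NO_DATA)
            {
                got_rows = 1;
                /* process the bound columns of the current row */
            }
            SQLFreeStmt(hstmt, SQL_CLOSE);  /* release this batch's result set */

            if (!got_rows)                  /* empty batch: cursor exhausted */
                break;
        }

        SQLExecDirect(hstmt, (SQLCHAR *) "CLOSE c", SQL_NTS);
        SQLExecDirect(hstmt, (SQLCHAR *) "COMMIT", SQL_NTS);
    }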
Mark Alliban wrote:

> Hi,
>
> I have an application which dials a modem and connects to Postgres using
> ODBC. When I call the SQLExecDirect function to get the (quite large: 10000
> rows) result set, it takes a long time to execute. The query itself is
> indexed and runs very quickly from psql. I presume the delay is because
> ODBC is downloading all the rows. Is there any way to either:
>
> 1) Use compression on the recordset to make the call faster?
> 2) Download each row one at a time, when SQLFetch is called?
>
> Also, I don't seem to be able to find an ODBC function to return the number
> of records in a result set. Is there one?
>
> TIA,
> Mark.

It's been a while since I have used the ODBC driver, but I believe if you set the Declare/Fetch driver option you will get a better response time. This option uses a cursor to bring in the rows in sets; I believe the default is to cache 100 rows at a time, on demand. Kind of a lazy retrieval thing. The side effect of using this option is that as long as that record set is in use, you must open another, separate connection to perform other database operations. Hope this helps.
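If you would rather not go through the DSN setup dialog, something like the following should also work from a connection string. This is a sketch: the UseDeclareFetch and Fetch keyword names are from memory, so check them against your driver version, and the host, database, and credentials are placeholders.

    /* sketch: enable the driver's Declare/Fetch cursor mode, with a
       100-row cache, straight from the connection string */
    SQLRETURN connect_with_declare_fetch(SQLHDBC hdbc)
    {
        SQLCHAR     out[1024];
        SQLSMALLINT outlen;

        return SQLDriverConnect(hdbc, NULL,
            (SQLCHAR *) "DRIVER={PostgreSQL};SERVER=myhost;DATABASE=mydb;"
                        "UID=me;PWD=secret;UseDeclareFetch=1;Fetch=100",
            SQL_NTS, out, (SQLSMALLINT) sizeof(out), &outlen,
            SQL_DRIVER_NOPROMPT);
    }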
"David C. Hartwig Jr" wrote: > Mark Alliban wrote: > > > Hi, > > > > I have an application which dials a modem, and connects to Postgres using > > ODBC. When I call the SQLExecDirect function to get the (quite large: 10000 > > rows) result set, it takes a long time to execute. The query itself is > > indexed and runs very quickly from psql. I presume the delay is because the > > ODBC is downloading all the rows. Is there any way to either: > > > > 1) Use compression on the recordset to make the call faster? > > 2) Download each row one at a time, when SQLFetch is called? > > > > Also I don't seem to be able to find an ODBC function to return the number > > of records in a results set. Is there one? > > > > TIA, > > Mark. > > Its been a while since I have used the ODBC driver, but I believe if you set > the Declare/Fetch driver option you will get a better response time. This > option uses a cursor to bring in the rows in set. I believe the default is > to cache 100 rows at a time - on demand. Kind of a lazy retrieval thing. > The side effect of using this option is that as long as that record set is in > use, you must open another separate connection to perform other database > operations. Hope this helps. Correction. You won't need to open another connection unless you plan to do separate transactions with in the scope of an open query statement. PostgreSQL requires cursors to be executed within a truncation block. As such the ODBC driver opens a truncation for all queries when using Declare/Fetch. And since PostgreSQL does not support nested transactions, additional transactions may not be invoked within the scope of an open query statement.