Thread: Data transfer very slow when connected via DSL
Hello all,

one of my customers installed Postgres on a public server to access the data from several places. The problem is that it takes _ages_ to transfer data from the database to the client app. At first I suspected a problem with the ODBC driver and my application, but using pgAdminIII 1.6.3.6112 (on Windows XP) gives the same result.

In table "tblItem" there are exactly 50 records stored. The table has 58 columns: 5 character varying and the rest integer. As far as I can tell the Postgres installation is o.k.

SELECT VERSION()
"PostgreSQL 8.2.4 on i386-portbld-freebsd6.2, compiled by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305"

EXPLAIN ANALYZE SELECT * FROM "tblItem"
"Seq Scan on "tblItem" (cost=0.00..2.50 rows=50 width=423) (actual time=0.011..0.048 rows=50 loops=1)"
"Total runtime: 0.150 ms"

The database computer is connected via a 2MBit SDL connection. I myself have a 768/128 KBit ADSL connection and pinging the server takes 150ms on average.

In the pgAdminIII Query Tool the following command takes 15-16 seconds:
SELECT * FROM "tblItem"

During the first 2 seconds the D/L speed is 10-15KB/s. The remaining time the U/L and D/L speed is constant at 1KB/s. My customer reported that the same query takes 2-3 seconds for him (with 6MBit ADSL and 50ms ping).

So my questions are:
* Could there be anything wrong with the server configuration?
* Is the ping difference between the customer's machine and mine responsible for the difference in the query execution time?
* Is this normal behaviour or could this be improved somehow?

Thanks in advance for any help.

Rainer

PS: I tried selecting only some of the columns from the table and the speed is proportional to the number of columns which must be returned. For example selecting all 5 character columns takes 2 seconds. Selecting 26 integer columns takes 7-8 seconds and selecting all integer columns takes 14 seconds.
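As a rough back-of-envelope check on the numbers above (treating the planner's width estimate as an approximation of the on-the-wire row size): the EXPLAIN output reports width=423, so the whole result set is only about 50 × 423 bytes, roughly 21 KB. Even the 768 kbit/s downstream link should move that in well under a second, so raw bandwidth alone cannot explain 15-16 seconds; the sustained 1 KB/s transfer rate rather suggests many small, latency-bound exchanges with the server.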
Hello Rainer,
I do not have a solution, but I can confirm the problem :)
One PostgreSQL installation: server 8.1 and 8.2 on Windows at the central site; various others connected via VPN. Queries are subsecond when run locally (including data transfer), and take up to 10 seconds and more via VPN, even in "off-hours".
The data transfer is done via pgAdmin or via the psycopg2 Python database adapter; nothing with ODBC or similar in between.
I did not find a solution so far; and for bulk data transfers I now programmed a workaround.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!
Rainer Bauer <usenet@munnin.com> writes:
> one of my customers installed Postgres on a public server to access the data
> from several places. The problem is that it takes _ages_ to transfer data from
> the database to the client app. At first I suspected a problem with the ODBC
> driver and my application, but using pgAdminIII 1.6.3.6112 (on Windows XP)
> gives the same result.

I seem to recall that we've seen similar reports before, always
involving Windows :-(. Check whether you have any nonstandard
components hooking into the network stack on that machine.

			regards, tom lane
Hello Harald,

>I do not have a solution, but I can confirm the problem :)

At least that rules out any misconfiguration issues :-(

>I did not find a solution so far; and for bulk data transfers I now
>programmed a workaround.

But that is surely based on some component installed on the server, isn't it?

To be honest I didn't expect top performance, but the speed I got suggested
some error on my part.

Rainer
Hello Tom,

>I seem to recall that we've seen similar reports before, always
>involving Windows :-(. Check whether you have any nonstandard
>components hooking into the network stack on that machine.

I just repeated the test by booting into "Safe Mode with Network Support", but
the results are the same. So I don't think that's the cause.

Apart from that, what response times could I expect?

Rainer
I wrote:
>Hello Harald,
>
>>I do not have a solution, but I can confirm the problem :)
>
>At least that rules out any misconfiguration issues :-(

I did a quick test with my application and enabled the ODBC logging.

Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
examining the log I found what I suspected: the performance is directly
related to the ping time to the server since fetching one tuple requires a
round trip to the server.

Rainer

PS: I wonder why pgAdminIII requires twice the time to retrieve the data.
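A quick sanity check of that reading: at one round trip per tuple and the ~150 ms ping reported earlier in the thread, 50 tuples cost about 50 × 0.15 s ≈ 7.5 s in latency alone, which lines up well with the 8 seconds measured without logging.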
Hi Rainer,

but did you try to execute your query directly from 'psql' ?...

Why I'm asking: seems to me your case is probably just network latency
dependent, and what I noticed during last benchmarks with PostgreSQL
the SELECT query become very traffic hungry if you are using CURSOR.
Program 'psql' is implemented to not use CURSOR by default, so it'll
be easy to check if you're meeting this issue or not just by executing
your query remotely from 'psql'...

Rgds,
-Dimitri

On 6/21/07, Rainer Bauer <usenet@munnin.com> wrote:
> Hello Tom,
>
> >I seem to recall that we've seen similar reports before, always
> >involving Windows :-(. Check whether you have any nonstandard
> >components hooking into the network stack on that machine.
>
> I just repeated the test by booting into "Safe Mode with Network Support",
> but the results are the same. So I don't think that's the cause.
>
> Apart from that, what response times could I expect?
>
> Rainer
Hello Dimitri,

>but did you try to execute your query directly from 'psql' ?...

munnin=>\timing
munnin=>select * from "tblItem";
<data snipped>
(50 rows)
Time: 391,000 ms

>Why I'm asking: seems to me your case is probably just network latency
>dependent, and what I noticed during last benchmarks with PostgreSQL
>the SELECT query become very traffic hungry if you are using CURSOR.
>Program 'psql' is implemented to not use CURSOR by default, so it'll
>be easy to check if you're meeting this issue or not just by executing
>your query remotely from 'psql'...

Yes, see also my other post.

Unfortunately this means that using my program to connect via DSL to the
Postgres database is not possible.

Rainer
Let's stay optimistic - at least now you know the main source of your problem! :))

Let's see now with CURSOR...

Firstly try this:

munnin=>\timing
munnin=>\set FETCH_COUNT 1;
munnin=>select * from "tblItem";

what's the time you see here? (I think your application is working in
this manner)

Now, change the FETCH_COUNT to 10, then 50, then 100 - your query execution
time should be better (at least I hope so :))

And if it's better - you simply need to modify your FETCH clause with an
adapted "FORWARD #" value (the best example is the psql source code itself;
you may find the ExecQueryUsingCursor function implementation in file
common.c)...

Rgds,
-Dimitri

On 6/22/07, Rainer Bauer <usenet@munnin.com> wrote:
> Hello Dimitri,
>
> >but did you try to execute your query directly from 'psql' ?...
>
> munnin=>\timing
> munnin=>select * from "tblItem";
> <data snipped>
> (50 rows)
> Time: 391,000 ms
>
> >Why I'm asking: seems to me your case is probably just network latency
> >dependent, and what I noticed during last benchmarks with PostgreSQL
> >the SELECT query become very traffic hungry if you are using CURSOR.
> >Program 'psql' is implemented to not use CURSOR by default, so it'll
> >be easy to check if you're meeting this issue or not just by executing
> >your query remotely from 'psql'...
>
> Yes, see also my other post.
>
> Unfortunately this means that using my program to connect via DSL to the
> Postgres database is not possible.
>
> Rainer
Rainer Bauer wrote:
> Hello Dimitri,
>
>> but did you try to execute your query directly from 'psql' ?...
>
> munnin=>\timing
> munnin=>select * from "tblItem";
> <data snipped>
> (50 rows)
> Time: 391,000 ms
>
>> Why I'm asking: seems to me your case is probably just network latency
>> dependent, and what I noticed during last benchmarks with PostgreSQL
>> the SELECT query become very traffic hungry if you are using CURSOR.
>> Program 'psql' is implemented to not use CURSOR by default, so it'll
>> be easy to check if you're meeting this issue or not just by executing
>> your query remotely from 'psql'...
>
> Yes, see also my other post.
>
> Unfortunately this means that using my program to connect via DSL to the
> Postgres database is not possible.

Note that I'm connected via wireless lan here at work (our wireless lan
doesn't connect to our internal lan directly due to PCI issues) then to our
internal network via VPN. We are using Cisco with Cisco's vpn client software.
I am running Fedora core 4 on my laptop and I can fetch 10,000 rather chubby
rows (a hundred or more bytes each) in about 7 seconds.

So, postgresql over vpn works fine here. Note, no windows machines were
involved in the making of this email. One is doing the job of tossing it on
the internet when I hit send though.
Rainer Bauer <usenet@munnin.com> writes:
> Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
> examining the log I found what I suspected: the performance is directly
> related to the ping time to the server since fetching one tuple requires a
> round trip to the server.

Hm, but surely you can get it to fetch more than one row at once?

This previous post says that someone else solved an ODBC
performance problem with UseDeclareFetch=1:
http://archives.postgresql.org/pgsql-odbc/2006-08/msg00014.php

It's not immediately clear why pgAdmin would have the same issue,
though, because AFAIK it doesn't rely on ODBC.

I just finished looking through our archives for info about
Windows-specific network performance problems. There are quite a few
threads, but the ones that were solved seem not to bear on your problem
(unless the one above does). I found one pretty interesting thread
suggesting that the problem was buffer-size dependent:
http://archives.postgresql.org/pgsql-performance/2006-12/msg00269.php
but that tailed off with no clear resolution. I think we're going to
have to get someone to watch the problem with a packet sniffer before
we can get much further.

			regards, tom lane
Tom,

seems to me the problem here is rather simple: the current issue depends
completely on the low-level 'implementation' of the SELECT query in the
application. In case it's implemented with "DECLARE ... CURSOR ..." and then
"FETCH NEXT" by default (the most common case) it brings the application into
a "ping-pong condition" with the database server: each next FETCH is possible
only once the previous one is finished and the server has received feedback
from the client with an explicit fetch-next order.

In this condition the query response time becomes completely network latency
dependent:

- each packet send/receive has a significant cost
- you cannot reduce this cost as you cannot group more data within a single
  packet, and you waste your traffic
- that's why TCP_NODELAY becomes so important here
- with 150ms network latency the cost is ~300ms per FETCH (15sec(!) for 50 lines)

You may think that if you're working in a LAN and your network latency is
0.1ms you're not concerned by this issue - but in reality yes, you're
impacted! Each network card/driver has its own max packet/sec traffic
capability (independent of volume) and once you hit it your response time may
only degrade with more concurrent sessions (even if your CPU usage is still
low)...

The solution here is simple:

- don't use CURSOR in simple cases when you're just reading/printing SELECT
  results
- in case it's too late to adapt your code or you absolutely need CURSOR for
  some reason: replace the default "FETCH" or "FETCH NEXT" by "FETCH 100"
  (100 rows will generally be enough); normally it'll work just straight
  forward (otherwise check that you're verifying the PQntuples() value
  correctly and looping to read all tuples)

To keep the default network workload more optimal, I think we need to make
"FETCH N" more popular for developers and enable it (even hidden) by default
in any ODBC/JDBC and other generic modules...

Rgds,
-Dimitri

On 6/22/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Rainer Bauer <usenet@munnin.com> writes:
> > Fetching the 50 rows takes 12 seconds (without logging 8 seconds) and
> > examining the log I found what I suspected: the performance is directly
> > related to the ping time to the server since fetching one tuple requires a
> > round trip to the server.
>
> Hm, but surely you can get it to fetch more than one row at once?
>
> This previous post says that someone else solved an ODBC
> performance problem with UseDeclareFetch=1:
> http://archives.postgresql.org/pgsql-odbc/2006-08/msg00014.php
>
> It's not immediately clear why pgAdmin would have the same issue,
> though, because AFAIK it doesn't rely on ODBC.
>
> I just finished looking through our archives for info about
> Windows-specific network performance problems. There are quite a few
> threads, but the ones that were solved seem not to bear on your problem
> (unless the one above does). I found one pretty interesting thread
> suggesting that the problem was buffer-size dependent:
> http://archives.postgresql.org/pgsql-performance/2006-12/msg00269.php
> but that tailed off with no clear resolution. I think we're going to
> have to get someone to watch the problem with a packet sniffer before
> we can get much further.
>
> 			regards, tom lane
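A minimal sketch of the "FETCH 100" pattern Dimitri describes, written here with psycopg2 (which the thread mentions) rather than ODBC; the connection parameters and cursor name are placeholders:

    import psycopg2

    # Placeholder connection parameters.
    conn = psycopg2.connect(host="db.example.org", dbname="munnin", user="rainer")
    cur = conn.cursor()        # psycopg2 opens the transaction implicitly

    cur.execute('DECLARE item_cur CURSOR FOR SELECT * FROM "tblItem"')
    rows = []
    while True:
        # Still one network round trip per FETCH, but 100 rows per trip instead of 1.
        cur.execute("FETCH FORWARD 100 FROM item_cur")
        batch = cur.fetchall()
        if not batch:
            break
        rows.extend(batch)
    cur.execute("CLOSE item_cur")
    conn.commit()
    conn.close()

For a 50-row result this means a single data-carrying FETCH plus one empty FETCH to detect the end, instead of 50 individual ones.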
Hello Tom,

>This previous post says that someone else solved an ODBC
>performance problem with UseDeclareFetch=1:

I thought about that too, but enabling UseDeclareFetch will slow down the
query: it takes 30 seconds instead of 8.

>It's not immediately clear why pgAdmin would have the same issue,
>though, because AFAIK it doesn't rely on ODBC.

No it doesn't. That's the reason I used it to verify the behaviour.

But I remember Dave Page mentioning using a virtual list control to display
the results and that means a round trip for every tuple.

>I just finished looking through our archives for info about
>Windows-specific network performance problems.

I don't think it's a Windows-specific problem, because psql is doing the job
blindingly fast. The problem lies in the way my application is coded. See the
response to Dimitri for details.

Rainer
Hello Dimitri,

>Let's stay optimistic - at least now you know the main source of your problem! :))
>
>Let's see now with CURSOR...
>
>Firstly try this:
>munnin=>\timing
>munnin=>\set FETCH_COUNT 1;
>munnin=>select * from "tblItem";
>
>what's the time you see here? (I think your application is working in
>this manner)

That's it! It takes exactly 8 seconds like my program.

I retrieve the data through a bound column:
SELECT * FROM tblItem WHERE intItemIDCnt = ?

After converting this to
SELECT * FROM tblItem WHERE intItemIDCnt IN (...)
the query is as fast as psql: 409ms

So the problem is identified and the solution is to recode my application.

Rainer

PS: When enabling UseDeclareFetch as suggested by Tom then the runtime is
still three times slower: 1192ms. But I guess that problem is for the ODBC
list.
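In psycopg2 terms (not the ODBC calls Rainer actually uses), the shape of that change is roughly the following; item_ids and the connection parameters are placeholders:

    import psycopg2

    conn = psycopg2.connect(host="db.example.org", dbname="munnin")  # placeholder
    cur = conn.cursor()
    item_ids = [1, 2, 3]  # hypothetical keys

    # Before: one statement per key, i.e. one client/server round trip each.
    for item_id in item_ids:
        cur.execute('SELECT * FROM "tblItem" WHERE intItemIDCnt = %s', (item_id,))
        row = cur.fetchone()

    # After: one statement for all keys, a single round trip for the whole set.
    # psycopg2 expands a Python tuple into a parenthesized list for IN.
    cur.execute('SELECT * FROM "tblItem" WHERE intItemIDCnt IN %s', (tuple(item_ids),))
    rows = cur.fetchall()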
Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...

Rgds,
-Dimitri

On 6/22/07, Rainer Bauer <usenet@munnin.com> wrote:
> Hello Dimitri,
>
> >Let's stay optimistic - at least now you know the main source of your
> >problem! :))
> >
> >Let's see now with CURSOR...
> >
> >Firstly try this:
> >munnin=>\timing
> >munnin=>\set FETCH_COUNT 1;
> >munnin=>select * from "tblItem";
> >
> >what's the time you see here? (I think your application is working in
> >this manner)
>
> That's it! It takes exactly 8 seconds like my program.
>
> I retrieve the data through a bound column:
> SELECT * FROM tblItem WHERE intItemIDCnt = ?
>
> After converting this to
> SELECT * FROM tblItem WHERE intItemIDCnt IN (...)
> the query is as fast as psql: 409ms
>
> So the problem is identified and the solution is to recode my application.
>
> Rainer
>
> PS: When enabling UseDeclareFetch as suggested by Tom then the runtime is
> still three times slower: 1192ms. But I guess that problem is for the ODBC
> list.
Hello Dimitri,

>Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...

Yes I tried it with different values and it's like you suspected:

FETCH_COUNT   1     Time: 8642,000 ms
FETCH_COUNT   5     Time: 2360,000 ms
FETCH_COUNT  10     Time: 1563,000 ms
FETCH_COUNT  25     Time: 1329,000 ms
FETCH_COUNT  50     Time: 1140,000 ms
FETCH_COUNT 100     Time:  969,000 ms

\unset FETCH_COUNT  Time:  390,000 ms

Rainer
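Those timings fit a simple latency model reasonably well. A small sketch (the 150 ms round-trip time and the ~0.4 s base cost are assumptions taken from the figures earlier in the thread, not measured values):

    import math

    rtt, base, rows = 0.150, 0.4, 50   # assumed round-trip time, fixed cost, row count
    for fetch_count in (1, 5, 10, 25, 50, 100):
        round_trips = math.ceil(rows / fetch_count)
        print(fetch_count, round(base + round_trips * rtt, 2), "s")

This reproduces the shape of the measurements above (about 7.9 s predicted vs 8.6 s measured for FETCH_COUNT 1); the remaining gap is largely the extra BEGIN/DECLARE/CLOSE/COMMIT round trips that cursor mode adds, as Dimitri explains further down.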
Rainer Bauer wrote:
>> It's not immediately clear why pgAdmin would have the same issue,
>> though, because AFAIK it doesn't rely on ODBC.
>
> No it doesn't. That's the reason I used it to verify the behaviour.
>
> But I remember Dave Page mentioning using a virtual list control to display
> the results and that means a round trip for every tuple.

pgAdmin's Query Tool (which I assume you're using) uses an async query via
libpq to populate a virtual table behind the grid. The query handling can be
seen in pgQueryThread::execute() at
http://svn.pgadmin.org/cgi-bin/viewcvs.cgi/trunk/pgadmin3/pgadmin/db/pgQueryThread.cpp?rev=6082&view=markup

When the query completes, a dataset object (basically a wrapper around a
PGresult) is attached to the grid control. As the grid renders each cell, it
requests the value to display, which results in a call to PQgetValue. This is
how the old display time was eliminated - cells are only rendered when they
become visible for the first time, meaning that the query executes in pgAdmin
in the time it takes for the async query to complete plus (visible rows *
visible columns) PQgetValue calls.

> I don't think it's a Windows-specific problem, because psql is doing the job
> blindingly fast. The problem lies in the way my application is coded. See the
> response to Dimitri for details.

I don't see why pgAdmin should be slow though - it should be only marginally
slower than psql I would think (assuming there are no thinkos in our code that
none of us ever noticed).

Regards, Dave.
Dave Page wrote:
> I don't see why pgAdmin should be slow though - it should be only
> marginally slower than psql I would think (assuming there are no thinkos
> in our code that none of us ever noticed).

Nevermind...

/D
Rainer,

>>I did not find a solution so far; and for bulk data transfers I now
>>programmed a workaround.
>
>But that is surely based on some component installed on the server, isn't it?

Correct. I use a pyro-remote server. On request this remote server copies the relevant rows into a temporary table, uses a copy_to call to push them into a StringIO object (that's Python's version of an "in-memory file"), serializes that StringIO object, does a bz2 compression and transfers the whole block via VPN.
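A minimal sketch of the copy_to + bz2 step described above, assuming psycopg2; the connection parameters and the temporary table name are placeholders, and the Pyro/VPN transport is left out:

    import bz2
    import io
    import psycopg2

    conn = psycopg2.connect(host="db.example.org", dbname="mydb")  # placeholder
    cur = conn.cursor()

    buf = io.StringIO()                      # the "in-memory file"
    cur.copy_to(buf, "tmp_relevant_rows")    # COPY the temp table into the buffer
    payload = bz2.compress(buf.getvalue().encode("utf-8"))
    # 'payload' is what would travel over the Pyro/VPN link; the receiving side
    # would bz2.decompress() it and parse the tab-separated rows.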
I read on in this thread, and I scheduled to check on psycopg2 and what it is doing with cursors.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!
>> I did not find a solution so far; and for bulk data transfers I now
>> programmed a workaround.
>>
>> But that is surely based on some component installed on the server, isn't
>> it?
>
> Correct. I use a pyro-remote server. On request this remote server copies
> the relevant rows into a temporary table, uses a copy_to call to push them
> into a StringIO object (that's Python's version of an "in-memory file"),
> serializes that StringIO object, does a bz2 compression and transfers the
> whole block via VPN.
>
> I read on in this thread, and I scheduled to check on psycopg2 and what
> it is doing with cursors.

What about a SSH tunnel using data compression ?

If you fetch all rows from a query in one go, would it be fast ?

Also, PG can now COPY from a query, so you don't really need the temp table...
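A short sketch of that last point, again assuming psycopg2: on 8.2 and later COPY accepts a query, so an arbitrary result set can stream back in a single pass without a temp table (connection parameters are placeholders):

    import io
    import psycopg2

    conn = psycopg2.connect(host="db.example.org", dbname="munnin")  # placeholder
    cur = conn.cursor()

    buf = io.StringIO()
    cur.copy_expert('COPY (SELECT * FROM "tblItem") TO STDOUT', buf)
    rows = buf.getvalue().splitlines()   # tab-separated rows, fetched in one stream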
Hello Rainer,

initially I was surprised you did not match the non-CURSOR time with FETCH
100, but thinking about it a little the explanation is very simple - let's
analyze what's going on in both cases:

Without CURSOR:
1.) app calls PQexec() with "Query" and waits for the result
2.) PG sends the result to the app, data arriving grouped into the biggest
    possible packets, network latency is hidden by the huge amount of data per
    single send

With CURSOR and FETCH 100:
1.) app calls PQexec() with "BEGIN" and waits
2.) PG sends ok
3.) app calls PQexec() with "DECLARE cursor for Query" and waits
4.) PG sends ok
5.) app calls PQexec() with "FETCH 100" and waits
6.) PG sends the result of 100 rows to the app, data arriving grouped into the
    biggest possible packets, network latency is hidden by the huge data
    amount per single send
7.) no more data (as you have only 50 rows in the output) and the app calls
    PQexec() with "CLOSE cursor" and waits
8.) PG sends ok
9.) app calls PQexec() with "COMMIT" and waits
10.) PG sends ok

As you see the difference is huge, and each step adds your network latency
delay. So, with "FETCH 100" we save only the cost of steps 5 and 6 (the
default "FETCH 1" will loop here for all 50 rows, adding the latency delay 50
times again). But we cannot avoid the cost of the other steps as they need to
be executed one by one to keep the execution logic and clean error handling...

Hope it's more clear now and at least there is a choice :))

As well, if your query result will be 500 (for ex.) I think the difference
will be less important between non-CURSOR and "FETCH 500" execution...

Rgds,
-Dimitri

On 6/22/07, Rainer Bauer <usenet@munnin.com> wrote:
> Hello Dimitri,
>
> >Rainer, but did you try the initial query with FETCH_COUNT equal to 100?...
>
> Yes I tried it with different values and it's like you suspected:
>
> FETCH_COUNT   1     Time: 8642,000 ms
> FETCH_COUNT   5     Time: 2360,000 ms
> FETCH_COUNT  10     Time: 1563,000 ms
> FETCH_COUNT  25     Time: 1329,000 ms
> FETCH_COUNT  50     Time: 1140,000 ms
> FETCH_COUNT 100     Time:  969,000 ms
>
> \unset FETCH_COUNT  Time:  390,000 ms
>
> Rainer
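A rough tally of the steps above against Rainer's measurements: without a cursor there is essentially one wait on the server (~390 ms observed), while the cursor path needs about five (BEGIN, DECLARE, the single FETCH returning all 50 rows, CLOSE, COMMIT). At ~150 ms per round trip that is roughly 0.6 s of extra latency, which is in the right range for the 969 ms measured with FETCH_COUNT 100.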
PFC,
> What about a SSH tunnel using data compression ?
Setting that up on multiple Windows workstations in multiple installations is not possible.
> If you fetch all rows from a query in one go, would it be fast ?
I tried the same copy_to via VPN. It took 10-50x the time it took locally.
>Also, PG can now COPY from a query, so you don't really need the temp table...
I know, but I was stuck with 8.1 on some servers.
Best wishes,
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July to Wednesday 11th July. See you there!
Hello Dimitri,

>Hope it's more clear now and at least there is a choice :))
>As well, if your query result will be 500 (for ex.) I think the
>difference will be less important between non-CURSOR and "FETCH 500"
>execution...

The problem is that I am using ODBC and not libpq directly.

I will have to rewrite most of the queries and use temporary tables in some
places, but at least I know now what the problem was.

Thanks for your help.

Rainer
Rainer Bauer wrote:
> Hello Dimitri,
>
>> Hope it's more clear now and at least there is a choice :))
>> As well, if your query result will be 500 (for ex.) I think the
>> difference will be less important between non-CURSOR and "FETCH 500"
>> execution...
>
> The problem is that I am using ODBC and not libpq directly.

That opens up some questions. What ODBC driver are you using (with exact
version please).

Joshua D. Drake

> I will have to rewrite most of the queries and use temporary tables in some
> places, but at least I know now what the problem was.
>
> Thanks for your help.
>
> Rainer

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/
Hello Joshua,

>That opens up some questions. What ODBC driver are you using (with exact
>version please).

psqlODBC 8.2.4.2 (built locally).

I have restored the 8.2.4.0 from the official msi installer, but the results
are the same.

Rainer
Rainer, looking at the psqlODBC source code, it seems to work in a similar way
and has an option "SQL_ROWSET_SIZE" to execute the FETCH query the same way as
"FETCH_COUNT" in psql. Try to set it to 100 and let's see if it'll be better...

Rgds,
-Dimitri

On 6/22/07, Rainer Bauer <usenet@munnin.com> wrote:
> Hello Joshua,
>
> >That opens up some questions. What ODBC driver are you using (with exact
> >version please).
>
> psqlODBC 8.2.4.2 (built locally).
>
> I have restored the 8.2.4.0 from the official msi installer, but the results
> are the same.
>
> Rainer
Hello Dimitri,

>Rainer, looking at the psqlODBC source code, it seems to work in a similar way
>and has an option "SQL_ROWSET_SIZE" to execute the FETCH query the same way as
>"FETCH_COUNT" in psql. Try to set it to 100 and let's see if it'll be better...

But that is only for bulk fetching with SQLExtendedFetch() and does not work
for my case with a single bound column where each tuple is retrieved
individually by calling SQLFetch().

See <http://msdn2.microsoft.com/en-us/library/ms713591.aspx>

Rainer