Thread: pgsql vs odbc and speed: clarification required

pgsql vs odbc and speed: clarification required

From
Glenn
Date:
Hi all
I gather from my reading that using ODBC (specifically with Access as a
front end, but also generally) requires that apps be designed so that a
minimum of data is moved across the network, with most data manipulation
done through passthrough queries or on small recordsets. BUT are the
times below indicative of a problem?

It takes me about 66 seconds using psql to select about 40,000 records
from a slow (P150) remote machine and dump them locally to a 10 MB HTML
file using
psql -h server -d mydb <myqry.sql >myqry.html

BUT from Access it takes several hours. Using DECLARE and FETCH, the
first hundred records seem to be available after several minutes (say
30), but browsing to the last record is futile: no response after 6
hours, and if I read the log file correctly it had only selected 10,700
records by then.

PgAdminII (which I understand relies on ODBC) dies after about 30
minutes with an 'out of string space' error (presumably due to the size
of the result).

The reason I need to fetch 40K records is that they are used in an
INSERT INTO statement, which is unusable if I execute it from an Access
query (presumably because of the speed of the SELECT). It's quicker to
use psql to create a text file, import it, then export it across or
INSERT INTO from the local Access table (using a passthrough query takes
about 1 minute).
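(For the record, the workaround above can be sketched as a single
command; a hypothetical example, assuming the query lives in myqry.sql
and a tab-delimited file is what gets imported into Access:)

```shell
# Run the SELECT server-side and capture unaligned, tab-separated,
# tuples-only output suitable for importing into a local Access table.
psql -h server -d mydb -A -F '	' -t -f myqry.sql > myqry.txt
```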

After all that, the question is: should the ODBC link be _this_ slow,
given the network is not the problem, even though it's only 10 Mb and
the server is a dog? I.e. if psql can do it, is ODBC just slow, or do I
have a config or other problem?

--
Glenn <glenn@pip.com.au>


Re: pgsql vs odbc and speed: clarification required

From
"Henshall, Stuart - Design & Print"
Date:

Glenn wrote:
> Hi all
> I gather from my reading that using ODBC (specifically with Access as
> a front end, but also generally) requires that apps be designed so
> that a minimum of data is moved across the network, with most data
> manipulation done through passthrough queries or on small recordsets.
> BUT are the times below indicative of a problem?
>
> It takes me about 66 seconds using psql to select about 40,000 records
> from a slow (P150) remote machine and dump them locally to a 10 MB
> HTML file using psql -h server -d mydb <myqry.sql >myqry.html
>
> BUT from Access it takes several hours. Using DECLARE and FETCH, the
> first hundred records seem to be available after several minutes (say
> 30), but browsing to the last record is futile: no response after 6
> hours, and if I read the log file correctly it had only selected
> 10,700 records by then.
>
> PgAdminII (which I understand relies on ODBC) dies after about 30
> minutes with an 'out of string space' error (presumably due to the
> size of the result).
>
> The reason I need to fetch 40K records is that they are used in an
> INSERT INTO statement, which is unusable if I execute it from an
> Access query (presumably because of the speed of the SELECT). It's
> quicker to use psql to create a text file, import it, then export it
> across or INSERT INTO from the local Access table (using a passthrough
> query takes about 1 minute).
>
> After all that, the question is: should the ODBC link be _this_ slow,
> given the network is not the problem, even though it's only 10 Mb and
> the server is a dog? I.e. if psql can do it, is ODBC just slow, or do
> I have a config or other problem?
>
> --
> Glenn <glenn@pip.com.au>
>
It sounds like you might have logging or tracing switched on, which will
have a BIG impact on performance. Check under both the global and
datasource sections of the psqlODBC setup. Also check under the Tracing
tab of the ODBC manager that tracing is not enabled.
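(For what it's worth, the psqlODBC logging options can also be forced
off in a DSN-less connect string from Access; a rough sketch, assuming
the driver accepts the full option names Debug and CommLog, with the
server and database names taken from the post above:)

```
ODBC;DRIVER={PostgreSQL};SERVER=server;DATABASE=mydb;Debug=0;CommLog=0
```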

hth,
- Stuart