Thread: Query optimization problem

Query optimization problem

From: Peter
I'm not sure if I'm heading down the right alley - this seems too simple -
but here is my issue.

I have about 3000 records in 'mytable', and a simple

select * from mytable where x=1

is timed as:

Total query runtime: 2933 ms.
Data retrieval runtime: 791 ms.

EXPLAIN says it's a Seq Scan, but the filter expression I'm using returns
all rows from the table anyway. If I run

select * from mytable

Total query runtime: 3444 ms.
Data retrieval runtime: 771 ms.

At the same time:

select * into x from prl_user_entities

Query returned successfully with no result in 600 ms.

Why does SELECT take 3+ seconds to execute? Is it something to do with my
Postgres server optimization, or does pgAdmin not show the data retrieval
runtime correctly (does it leak over into query runtime or something)?

This is PG 8.1 on FreeBSD; the server is a fairly powerful PC.

Peter

Re: Query optimization problem

From: Richard Huxton
Peter wrote:
>
> Query returned successfully with no result in 600 ms.
>
> Why does SELECT take 3+ seconds to execute? Is it something to do with my
> Postgres server optimization, or does pgAdmin not show the data retrieval
> runtime correctly (does it leak over into query runtime or something)?

Data transfer and display time usually.

Try SELECT count(*) FROM ... instead and see if the times are closer.
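
For example, using the table and filter from your original post:

SELECT count(*) FROM mytable WHERE x = 1;

That runs the same scan on the server but returns only a single row, so
almost nothing has to be transferred to or rendered by the client.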

--
   Richard Huxton
   Archonet Ltd

Re: Query optimization problem

From: Dave Page
Richard Huxton wrote:
> Peter wrote:
>>
>> Query returned successfully with no result in 600 ms.
>>
>> Why does SELECT take 3+ seconds to execute? Is it something to do with my
>> Postgres server optimization, or does pgAdmin not show the data retrieval
>> runtime correctly (does it leak over into query runtime or something)?
>
> Data transfer and display time usually.
>
> Try SELECT count(*) FROM ... instead and see if the times are closer.
>

The display time in pgAdmin 1.6 and above is negligible (i.e.
microseconds). That's why it no longer shows two times as previous
versions did.

Regards, Dave.

Re: Query optimization problem

From: Ron Johnson
On 02/02/07 07:22, Richard Huxton wrote:
> Peter wrote:
>>
>> Query returned successfully with no result in 600 ms.
>>
>> Why does SELECT take 3+ seconds to execute? Is it something to do with my
>> Postgres server optimization, or does pgAdmin not show the data retrieval
>> runtime correctly (does it leak over into query runtime or something)?
>
> Data transfer and display time usually.
>
> Try SELECT count(*) FROM ... instead and see if the times are closer.

Or pipe the output to a file.
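
For example, in psql you could send the result to a file instead of the
screen (the filename here is arbitrary):

\o /tmp/mytable.out
SELECT * FROM mytable WHERE x = 1;
\o

If writing to a file is fast while pgAdmin is slow, the time is going into
rendering the result grid rather than running the query.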

Re: Query optimization problem

From: Peter
>> Query returned successfully with no result in 600 ms.
>>
>> Why does SELECT take 3+ seconds to execute? Is it something to do with my
>> Postgres server optimization, or does pgAdmin not show the data retrieval
>> runtime correctly (does it leak over into query runtime or something)?
>
> Data transfer and display time usually.
>
> Try SELECT count(*) FROM ... instead and see if the times are closer.


Correct. That executes in 300ms flat:

Total query runtime: 301 ms.
Data retrieval runtime: 380 ms.
1 rows retrieved.

Peter

Re: Query optimization problem

From: Peter
 >> I'm not sure if I'm heading down the right alley - this seems too simple -
 >> but here is my issue.
 >>
 >> I have about 3000 records in 'mytable', and a simple
 >

 > Number of rows is not the most important thing here - the number of
 > occupied disc pages is (you can have a lot of small rows or a small
 > number of large rows occupying the same space).

This table contains two varchar fields, that's all.

 >> select * from mytable where x=1
 >>
 >> is timed as:
 >>
 >> Total query runtime: 2933 ms.
 >> Data retrieval runtime: 791 ms.
 >>
 >> EXPLAIN says it's a Seq Scan, but the filter expression I'm using
 >> returns all rows from the table anyway. If I run
 >>
 >> select * from mytable
 >>
 >> Total query runtime: 3444 ms.
 >> Data retrieval runtime: 771 ms.
 >
 > Please post the EXPLAIN ANALYZE output for these here; it's difficult to
 > guess the cause without it.

QUERY PLAN
Seq Scan on mytable (cost=0.00..56.23 rows=2898 width=19) (actual time=0.012..5.762 rows=2898 loops=1)
   Filter: ((user_id)::text = 'test2'::text)
Total runtime: 10.014 ms
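
For comparison, the same query could also be timed from psql with \timing
turned on, which includes the transfer to the client but none of pgAdmin's
grid handling:

\timing
select * from mytable where x=1;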


 >> At the same time:
 >>
 >> select * into x from prl_user_entities
 >
 > Is this a different table or just a mistype?

Typo. Sorry.

 >
 >> Query returned successfully with no result in 600 ms.
 >>
 >> Why does SELECT take 3+ seconds to execute? Is it something to do with my
 >> Postgres server optimization, or does pgAdmin not show the data retrieval
 >> runtime correctly (does it leak over into query runtime or something)?
 >
 > As someone already pointed out, this overhead is probably caused by the
 > fact that the data have to be transmitted to the client in the first
 > case, but with 'SELECT INTO' almost no data are sent over the connection
 > (it all happens on the server).


I assumed the same thing. However, the 'data retrieval runtime' reported
by pgAdmin is really small compared to the 'query runtime'... I would
expect it to be the other way around.
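
To illustrate what is (and isn't) sent over the connection in the two
cases (the target table name below is just for illustration):

-- all ~2900 rows travel to the client
select * from mytable where x=1;

-- the result stays on the server ('mytable_copy' is an arbitrary name);
-- only a command status comes back to the client
select * into mytable_copy from mytable where x=1;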

Thanks!
Peter

Re: Query optimization problem

From: Tom Lane
Peter <peter@greatnowhere.com> writes:
>> Try SELECT count(*) FROM ... instead and see if the times are closer.

> Correct. That executes in 300ms flat:

So your problem is data transfer, not the query per se.

How old a pgAdmin are you using?

            regards, tom lane

Re: Query optimization problem

From: Peter
>>> Try SELECT count(*) FROM ... instead and see if the times are closer.
>
>> Correct. That executes in 300ms flat:
>
> So your problem is data transfer, not the query per se.

Well, based on the pgAdmin times I suspected some sort of heavy 'data
preparation before it's sent out' overhead.
>
> How old a pgAdmin are you using?

1.4.3

I tried 1.5 but it was too slow rendering results from SELECT queries...
maybe it's fixed by now

Peter

Re: Query optimization problem

From: Dave Page

> ------- Original Message -------
> From: Peter <peter@greatnowhere.com>
> To: pgsql-general@postgresql.org
> Sent: 03/02/07, 10:58:08
> Subject: Re: [GENERAL] Query optimization problem
>
> >>> Try SELECT count(*) FROM ... instead and see if the times are closer.
> >
> >> Correct. That executes in 300ms flat:
> >
> > So your problem is data transfer, not the query per se.
>
> Well, based on the pgAdmin times I suspected some sort of heavy 'data
> preparation before it's sent out' overhead.
> >
> > How old a pgAdmin are you using?
>
> 1.4.3
>
> I tried 1.5 but it was too slow rendering results from SELECT queries...
> maybe it's fixed by now

1.5 was development code - it could have been broken in any number of ways. Get 1.6.2 - it has redesigned grid
rendering code which eliminates rendering time from the equation.

Regards, Dave