Thread: slow query

slow query

From
Sumeet
Date:
Hi all,

I ran a simple query like the one shown below, and it takes a long time to
execute on this table. The table has 48 million records, and I'm worried
about it because I need to join it with another table of around 13 million
records. I've tried vacuuming this table many times, but it didn't help
much. Is there any other trick to speed it up? Once I tried deleting and
restoring a table, and that worked perfectly, but only for a small table of
up to 5-6 million records.

sm=> explain analyze select * from ma limit 10;
                                                             QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340 rows=10 loops=1)
   ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual time=21985.285..22204.308 rows=10 loops=1)
 Total runtime: 22204.476 ms
(3 rows)

-- 
Thanks,
Sumeet
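
For reference, one quick way to check whether the table itself is bloated
(a sketch, not from the original thread; it assumes PostgreSQL 8.1 or
later, where pg_relation_size() is available) is to compare the table's
on-disk size against its estimated row count:

sm=> SELECT pg_relation_size('ma') AS table_bytes,
sm->        reltuples::bigint AS est_rows,
sm->        pg_relation_size('ma') / GREATEST(reltuples, 1) AS bytes_per_row
sm->   FROM pg_class
sm->  WHERE relname = 'ma';

If bytes_per_row comes out much larger than the tuple width EXPLAIN reports
(width=76 here, plus roughly 30 bytes of per-row header overhead), the
table is carrying a lot of dead space.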

Re: slow query

From
"A. Kretschmer"
Date:
On Wed, 04.04.2007, at 23:17:54 -0400, Sumeet wrote the following:
> sm=> explain analyze select * from ma limit 10;
>                                                              QUERY PLAN
> -------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340
> rows=10 loops=1)
>    ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual
> time=21985.285..22204.308 rows=10 loops=1)
>  Total runtime: 22204.476 ms
> (3 rows)

which version?

I have 8.1, and on a table with 1.9 million records I got:


scholl=# explain analyse select * from bde_meldungen limit 10;
                                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..0.34 rows=10 width=157) (actual time=19.570..19.660 rows=10 loops=1)
   ->  Seq Scan on bde_meldungen  (cost=0.00..65748.33 rows=1957933 width=157) (actual time=19.565..19.587 rows=10 loops=1)
 Total runtime: 19.845 ms
(3 rows)


If I remember correctly, LIMIT handling improved massively in 8.0 or 8.1.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net


Re: slow query

From
Tom Lane
Date:
"A. Kretschmer" <andreas.kretschmer@schollglas.com> writes:
> On Wed, 04.04.2007, at 23:17:54 -0400, Sumeet wrote the following:
>> sm=> explain analyze select * from ma limit 10;
>> QUERY PLAN
>> -------------------------------------------------------------------------------------------------------------------------------------
>> Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340
>> rows=10 loops=1)
>> ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual
>> time=21985.285..22204.308 rows=10 loops=1)
>> Total runtime: 22204.476 ms
>> (3 rows)

> which version?

I'm betting the problem is poor vacuuming practice leading to lots of
dead space.  There's no way it takes 22 sec to read 10 rows if the
table is reasonably dense.
        regards, tom lane
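
(As a sanity check on that diagnosis: assuming the default cost settings,
the Seq Scan cost of 2181956.92 implies roughly 1.7 million 8 KB pages,
about 13 GB, while 48 million rows of width 76 plus per-row overhead should
fit in roughly a third of that. A hedged way to see the dead space
directly is:

sm=> VACUUM VERBOSE ma;

whose INFO lines report, per table, how many removable and nonremovable row
versions were found in how many pages; the exact wording varies by
release.)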


Re: slow query

From
"A. Kretschmer"
Date:
On Thu, 05.04.2007, at 1:27:25 -0400, Tom Lane wrote the following:
> "A. Kretschmer" <andreas.kretschmer@schollglas.com> writes:
> > On Wed, 04.04.2007, at 23:17:54 -0400, Sumeet wrote the following:
> >> sm=> explain analyze select * from ma limit 10;
> >> QUERY PLAN
> >> -------------------------------------------------------------------------------------------------------------------------------------
> >> Limit  (cost=0.00..0.45 rows=10 width=76) (actual time=21985.292..22204.340
> >> rows=10 loops=1)
> >> ->  Seq Scan on ma  (cost=0.00..2181956.92 rows=48235392 width=76) (actual
> >> time=21985.285..22204.308 rows=10 loops=1)
> >> Total runtime: 22204.476 ms
> >> (3 rows)
> 
> > which version?
> 
> I'm betting the problem is poor vacuuming practice leading to lots of
> dead space.  There's no way it takes 22 sec to read 10 rows if the
> table is reasonably dense.

This was my first thought, but:


,----[  Quote  ]
| I've tried vacuuming this table many times
`----


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net


Re: slow query

From
Tom Lane
Date:
"A. Kretschmer" <andreas.kretschmer@schollglas.com> writes:
> On Thu, 05.04.2007, at 1:27:25 -0400, Tom Lane wrote the following:
>> I'm betting the problem is poor vacuuming practice leading to lots of
>> dead space.  There's no way it takes 22 sec to read 10 rows if the
>> table is reasonably dense.

> This was my first thought, but:

> ,----[  Quote  ]
> | I've vacuumed this table many times
> `----

[ shrug... ]  If the damage is already done, lazy VACUUM won't fix it.
        regards, tom lane
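
To actually reclaim the space once a table is already bloated, the usual
options (a sketch, not from the thread; both rewrite the table and hold an
exclusive lock for the duration, which is significant on 48 million rows)
are:

-- Compact the table in place, returning freed space to the filesystem.
sm=> VACUUM FULL ma;

-- Or rewrite the table in index order; "ma_pkey" is a hypothetical
-- index name here.  (This is the pre-8.3 syntax; 8.3 and later use
-- CLUSTER ma USING ma_pkey instead.)
sm=> CLUSTER ma_pkey ON ma;

A dump and reload of the table achieves the same compaction, which is
presumably why the delete-and-restore trick mentioned earlier worked.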


Re: slow query

From
Andrew Sullivan
Date:
On Thu, Apr 05, 2007 at 01:47:03AM -0400, Tom Lane wrote:
> [ shrug... ]  If the damage is already done, lazy VACUUM won't fix it.
> 

Also, if there are enough open transactions at any one time and
sufficient churn in the table, lazy VACUUM may not be able to keep
up.  (We had that experience with a table that was updated _very
very_ often.  The answer turned out to be to update less often. 
Aggregating queries that could use an index over a large number of
"expired" rows worked better than seqscans over large numbers of dead
tuples.)
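
(To check for that situation, a sketch against the 8.x-era system view --
column names such as procpid and current_query changed in later releases,
and on 8.1 current_query is only populated if stats_command_string is
enabled:

sm=> SELECT procpid, usename, query_start, current_query
sm->   FROM pg_stat_activity
sm->  ORDER BY query_start;

Sessions sitting "<IDLE> in transaction" with an old query_start are the
usual culprits that keep VACUUM from reclaiming dead rows.)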

A
-- 
Andrew Sullivan  | ajs@crankycanuck.ca
Information security isn't a technological problem.  It's an economics
problem.    --Bruce Schneier