Thread: VACUUM ANALYZE slows down query

VACUUM ANALYZE slows down query

From: werner fraga
Certain queries on my database get slower after
running a VACUUM ANALYZE. Why would this happen, and
how can I fix it?

I am running PostgreSQL 7.4.2 (I have also seen this
problem on versions 7.3 and 8.0).

Here is a sample query that exhibits this behaviour
(this one goes from 1 second before VACUUM ANALYZE to
2 seconds after; other queries go from 20 seconds
before to 800 seconds after):

==================================================

select ToolRepairRequest.RequestID,
       (Select count(ToolHistory.HistoryID) from ToolHistory
        where ToolRepairRequest.RepairID = ToolHistory.RepairID) as CountOfTH
from ((ToolRepairRequest
  LEFT JOIN (ToolRepair
    LEFT JOIN ToolHistory on (ToolRepair.RepairID = ToolHistory.RepairID))
    on (ToolRepairRequest.RepairID = ToolRepair.RepairID))
  LEFT JOIN ServiceOrder ON (ToolRepairRequest.ServiceOrderID = ServiceOrder.ServiceOrderID))
LEFT JOIN Tool ON (ToolRepairRequest.ToolID = Tool.ID)
where (ToolRepairRequest.StationID = 1303)

==================================================

Here are the EXPLAIN ANALYZE results:

Before VACUUM ANALYZE:

==================================================


                                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=3974.74..48055.42 rows=79 width=8) (actual time=359.751..1136.165 rows=1518 loops=1)
   ->  Nested Loop Left Join  (cost=3974.74..6175.84 rows=78 width=12) (actual time=359.537..1023.404 rows=1518 loops=1)
         ->  Merge Right Join  (cost=3974.74..5705.83 rows=78 width=16) (actual time=359.516..991.826 rows=1518 loops=1)
               Merge Cond: ("outer".repairid = "inner".repairid)
               ->  Merge Left Join  (cost=3289.68..4949.83 rows=27907 width=4) (actual time=302.058..840.706 rows=28000 loops=1)
                     Merge Cond: ("outer".repairid = "inner".repairid)
                     ->  Index Scan using toolrepair_pkey on toolrepair  (cost=0.00..1175.34 rows=26485 width=4) (actual time=0.063..130.516 rows=26485 loops=1)
                     ->  Sort  (cost=3289.68..3359.44 rows=27906 width=4) (actual time=301.965..402.228 rows=27906 loops=1)
                           Sort Key: toolhistory.repairid
                           ->  Seq Scan on toolhistory  (cost=0.00..1229.06 rows=27906 width=4) (actual time=0.009..116.441 rows=27906 loops=1)
               ->  Sort  (cost=685.06..685.24 rows=74 width=16) (actual time=26.490..36.454 rows=1518 loops=1)
                     Sort Key: toolrepairrequest.repairid
                     ->  Seq Scan on toolrepairrequest  (cost=0.00..682.76 rows=74 width=16) (actual time=0.039..20.506 rows=1462 loops=1)
                           Filter: (stationid = 1303)
         ->  Index Scan using serviceorder_pkey on serviceorder  (cost=0.00..6.01 rows=1 width=4) (actual time=0.008..0.009 rows=0 loops=1518)
               Index Cond: ("outer".serviceorderid = serviceorder.serviceorderid)
   ->  Index Scan using tool_pkey on tool  (cost=0.00..6.01 rows=1 width=4) (actual time=0.013..0.018 rows=1 loops=1518)
         Index Cond: ("outer".toolid = tool.id)
   SubPlan
     ->  Aggregate  (cost=524.17..524.17 rows=1 width=4) (actual time=0.032..0.035 rows=1 loops=1518)
           ->  Index Scan using th_repair_key on toolhistory  (cost=0.00..523.82 rows=140 width=4) (actual time=0.013..0.018 rows=1 loops=1518)
                 Index Cond: ($0 = repairid)
 Total runtime: 1147.350 ms
(23 rows)

==================================================


and after VACUUM ANALYZE:

==================================================


                                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Left Join  (cost=18310.59..29162.44 rows=1533 width=8) (actual time=1886.942..2183.774 rows=1518 loops=1)
   Merge Cond: ("outer".toolid = "inner".id)
   ->  Sort  (cost=15110.46..15114.29 rows=1532 width=12) (actual time=1534.319..1539.461 rows=1518 loops=1)
         Sort Key: toolrepairrequest.toolid
         ->  Nested Loop Left Join  (cost=4050.79..15029.41 rows=1532 width=12) (actual time=410.948..1527.360 rows=1518 loops=1)
               ->  Merge Right Join  (cost=4050.79..5800.48 rows=1532 width=16) (actual time=410.926..1488.229 rows=1518 loops=1)
                     Merge Cond: ("outer".repairid = "inner".repairid)
                     ->  Merge Left Join  (cost=3289.68..4946.79 rows=27907 width=4) (actual time=355.606..1321.320 rows=28000 loops=1)
                           Merge Cond: ("outer".repairid = "inner".repairid)
                           ->  Index Scan using toolrepair_pkey on toolrepair  (cost=0.00..1172.67 rows=26485 width=4) (actual time=0.108..235.096 rows=26485 loops=1)
                           ->  Sort  (cost=3289.68..3359.44 rows=27906 width=4) (actual time=355.460..519.987 rows=27906 loops=1)
                                 Sort Key: toolhistory.repairid
                                 ->  Seq Scan on toolhistory  (cost=0.00..1229.06 rows=27906 width=4) (actual time=0.016..129.811 rows=27906 loops=1)
                     ->  Sort  (cost=761.11..764.83 rows=1487 width=16) (actual time=30.447..35.695 rows=1518 loops=1)
                           Sort Key: toolrepairrequest.repairid
                           ->  Seq Scan on toolrepairrequest  (cost=0.00..682.76 rows=1487 width=16) (actual time=0.039..23.852 rows=1462 loops=1)
                                 Filter: (stationid = 1303)
               ->  Index Scan using serviceorder_pkey on serviceorder  (cost=0.00..6.01 rows=1 width=4) (actual time=0.009..0.010 rows=0 loops=1518)
                     Index Cond: ("outer".serviceorderid = serviceorder.serviceorderid)
   ->  Sort  (cost=3200.13..3267.24 rows=26844 width=4) (actual time=352.324..453.352 rows=24746 loops=1)
         Sort Key: tool.id
         ->  Seq Scan on tool  (cost=0.00..1225.44 rows=26844 width=4) (actual time=0.024..126.826 rows=26844 loops=1)
   SubPlan
     ->  Aggregate  (cost=6.98..6.98 rows=1 width=4) (actual time=0.038..0.042 rows=1 loops=1518)
           ->  Index Scan using th_repair_key on toolhistory  (cost=0.00..6.97 rows=2 width=4) (actual time=0.016..0.021 rows=1 loops=1518)
                 Index Cond: ($0 = repairid)
 Total runtime: 2191.401 ms
(27 rows)

==================================================

Thanks for any assistance.

Walt






Re: VACUUM ANALYZE slows down query

From: John Arbash Meinel
werner fraga wrote:

>Certain queries on my database get slower after
>running a VACUUM ANALYZE. Why would this happen, and
>how can I fix it?
>
>I am running PostgreSQL 7.4.2 (I have also seen this
>problem on versions 7.3 and 8.0).
>
>Here is a sample query that exhibits this behaviour
>(this one goes from 1 second before VACUUM ANALYZE to
>2 seconds after; other queries go from 20 seconds
>before to 800 seconds after):
>
>
>
First, please attach your EXPLAIN ANALYZE output as a text file
rather than pasting it inline; that prevents wrapping and makes it
easier to read.

Second, the problem is that the planner *is* now getting a more
accurate estimate of the number of rows that will be returned.
Compare:

Plan 1:


> Nested Loop Left Join  (cost=3974.74..48055.42 rows=79 width=8) (actual time=359.751..1136.165 rows=1518 loops=1)
The planner was expecting 79 rows, but was actually getting 1518.

Plan 2:


> Merge Left Join  (cost=18310.59..29162.44 rows=1533 width=8) (actual time=1886.942..2183.774 rows=1518 loops=1)
It predicted 1533 rows and found 1518, a pretty good guess.

So the big question is why the planner thinks a nested loop is going
to be more expensive than a merge join. That I don't really know. I'm
guessing parameters like random_page_cost could be tweaked, but I
don't really know what criteria postgres uses to choose between merge
joins and nested loop joins.
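
If you want to poke at it, something along these lines (untested, and
session-local, so nothing sticks) would show what the planner's second
choice costs:

    -- Untested sketch: disable merge joins for this session and re-run
    -- the query, so EXPLAIN ANALYZE shows the alternative plan's cost.
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE <your query here>;
    RESET enable_mergejoin;

    -- random_page_cost (default 4.0) can also be set per-session; the
    -- 2.0 below is just an example value, not a recommendation.
    SET random_page_cost = 2.0;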

>Thanks for any assistance.
>
>Walt
>
>
Hopefully someone can help a little better. In the meantime, you might
want to resend with an attachment. I know I had trouble reading your
EXPLAIN ANALYZE output.

John
=:->



Re: VACUUM ANALYZE slows down query

From: Tom Lane
John Arbash Meinel <john@arbash-meinel.com> writes:
> So the big issue is why does the planner think that a nested loop is
> going to be more expensive than a merge join. That I don't really know.

Well, with the increased (and much more accurate) rowcount estimate,
the estimated cost of the nestloop naturally went up a lot: it's
proportional to the number of rows involved.  It appears that the
estimated cost of the mergejoin actually went *down* quite a bit
(else it'd have been selected the first time too).  That seems odd to
me.  AFAIR the only reason that would happen is that given stats about
the distributions of the two join keys, the planner can recognize that
one side of the merge may not need to be run to completion --- for
example if one column ranges from 1..100 and the other only from 1..40,
you never need to look at the values 41..100 in the first table.

You can see in the explain output that this is indeed happening to some
extent:

   ->  Sort  (cost=3200.13..3267.24 rows=26844 width=4) (actual time=352.324..453.352 rows=24746 loops=1)
         Sort Key: tool.id
         ->  Seq Scan on tool  (cost=0.00..1225.44 rows=26844 width=4) (actual time=0.024..126.826 rows=26844 loops=1)

Only 24746 of the 26844 tool rows ever got read from the sort node (and
even that is probably overstating matters; if there are duplicate toolid
values in the lefthand input, as seems likely, then the same rows will
be pulled from the sort node multiple times).  However, when both sides
of the merge are being explicitly sorted, as is happening here, then not
running one side to completion does not save you much at all (since you
had to do the sort anyway).  The early-out trick only really wins when
you can quit early on a more incremental subplan, such as an indexscan.
So I'm pretty surprised that the planner made this pair of choices.
The estimated cost of the mergejoin shouldn't have changed much with the
addition of statistics, and so ISTM it should have been picked the first
time too.
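
For what it's worth, the early-out effect is easy to demonstrate in
isolation.  A toy sketch (it assumes generate_series(), i.e. 8.0 or
later, and made-up table names; the enable_* settings force the plan,
since on tables this small the planner would normally pick something
else):

    create table small_t (k int primary key);
    create table big_t   (k int primary key);
    insert into small_t select generate_series(1, 40);
    insert into big_t   select generate_series(1, 100000);
    analyze small_t;
    analyze big_t;
    -- force a mergejoin over indexscans so we can watch it quit early:
    set enable_seqscan  = off;
    set enable_hashjoin = off;
    set enable_nestloop = off;
    explain analyze select * from small_t join big_t using (k);

With indexscans on both inputs, the scan of big_t can stop as soon as
it passes k = 40 instead of reading all 100000 rows.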

Walt, is there anything proprietary about the contents of these tables?
If you'd be willing to send me a dump off-list, I'd like to dig through
what the planner is doing here.  There may be a bug somewhere in the
cost estimation code.

            regards, tom lane

Re: VACUUM ANALYZE slows down query

From: Tom Lane
I wrote:
> Well, with the increased (and much more accurate) rowcount estimate,
> the estimated cost of the nestloop naturally went up a lot: it's
> proportional to the number of rows involved.  It appears that the
> estimated cost of the mergejoin actually went *down* quite a bit
> (else it'd have been selected the first time too).  That seems odd to
> me.

Nah, I just can't count :-(.  What I forgot about was the sub-select in
the output list:

>> select ToolRepairRequest.RequestID,
>>        (Select count(ToolHistory.HistoryID) from ToolHistory
>>         where ToolRepairRequest.RepairID = ToolHistory.RepairID) as CountOfTH

which shows up in the (un-analyzed) EXPLAIN output here:

   SubPlan
     ->  Aggregate  (cost=524.17..524.17 rows=1 width=4) (actual time=0.032..0.035 rows=1 loops=1518)
           ->  Index Scan using th_repair_key on toolhistory  (cost=0.00..523.82 rows=140 width=4) (actual time=0.013..0.018 rows=1 loops=1518)
                 Index Cond: ($0 = repairid)

Now in this case the planner is estimating 79 rows out, so the estimated
cost of the nestloop plan includes a charge of 79*524.17 for evaluating
the subplan.  If we discount that then the estimated cost of the
nestloop plan is 3974.74..6645.99 (48055.42-79*524.17).

In the ANALYZEd case the subplan is estimated to be a lot cheaper:

   SubPlan
     ->  Aggregate  (cost=6.98..6.98 rows=1 width=4) (actual time=0.038..0.042 rows=1 loops=1518)
           ->  Index Scan using th_repair_key on toolhistory  (cost=0.00..6.97 rows=2 width=4) (actual time=0.016..0.021 rows=1 loops=1518)
                 Index Cond: ($0 = repairid)

It's estimated to be needed 1533 times, but that still adds up to less
of a charge than before.  Discounting that, the mergejoin plan was
estimated at 18310.59..18462.10 (29162.44 - 1533*6.98).  So it's not
true that the estimated cost of the join went down in the ANALYZEd case.
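
The subtractions are easy to sanity-check in psql (numbers taken
straight from the two plans above):

    select 48055.42 - 79 * 524.17;    -- 6645.99   (nestloop, un-analyzed)
    select 29162.44 - 1533 * 6.98;    -- 18462.10  (mergejoin, analyzed)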

Werner sent me a data dump off-list, and trawling through the planner I
got these numbers for the estimated costs without the output subquery:

without any statistics:
    mergejoin cost    9436.42 .. 9571.81
    nestloop cost    3977.74 .. 6700.71

with statistics:
    mergejoin cost    18213.04 .. 18369.73
    nestloop cost     4054.93 .. 24042.85

(these are a bit different from his results because of different ANALYZE
samples etc., but close enough)

So the planner isn't going crazy: in each case it chose what seemed the
cheapest total-cost plan.

            regards, tom lane