Re: huge execution time difference with almost same plan - Mailing list pgsql-hackers

From Tom Lane
Subject Re: huge execution time difference with almost same plan
Msg-id 15928.1094335359@sss.pgh.pa.us
In response to huge execution time difference with almost same plan  (Tatsuo Ishii <t-ishii@sra.co.jp>)
List pgsql-hackers
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> I've been playing with OSDL DBT-3 for a while and found a very strange
> phenomenon.

What can you tell us about the physical ordering of the tables involved?
The only way I can think of to explain these results is that "orders"
is more or less physically in order by its primary key, and "lineitem"
is also more or less ordered by the orders primary key (which might or
might not be the same as l_orderkey; I forget the schema of that
database).
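
(For what it's worth, the planner's own estimate of that correlation is
visible in pg_stats; the column names here assume the standard
DBT-3/TPC-H schema:)

```sql
-- Planner's physical-order correlation estimates, in [-1, 1]; a value
-- near +/-1 means the column is nearly in physical table order.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('orders', 'lineitem')
  AND attname IN ('o_orderkey', 'o_orderdate', 'l_orderkey');
```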

You have

>                ->  Index Scan using orders_pkey on orders  (cost=0.00..65387.00 rows=53357 width=30) (actual time=44.687..36364.312 rows=57306 loops=1)
>                      Filter: ((o_orderdate >= '1997-10-01'::date) AND (o_orderdate < '1998-01-01'::date))

versus

>                ->  Index Scan using i_o_orderdate on orders  (cost=0.00..46100.56 rows=53357 width=30) (actual time=215.502..650031.049 rows=57306 loops=1)
>                      Index Cond: ((o_orderdate >= '1997-10-01'::date) AND (o_orderdate < '1998-01-01'::date))

Now in the first case that's going to be a full-table scan because
there are no index constraints.  The second one actually is being
selective and so should visit less than all the table rows.  The
only way the first one can be twenty times faster is if it's doing
nearly sequential access to the table while the other one is jumping
all over the place.

(BTW, what happens if you take your thumb off the scales and allow
the planner to use a seqscan like it wanted to?  Seems like that
should be faster than a full-table index scan.)
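
(Something along these lines, assuming the scan choice was forced by
fiddling planner settings; exactly which knobs you touched is a guess
on my part:)

```sql
-- Put the planner settings back to their defaults and re-time the query
RESET random_page_cost;
RESET enable_seqscan;
EXPLAIN ANALYZE
SELECT ...;   -- the original DBT-3 query
```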

The costs for the other table are striking too:

>                ->  Index Scan using i_l_orderkey on lineitem  (cost=0.00..3.14 rows=3 width=11) (actual time=4.628..4.628 rows=1 loops=57306)
>                      Index Cond: ("outer".o_orderkey = lineitem.l_orderkey)
>                      Filter: (l_commitdate < l_receiptdate)

versus

>                ->  Index Scan using i_l_orderkey on lineitem  (cost=0.00..3.14 rows=3 width=11) (actual time=22.064..22.064 rows=1 loops=57306)
>                      Index Cond: ("outer".o_orderkey = lineitem.l_orderkey)
>                      Filter: (l_commitdate < l_receiptdate)

These show the identical plan, but the second case is five times slower
(which adds up to a lot over 57000+ iterations).  Again, I think the
only possible explanation is that the first plan is benefiting from
locality of access while the second is not.  That would have to be due
to the order of the probes in the first case being well-matched to the
physical table order whereas in the second case not.

Can you look at the data and see whether the pkey is correlated to
physical order while the orderdate is not?  (That seems a bit weird
but it's the only explanation I can see.)
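
(A crude way to eyeball that, again assuming the usual DBT-3 column
names, is to scan the table in physical order and watch the key values:)

```sql
-- ctid reflects physical position; if o_orderkey climbs monotonically
-- here while o_orderdate jumps around, that would confirm the theory.
SELECT ctid, o_orderkey, o_orderdate
FROM orders
LIMIT 20;
```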

BTW, the planner is aware of physical order considerations; it would
not be smart enough to realize the correlation between physical order of
two different tables, but it would have accounted for this in its
preference of pkey over orderdate for scanning the orders table.
That's why you had to knock random_page_cost down so far in order
to mislead it into choosing the slower scan...
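
(That is, something like the following, if I'm guessing your method
right; 4.0 is the default:)

```sql
-- Pushing random_page_cost far below the default tells the planner that
-- random I/O is nearly free, which is what made the orderdate index
-- scan look cheaper than the pkey scan.
SET random_page_cost = 1.0;
EXPLAIN SELECT ...;   -- the original DBT-3 query
RESET random_page_cost;
```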
        regards, tom lane

