2011/12/7 Raj Mathur (राज माथुर) <raju@linux-delhi.org>:
> QUERY PLAN
>
> ---------------------------------------------------------------------------------------------------------------------------------------------
> Limit (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054
> rows=100 loops=1)
> -> Sort (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926
> rows=100 loops=1)
> Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
> Sort Method: top-N heapsort Memory: 42kB
> -> Merge Join (cost=2.95..46733.54 rows=1272 width=109) (actual
> time=0.070..3799.546 rows=168307 loops=1)
Two things to look at here. First, the estimated and actual row counts
differ by a factor of over 100 (1272 estimated vs. 168307 returned),
which means the query planner may be making suboptimal choices about
the plan it runs. Try increasing the statistics target on the columns
involved in the join and sort, re-analyze the tables, and see if you
get a closer estimate. Second, to test whether the merge join is
really the best choice, you can use the enable_xxx settings (in this
case set enable_mergejoin=off), run the query again under
explain analyze, and see if performance gets any better.
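Roughly like this (a sketch only -- the table/column names are taken
from the sort key in your plan, and 1000 is just an example target;
substitute the columns your query actually joins and sorts on):

```sql
-- Raise the per-column statistics target (default is usually 100),
-- then re-analyze so the planner sees the new histogram:
ALTER TABLE cdr ALTER COLUMN calldate SET STATISTICS 1000;
ANALYZE cdr;

-- For this session only, forbid merge joins so the planner is forced
-- to pick its next-best strategy (hash or nested-loop join):
SET enable_mergejoin = off;
EXPLAIN ANALYZE
  -- ... your original query here ...
;
RESET enable_mergejoin;
```

Note that enable_mergejoin=off is a diagnostic tool, not a fix -- if
the alternative plan wins, that usually points back at the bad row
estimate rather than something you should set permanently.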