Hi Tom,
Thanks for the help.
>The major issue seems to be in the sub-selects:
> -> Seq Scan on merchant_purchase mp (cost=0.00..95.39 rows=44 width=4) (actual time=2.37..2.58 rows=6 loops=619)
> Filter: (merchant_id = $0)
>where the estimated row count is a factor of 7 too high. If the
>estimated row count were even a little lower, it'd probably have gone
>for an indexscan.
I understand that the sub-selects are taking up most of the time because they do a sequential scan on the merchant_purchase table.
>You might get some results from increasing the
>statistics target for merchant_purchase.merchant_id.
Do I have to run VACUUM ANALYZE to update the statistics? If so, I have already tried that, and it doesn't seem to help.
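For reference, this is roughly what I ran, plus what I am assuming the statistics-target suggestion translates to (the column name is taken from the plan above; please correct me if a target of 100 is a bad choice):

    -- What I have already tried:
    VACUUM ANALYZE merchant_purchase;

    -- My guess at raising the statistics target for
    -- merchant_purchase.merchant_id, then re-analyzing:
    ALTER TABLE merchant_purchase
        ALTER COLUMN merchant_id SET STATISTICS 100;
    ANALYZE merchant_purchase;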
>If that doesn't help, I'd think about reducing random_page_cost a little bit.
I am sorry, but I am not familiar with random_page_cost, as I am new to Postgres. What does it signify, and how do I reduce it?
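From a quick look at the documentation, I am guessing it is a planner cost setting that I could lower per session and test like this (this is only my guess at what you meant, not something I have verified):

    -- Check the current value (I believe the default is 4):
    SHOW random_page_cost;

    -- Lower it a little for this session only, then re-run
    -- EXPLAIN ANALYZE on the slow query to compare plans:
    SET random_page_cost = 3;

Is that the right idea, or does it need to be set in postgresql.conf to take effect for the application?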
Thanks,
Saranya