On Mon, Aug 29, 2005 at 11:07:17AM -0700, asif ali wrote:
> The database is on the same system.
> What I am doing is only "VACUUM analyze
> conversion_table"
>
> I did the same thing on a newly created database.
> And got the same result. So after "VACUUM analyze"
> performance dropped.
> Please see this. Runtime changes from "7755.115" to
> "14859.291" ms
As has been pointed out a couple of times, you're getting a different
plan after VACUUM ANALYZE because the row count estimates are more
accurate. Unfortunately the more accurate estimates result in a
query plan that's slower than the plan for the less accurate
estimates. PostgreSQL *thinks* the plan will be faster but your
results show that it isn't, so you might need to adjust some of the
planner's cost constants.
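As a sketch of how you might experiment (the setting names are standard
PostgreSQL GUCs, but the value below is an arbitrary trial value, not a
recommendation for your system), you can change a cost constant for the
current session only and re-time the query before touching
postgresql.conf:

```sql
-- Trial value only; the default is 4.0. Lower values make the planner
-- favor index scans over sequential scans.
SET random_page_cost = 2.0;

-- Re-run your query under EXPLAIN ANALYZE to see whether the plan
-- (and the actual runtime) changes:
EXPLAIN ANALYZE <your query here>;

-- Undo the experiment for this session:
RESET random_page_cost;
```

A per-session SET is safer for experimentation because it doesn't
affect other connections; once you find a value that helps, you can
make it permanent in postgresql.conf.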
I asked some questions that you didn't answer, so I'll ask them again:
What's your effective_cache_size setting?
What's your work_mem (8.x) or sort_mem (7.x) setting?
What's your random_page_cost setting?
How much available RAM does the machine have?
What version of PostgreSQL are you running?
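You can read all of these from a psql session; this is just a sketch of
how to gather the answers (note that work_mem exists only in 8.x, so on
7.x you'd use sort_mem instead):

```sql
SHOW effective_cache_size;
SHOW work_mem;           -- 8.x only; on 7.x use: SHOW sort_mem;
SHOW random_page_cost;
SELECT version();        -- reports the PostgreSQL version
```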
Various tuning guides give advice on how to set the above and other
configuration variables. Here's one such guide:
http://www.powerpostgresql.com/PerfList/
--
Michael Fuhr