Shams Khan wrote:
> *PERFORMANCE WAS BOOSTED UP DRASTICALLY* ---when I edited the
> work_mem to 100 MB---just look at the difference;
You only showed EXPLAIN output, which shows just the estimated
costs. As already suggested, try running both ways with EXPLAIN
ANALYZE -- that will show both the estimates and the actual row
counts and timings.
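For example (the table and column names here are placeholders, not
from your schema):

    -- Planner estimates only:
    EXPLAIN
      SELECT count(*) FROM orders WHERE customer_id = 42;

    -- Estimates plus actual row counts and timings. Note the
    -- query really runs, so wrap data-modifying statements in a
    -- transaction you can roll back:
    EXPLAIN ANALYZE
      SELECT count(*) FROM orders WHERE customer_id = 42;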
> One more thing Kevin, could you please help me out to understand
> how did calculate those parameters?
My own experience, and reading about the experiences of others. If
you follow the pgsql-performance list, you will develop a better
"gut feel" for these issues and pick up techniques for problem
solving. Speaking of which, that would have been a better list to
post this on. The one actual calculation I did was to make sure
work_mem was less than RAM * 0.25 / max_connections. I didn't go
all the way up to that number because 100MB is enough for most
purposes and your database isn't very much smaller than your RAM.
You know, the melding of a routine calculation with gut feel. :-)
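To illustrate with made-up numbers (8GB of RAM and 20 connections
are hypothetical, not taken from your setup):

    -- work_mem ceiling = RAM * 0.25 / max_connections
    -- 8GB * 0.25 / 20 connections = ~102MB
    SELECT pg_size_pretty(
        (8::bigint * 1024 * 1024 * 1024 * 0.25 / 20)::bigint);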
> Without more info, there's a bit of guesswork, but...
> What extra info is required...please let me know...
The main things I felt I was missing were a description of your
overall workload and EXPLAIN ANALYZE output from a "typical" slow
query.
There's a page about useful information to post, though:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Now that you have somewhat reasonable tuning for the overall
server, you can look at the EXPLAIN ANALYZE output of queries which
don't run as fast as you think they should, and see what
adjustments to the cost factors you might need to make. With the
numbers you previously gave, a wild guess would be that you'll get
generally faster run times with these settings:
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.5
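Since SET only affects the current connection, you can try those in
a single session before touching postgresql.conf (a sketch; the
values are the same wild guess as above, not settled numbers):

    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;
    SET cpu_tuple_cost = 0.5;
    -- Then re-run the slow query under EXPLAIN ANALYZE and
    -- compare the actual times against the earlier run.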
Be sure to look at actual run times, not EXPLAIN cost estimates.
-Kevin