Re: Query performance over a large proportion of data - Mailing list pgsql-performance

From Tom Lane
Subject Re: Query performance over a large proportion of data
Date
Msg-id 17384.1236730144@sss.pgh.pa.us
In response to Re: Query performance over a large proportion of data  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List pgsql-performance

"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> You probably need to reduce random_page_cost.  If your caching is
> complete enough, you might want to set it equal to seq_page_cost
> (never set it lower than seq_page_cost!) and possibly reduce both of
> these to 0.1.

> Some people have had good luck with boosting cpu_tuple_cost and
> cpu_index_tuple_cost.  (I've seen 0.5 for both recommended.)  I've
> never had to do that, but if the earlier suggestions don't get good
> plans, you might try that.

It might be worth pointing out here that all that matters is the
relative values of the various xxx_cost parameters.  If your DB is
mostly or entirely cached, you probably want to lower the estimated cost
of I/O relative to CPU work.  You can do that *either* by dropping the
seq_/random_page_costs, *or* by raising the cpu_xxx_costs (there are
more than two of those BTW).  Doing both, as Kevin's comments might be
read to suggest, is not useful ... and in particular I bet that having
seq_page_cost actually less than cpu_tuple_cost would lead to some
pretty wacko decision-making by the planner.
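
For illustration only (the particular numbers below are hypothetical,
not taken from this thread), the two alternatives might look like this,
with the 8.3 defaults noted in the comments:

    -- Option A: drop both page costs (keeping seq_page_cost above cpu_tuple_cost)
    SET seq_page_cost = 0.1;           -- default 1.0
    SET random_page_cost = 0.1;        -- default 4.0

    -- Option B: leave the page costs alone and scale all the CPU costs up
    SET cpu_tuple_cost = 0.05;         -- default 0.01
    SET cpu_index_tuple_cost = 0.025;  -- default 0.005
    SET cpu_operator_cost = 0.0125;    -- default 0.0025

Either set can be tried per-session with SET before settling on values
in postgresql.conf; the point is to adjust one side of the ratio, not
both.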

See
http://www.postgresql.org/docs/8.3/static/runtime-config-query.html
for some more info about what you're twiddling here.

            regards, tom lane
