Re: Performance - Mailing list pgsql-performance

From Kevin Grittner
Subject Re: Performance
Msg-id 4DA56DA8020000250003C783@gw.wicourts.gov
In response to Re: Performance  (Ogden <lists@darkstatic.com>)
Responses Re: Performance  (Claudio Freire <klaussfreire@gmail.com>)
List pgsql-performance
Ogden <lists@darkstatic.com> wrote:

> In conclusion, I should keep my random_page_cost (3.0) to a value
> more than seq_page_cost (1.0)? Is this bad practice or will this
> suffice for my setup (where the database is much bigger than the
> RAM in the system)?

The idea is to adjust the costing factors to model the actual
relative costs of the various actions in your environment with your
workload.  The best way to determine whether your settings are good
is to gauge how happy those using the database are with its
performance.  :-)

The degree of caching has a large effect on the page costs.  We've
managed to keep the active portion of our databases cached to a
degree that we have always benefited by reducing the
random_page_cost to 2 or less.  Where the entire database is cached,
we get the best plans with seq_page_cost and random_page_cost set to
equal values in the 0.1 to 0.05 range.  We've occasionally needed to
bump the cpu_tuple_cost up a bit relative to other cpu costs, too.
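To make that concrete, the fully cached case described above might look something like this as a postgresql.conf fragment. The page costs come from the ranges mentioned here; the cpu_tuple_cost value is only a hypothetical bump from its 0.01 default, and all of these must be validated against your own workload:

```
# Hypothetical settings for a fully cached database (sketch, not a recipe)
seq_page_cost = 0.05        # equal page costs when everything fits in RAM
random_page_cost = 0.05     # no seek penalty for cached pages
cpu_tuple_cost = 0.03       # bumped from the 0.01 default (illustrative value)
```

With the database much bigger than RAM, as in Ogden's case, the page costs would instead stay near (or above) their defaults, with random_page_cost remaining higher than seq_page_cost.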

On the other hand, I've seen reports of people who have found it
necessary to increase random_page_cost to get good plans.  These
have been people with large databases where the entire database is
"active" (versus our databases where recent, active data is accessed
much more heavily than, say, 20 year old data).

If you model the costing to reflect the reality on your server, good
plans will be chosen.
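One way to check that without touching the server-wide configuration is to try candidate values in a single session with SET LOCAL and compare the plans. A sketch (the table and query are hypothetical):

```sql
-- Experiment in one transaction; other sessions keep the configured costs.
BEGIN;
SET LOCAL random_page_cost = 2.0;   -- candidate value to evaluate
SET LOCAL seq_page_cost = 1.0;

-- Compare the chosen plan and timing against the current settings
-- (orders is a hypothetical table).
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

ROLLBACK;                           -- SET LOCAL reverts here anyway
```

If the cheaper-looking plan under the candidate costs is also the faster one on your hardware, that is evidence the new values model reality better.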

-Kevin
