On Wed, 2010-08-04 at 14:00 -0400, Tom Lane wrote:
> Hannu Krosing <hannu@2ndquadrant.com> writes:
> > Of course there are more variables than just *_page_cost, so if you nail
> > down any other one, you may end with less than 1 for both page costs.
>
> > I have always used seq_page_cost = 1 in my thinking and adjusted others
> > relative to it.
>
> Right, seq_page_cost = 1 is sort of the traditional reference point,
> but you don't have to do it that way. The main point here is that for
> an all-in-RAM database, the standard page access costs are too high
> relative to the CPU effort costs:
>
> regression=# select name, setting from pg_settings where name like '%cost';
>          name         | setting
> ----------------------+---------
>  cpu_index_tuple_cost | 0.005
>  cpu_operator_cost    | 0.0025
>  cpu_tuple_cost       | 0.01
>  random_page_cost     | 4
>  seq_page_cost        | 1
> (5 rows)
>
> To model an all-in-RAM database, you can either dial down both
> random_page_cost and seq_page_cost to 0.1 or so, or set random_page_cost
> to 1 and increase all the CPU costs. The former is less effort ;-)
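
(A minimal example of the first option, values per the paragraph above,
not tuned numbers; per-session that could be

    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;

or the same two assignments in postgresql.conf followed by a reload --
both settings can be changed without a restart.)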
>
> It should be noted also that there's not all that much evidence backing
> up the default values of the cpu_xxx_cost variables. In the past those
> didn't matter much because I/O costs always swamped CPU costs anyway.
> But I can foresee us having to twiddle those defaults and maybe refine
> the CPU cost model more, as all-in-RAM cases get more common.
Especially the context switches and the copying between shared buffers
and the OS disk cache will become noticeable at these speeds.

An easy way to test this is to load a table with a few indexes, once
with a shared_buffers setting large enough for only the main table and
once with one that fits both the table and its indexes, and compare
the timings.
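
A rough sketch of such a test (the table, index names and row count
are made up for illustration; changing shared_buffers requires a
server restart between the two runs):

    -- hypothetical test table with a couple of indexes
    CREATE TABLE ram_test (id int, payload text);
    CREATE INDEX ram_test_id_idx ON ram_test (id);
    CREATE INDEX ram_test_payload_idx ON ram_test (payload);

    -- time the load under each shared_buffers setting
    \timing on
    INSERT INTO ram_test
    SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;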
> regards, tom lane
--
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training