-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
> Your argument seems to be "this produces nice results for me", not
> "I have done experiments to measure the actual value of the parameter
> and it is X". I *have* done experiments of that sort, which is where
> the default of 4 came from. I remain of the opinion that reducing
> random_page_cost is a band-aid that compensates (but only partially)
> for problems elsewhere. We can see that it's not a real fix from
> the not-infrequent report that people have to reduce random_page_cost
> below 1.0 to get results anywhere near local reality. That doesn't say
> that the parameter value is wrong, it says that the model it's feeding
> into is wrong.
Good points; allow me to rephrase my question, then:
When I install a new version of PostgreSQL and start testing my
applications, one of the most common problems is that many of my queries
are not hitting an index. I typically drop random_page_cost to 2 or
lower, and this speeds things up very significantly. How can I determine a
better way to speed up my queries, and why would that be advantageous
over simply dropping random_page_cost? How can I use my particular
situation to help develop a better model, and perhaps make the defaults
work better for my queries and for other people with databases like mine
(fairly simple schema, not too large (~2 GB total), SCSI, medium- to
high-complexity queries, a good amount of RAM available)?
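
For concreteness, the kind of comparison I do looks roughly like this
(table, column, and values here are made up, just to illustrate):

```sql
-- Check the plan under the default cost setting.
SET random_page_cost = 4;   -- the shipped default
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Lower it and see whether the planner switches to an index scan,
-- and whether the actual runtime improves.
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- Go back to the default for this session.
RESET random_page_cost;
```

When the second plan is both an index scan and measurably faster, I end up
putting the lower value in postgresql.conf, which is exactly the band-aid
being discussed.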
- --
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200503150600
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----
iD8DBQFCNsCbvJuQZxSWSsgRAs0sAJwLFsGApzfYNV5jPL0gGVW5BH37hwCfRSW8
ed3sLnMg1UOTgN3oL9JSIFo=
=cZIe
-----END PGP SIGNATURE-----