Greg Stark <gsstark@mit.edu> writes:
> You could also try lowering random_page_cost. Some people find as low as 1.2
> or so to be useful, but that would almost certainly be lying to postgres about
> the costs of random access and would cause it to use index scans aggressively
> even when they're not faster.
If the database is small enough to mostly fit in kernel disk cache, then
a very low random_page_cost is telling the truth, not lying. One might
also think about raising the various cpu_* cost parameters (cpu_tuple_cost,
cpu_index_tuple_cost, cpu_operator_cost), since CPU
effort becomes proportionally much more significant when "I/O" only
requires a trip to kernel buffers.
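For instance, something like this per-session, to experiment (the values
are only illustrative starting points, not tested recommendations; they
assume the working set is fully cached):

    -- random access costs about the same as sequential when cached
    SET random_page_cost = 1.5;        -- default 4.0
    -- scale CPU costs up relative to (now cheap) page fetches
    SET cpu_tuple_cost = 0.03;         -- default 0.01
    SET cpu_index_tuple_cost = 0.015;  -- default 0.005
    SET cpu_operator_cost = 0.0075;    -- default 0.0025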
However, Christoph's example looks to me like a classic case of testing
a toy table and assuming the results will extrapolate to production-size
cases. This table is small enough that it hardly matters which method
the planner chooses. I'd counsel being very wary of adjusting the
cost settings on the basis of this example alone.
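A better check is to repeat the experiment against a table of
production-like size before touching any cost settings, along these lines
(table, column, and row count are made up for illustration):

    -- build a table big enough that plan choice actually matters
    CREATE TABLE big_t AS
      SELECT i AS id, (random() * 1000)::int AS val
      FROM generate_series(1, 1000000) AS i;
    CREATE INDEX big_t_val_idx ON big_t (val);
    ANALYZE big_t;
    -- compare the plan and timing the planner picks at this scale
    EXPLAIN ANALYZE SELECT * FROM big_t WHERE val = 42;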
regards, tom lane