Re: cpu_tuple_cost - Mailing list pgsql-performance

From: David Brown
Subject: Re: cpu_tuple_cost
Date:
Msg-id: 42377293.1010801@bigpond.net.au
In response to: Re: cpu_tuple_cost  (Gregory Stark <gsstark@mit.edu>)
Responses: Re: cpu_tuple_cost  (Greg Stark <gsstark@mit.edu>)
List: pgsql-performance
Gregory Stark wrote:

>The "this day and age" argument isn't very convincing. Hard drive capacity
>growth has far outstripped hard drive seek time and bandwidth improvements.
>Random access has more penalty than ever.
In point of fact, there haven't been noticeable seek time improvements
for years. Transfer rates, on the other hand, have gone through the roof.

Which is why I would question the published tuning advice that
recommends lowering random_page_cost to 2 for disk arrays. Arrays increase
the effective transfer rate more than they reduce random access times.
Dropping from the default of 4 to 2 would reflect going from a typical
single 7200rpm ATA drive to a 15000rpm SCSI drive, but striping will push
the value back up again - probably even higher than 4 with a big array
(at a guess, perhaps the relationship could be approximated as a square
root after allowing for the array type?).
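For reference, this is the knob in question - a minimal postgresql.conf
sketch, with the figures purely illustrative (4 is the stock default,
2 is the published advice I'm doubting):

    # Planner cost settings (postgresql.conf, or per-session via SET)
    random_page_cost = 4      # stock default, roughly a single 7200rpm ATA drive
    #random_page_cost = 2     # commonly published advice for RAID arrays
    # A wide stripe set may justify leaving this at 4 or higher, since
    # striping boosts sequential throughput far more than it cuts seek time.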

With the default settings, I've seen the planner pick the wrong index
unless random_page_cost was set to 2. But in testing on an ATA drive, I
achieved slightly better plan costs by increasing cpu_tuple_cost relative
to cpu_index_tuple_cost (by default the ratio between them is only 10)
and actually *raising* random_page_cost to 5! So why pick on one
parameter? It all varies according to the query and the data.
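For anyone who wants to repeat that kind of comparison, something along
these lines is all it takes - the table, predicate and the 0.05 figure
are placeholders, not the exact values I used:

    -- baseline plan with the stock cost settings
    EXPLAIN SELECT * FROM some_table WHERE some_col = 42;

    -- per-session overrides; nothing is written back to postgresql.conf
    SET random_page_cost = 5;
    SET cpu_tuple_cost = 0.05;   -- illustrative; the default is 0.01
    EXPLAIN SELECT * FROM some_table WHERE some_col = 42;

    -- restore the configured values for this session
    RESET random_page_cost;
    RESET cpu_tuple_cost;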

I agree with Tom 100%. Pulling levers on a wonky model is no solution.
