Tom Lane wrote:
>Curt Sampson <cjs@cynic.net> writes:
>
>>Grabbing bigger chunks is always optimal, AFAICT, if they're not
>>*too* big and you use the data. A single 64K read takes very little
>>longer than a single 8K read.
>>
>
>Proof?
>
I take issue with this statement. It's optimal only up to a point. I
know that my system settles into its best read speeds at 32K or 64K
chunks; 8K chunks are far below optimal for it. Most systems I work on
do far better at 16K than at 8K, and most see no degradation when going
to 32K chunks. (This is across numerous OSes and configurations -- the
results are my interpretation of bonnie disk I/O benchmarks.)
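
If anyone wants to reproduce that kind of number without running full
bonnie, here's a minimal sketch (not bonnie itself; assumes a Unix-like
system, and the file path is a placeholder -- point it at a file bigger
than RAM so the buffer cache doesn't lie to you):

    /*
     * Time sequential reads of one file at several block sizes and
     * report MB/s for each.  Path and sizes are placeholders.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>
    #include <unistd.h>

    static double secs(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
    }

    int main(void)
    {
        const size_t sizes[] = { 8192, 16384, 32768, 65536 };
        char *buf = malloc(65536);
        int i;

        for (i = 0; i < 4; i++)
        {
            int fd = open("/tmp/bigfile", O_RDONLY);  /* placeholder */
            struct timeval t0, t1;
            ssize_t n;
            long long total = 0;

            if (fd < 0 || buf == NULL)
            {
                perror("setup");
                return 1;
            }
            gettimeofday(&t0, NULL);
            while ((n = read(fd, buf, sizes[i])) > 0)
                total += n;
            gettimeofday(&t1, NULL);
            close(fd);

            printf("%6lu-byte reads: %.1f MB/s\n",
                   (unsigned long) sizes[i],
                   total / secs(t0, t1) / 1048576.0);
        }
        free(buf);
        return 0;
    }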
Depending on what you're doing, it is more efficient to read bigger
blocks -- up to a point. If you're multi-threaded or reading in
non-blocking mode, take as big a chunk as you can handle or are ready
to process in short order. If you're picking up a bunch of little
chunks here and there and know you're not going to use them again, then
choose a size that will hopefully cause some of the reads to overlap;
failing that, pick the smallest usable read size.
The OS can never do that stuff for you.
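
To put that last point in code form -- a sketch, with a placeholder
block size and helper names of my own invention, not from any existing
code:

    /*
     * Only the caller knows which of these two cases applies to a
     * given access; the kernel can't guess its access pattern.
     */
    #define _XOPEN_SOURCE 500       /* for pread(2) on some systems */
    #include <unistd.h>

    #define BLKSZ 8192

    /* Scattered access, block used once: read exactly what's wanted. */
    ssize_t fetch_one(int fd, off_t blkno, char *buf)
    {
        return pread(fd, buf, BLKSZ, blkno * BLKSZ);
    }

    /* Clustered access: the caller knows blocks blkno..blkno+n-1 will
     * all be used soon, so one large read replaces n small ones. */
    ssize_t fetch_run(int fd, off_t blkno, int n, char *buf)
    {
        return pread(fd, buf, (size_t) n * BLKSZ, blkno * BLKSZ);
    }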