Hannu Krosing <hannu@skype.net> writes:
> Disks can read at full rotation speed, so skipping (not reading) some
> blocks will not make reading the remaining blocks from the same track
> any faster. And if there are more than 20 8k pages per track, you still
> have a very high probability that you need to read all tracks.
Well, if there are exactly 20 then each track contains at least one
sampled block with probability 1 - 0.95^20, or about 64%, so you could
skip roughly a third of the tracks outright and see around a 1.5x gain
in effective bandwidth. The tracks would have to be substantially bigger
before the effect stopped being noticeable.
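
Spelling that arithmetic out for a few track sizes (my numbers, assuming
each block is sampled independently at 5%):

    # Probability that a track contains at least one sampled block,
    # for various track sizes; 5% sampling assumed throughout.
    def p_track_read(blocks_per_track, sample_fraction=0.05):
        return 1.0 - (1.0 - sample_fraction) ** blocks_per_track

    for n in (20, 40, 80, 160):
        p = p_track_read(n)
        print(f"{n:4d} blocks/track: read track with p={p:.2f}, "
              f"effective speedup ~{1/p:.2f}x")

That gives about 1.5x at 20 blocks per track, dropping to essentially
nothing by 80.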
> You may be able to move to the next track a little earlier compared to
> reading all blocks, but then you are likely to miss the block from the
> next track and have to wait a full rotation.
No, I don't think that works out. You always have some chance of missing
the block on the next track and having to wait a full rotation, and that
chance is neither increased nor decreased by seeking earlier in the
rotation. So on average you would expect each track to take less time,
by roughly the time of the block reads you skip at the end of it.
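
Here's a toy Monte Carlo of that claim (the model is mine: one target
block at a uniformly random position on the next track, a fixed seek
time, and times measured in units of one rotation):

    import random

    def mean_wait(departure_phase, seek_time=0.3, trials=200_000):
        # Average rotational wait for the next needed block, given
        # the phase at which we leave the current track.
        total = 0.0
        for _ in range(trials):
            target = random.random()           # position on next track
            arrival = (departure_phase + seek_time) % 1.0
            total += (target - arrival) % 1.0  # wait until it comes around
        return total / trials

    for phase in (0.0, 0.25, 0.5, 0.75):
        print(f"leave at {phase:.2f}: mean wait {mean_wait(phase):.3f}")

The mean wait comes out at half a rotation regardless of the departure
phase, so leaving early just banks the time of the skipped reads.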
> Your test program could have got a little better results if you had
> somehow managed to tell the system all the block numbers to read in
> one go, rather than requesting each one only after getting the
> previous one.
I was trying to simulate the kind of read pattern that Postgres
generates, which I believe looks exactly like that.
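
For concreteness, here's a sketch of the two patterns (Linux-only;
posix_fadvise is the "all block numbers in one go" hint, and the path
and block list would be whatever the test uses):

    import os

    BLOCK = 8192

    def read_blocks(path, block_numbers, prefetch=False):
        fd = os.open(path, os.O_RDONLY)
        try:
            if prefetch:
                # Hand the kernel every block number up front.
                for b in block_numbers:
                    os.posix_fadvise(fd, b * BLOCK, BLOCK,
                                     os.POSIX_FADV_WILLNEED)
            for b in block_numbers:
                # Synchronous, one block at a time -- the pattern
                # Postgres generates.
                os.pread(fd, BLOCK, b * BLOCK)
        finally:
            os.close(fd)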
> The fact that 5% was not slower than a seqscan seems to indicate that
> all the track reads were actually cached inside the disk or controller.
I dunno; your first explanation seemed pretty convincing and doesn't
depend on specific assumptions about the caching. Moreover, the caching
explanation doesn't account for why you *do* get a speedup when reading
less than 5%.
Perhaps what this indicates is that the real meat is in track sampling, not
block sampling.
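
If so, the sampler would pick whole tracks rather than individual
blocks, something like this sketch (blocks_per_track is a guess, since
the real geometry is hidden by the drive):

    import random

    def sample_track_blocks(total_blocks, blocks_per_track, n_tracks):
        # Choose whole tracks at random, then return every block on
        # each chosen track, in on-disk order.
        tracks = total_blocks // blocks_per_track
        chosen = random.sample(range(tracks), min(n_tracks, tracks))
        blocks = []
        for t in sorted(chosen):
            start = t * blocks_per_track
            blocks.extend(range(start, start + blocks_per_track))
        return blocks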
--
greg