On 09/09/2016 03:25 PM, Greg Stark wrote:
> On Fri, Sep 9, 2016 at 1:01 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>> I'm happy with what it looks like. We are in fact getting a more sequential
>> access pattern with these patches, because we're not expanding the pre-read
>> tuples into SortTuples. Keeping densely-packed blocks in memory, instead of
>> SortTuples, allows caching more data overall.
>
>
> Wow, this is really cool. We should do something like this for query
> execution too.
>
> I still didn't follow exactly why removing the prefetching allows more
> sequential i/o. I thought the whole point of prefetching was to reduce
> the random i/o from switching tapes.

The first patch removed prefetching, but the second patch re-introduced
it in a different form. The prefetching is now done in logtape.c, by
reading multiple pages at a time. The on-tape representation of tuples
is more compact than having them in memory as SortTuples, so you can fit
more data in memory overall, which makes the access pattern more sequential.
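
To make that a bit more concrete, below is a rough, self-contained sketch
of the block-level prefetching idea. To be clear, this is not the actual
logtape.c code: TapeReader, tape_refill(), tape_read(), the 32-block
window and the 64-byte records in main() are all made up for illustration.

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ          8192    /* one tape block */
#define PREFETCH_BLOCKS 32      /* blocks fetched per refill */

typedef struct TapeReader
{
    int     fd;                 /* tape file descriptor */
    off_t   next_offset;        /* file offset of the next refill */
    char    buf[PREFETCH_BLOCKS * BLCKSZ];
    size_t  buf_len;            /* valid bytes in buf */
    size_t  buf_pos;            /* current read position in buf */
} TapeReader;

/* Refill the buffer with one large sequential read. */
static bool
tape_refill(TapeReader *r)
{
    ssize_t nread = pread(r->fd, r->buf, sizeof(r->buf), r->next_offset);

    if (nread <= 0)
        return false;           /* EOF or error */
    r->next_offset += nread;
    r->buf_len = (size_t) nread;
    r->buf_pos = 0;
    return true;
}

/*
 * Copy 'len' bytes of tuple data out of the buffer, refilling it as
 * needed.  The data stays densely packed in buf until this point.
 */
static bool
tape_read(TapeReader *r, void *dst, size_t len)
{
    char   *out = dst;

    while (len > 0)
    {
        size_t  avail = r->buf_len - r->buf_pos;

        if (avail == 0)
        {
            if (!tape_refill(r))
                return false;
            continue;
        }
        if (avail > len)
            avail = len;
        memcpy(out, r->buf + r->buf_pos, avail);
        r->buf_pos += avail;
        out += avail;
        len -= avail;
    }
    return true;
}

/* Usage example: stream fixed-size 64-byte records off a "tape" file. */
int
main(int argc, char **argv)
{
    TapeReader  r = {0};
    char        record[64];
    long        count = 0;

    if (argc != 2 || (r.fd = open(argv[1], O_RDONLY)) < 0)
        return 1;
    while (tape_read(&r, record, sizeof(record)))
        count++;
    printf("read %ld complete records\n", count);
    close(r.fd);
    return 0;
}

The point is just that each refill is one large sequential read, and the
tuple data stays in its dense on-tape form until the caller pulls it out,
instead of being expanded into SortTuples up front.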

There's one difference between these approaches that I didn't point out
earlier: We used to prefetch tuples from each *run*, and stopped
pre-reading when we reached the end of the run. Now that we're doing the
prefetching as raw tape blocks, we don't stop at run boundaries. I don't
think that makes any big difference one way or another, but I thought
I'd mention it.
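
As a toy illustration of that boundary difference (again invented, not
anything from tuplesort.c or logtape.c; block_run[] and the 4-block
prefetch window exist only to show the stopping condition):

#include <stdio.h>

#define NBLOCKS  8
#define PREFETCH 4

/* A pretend tape: each entry records which run the block belongs to. */
static const int block_run[NBLOCKS] = {1, 1, 1, 2, 2, 3, 3, 3};

int
main(void)
{
    int start = 1;              /* begin partway through run 1 */

    /* Old behaviour: per-run prefetch stops at the run boundary. */
    printf("per-run:   ");
    for (int i = start, n = 0; n < PREFETCH && i < NBLOCKS; i++, n++)
    {
        if (block_run[i] != block_run[start])
            break;              /* end of current run, stop pre-reading */
        printf("blk%d(run%d) ", i, block_run[i]);
    }
    printf("\n");

    /* New behaviour: raw block prefetch ignores run boundaries. */
    printf("per-block: ");
    for (int i = start, n = 0; n < PREFETCH && i < NBLOCKS; i++, n++)
        printf("blk%d(run%d) ", i, block_run[i]);
    printf("\n");

    return 0;
}

The only behavioural change shows up in the output: the per-run loop
stops at the end of run 1, while the block-level loop keeps reading
sequential blocks straight across the boundary.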

- Heikki