On Fri, Dec 09, 2005 at 10:37:25AM -0500, Bruce Momjian wrote:
> Kenneth Marshall wrote:
> > The main benefit of prefetch optimization is just-in-time data
> > delivery to the processor. There are numerous papers illustrating
> > the dramatic increase in data throughput from data structures
> > designed to take advantage of prefetching. Factors of 3-7 can be
> > realized, and this can greatly increase database performance. The
> > first step in using prefetching to reduce memory latency is to
> > design the index page layout with an internal blocking at the
> > cache-line size. Then issue prefetch instructions for the memory
> > you will need to process the index page far enough in advance
> > for it to be in a cache line by the time it is needed.
>
> I can see that being useful for a single-user application that doesn't
> have locking or I/O bottlenecks, and doesn't have a multi-stage design
> like a database. Do we do enough of that kind of processing to _see_
> an improvement, or will our code just become more complex, making
> algorithmic optimizations harder?
>
We should certainly consider all of the trade-offs involved. But if
processing a single index page takes 1/5 of the time or less, the DB
can handle 5x as many lookups in the same amount of time. That would
be very nice in a multi-user DB.
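
To make that concrete, here is a rough, throwaway sketch of the kind of
loop I have in mind. It is not PostgreSQL code: the page layout, block
sizes, and names are invented for illustration, and it assumes GCC or
Clang for __builtin_prefetch. The idea is just to lay the keys out in
cache-line-sized blocks and prefetch a few blocks ahead while scanning
the current one.

/*
 * Sketch only, not PostgreSQL code.  Page layout, sizes, and names
 * are invented.  Assumes GCC/Clang for __builtin_prefetch and the
 * aligned attribute.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE   64                              /* assumed line size */
#define KEYS_PER_BLK (CACHE_LINE / sizeof(uint32_t)) /* 16 keys per block */
#define NBLOCKS      128                             /* 8 kB "index page" */

/* One cache-line-sized block of keys: the "internal blocking". */
typedef struct
{
    uint32_t keys[KEYS_PER_BLK];
} KeyBlock;

/* A toy index page built from cache-line-aligned blocks. */
typedef struct
{
    KeyBlock blocks[NBLOCKS];
} IndexPage __attribute__((aligned(CACHE_LINE)));

/*
 * Scan the page for 'target', prefetching a few blocks ahead so the
 * next cache lines are (hopefully) resident by the time we touch them.
 */
static int
page_search(const IndexPage *page, uint32_t target)
{
    const int prefetch_dist = 4;    /* tune for the machine's memory latency */

    for (int b = 0; b < NBLOCKS; b++)
    {
        if (b + prefetch_dist < NBLOCKS)
            __builtin_prefetch(&page->blocks[b + prefetch_dist], 0, 1);

        for (size_t i = 0; i < KEYS_PER_BLK; i++)
            if (page->blocks[b].keys[i] == target)
                return b * (int) KEYS_PER_BLK + (int) i;
    }
    return -1;
}

int
main(void)
{
    static IndexPage page;

    /* Fill with ascending keys so the search has something to find. */
    for (int b = 0; b < NBLOCKS; b++)
        for (size_t i = 0; i < KEYS_PER_BLK; i++)
            page.blocks[b].keys[i] = (uint32_t) (b * KEYS_PER_BLK + i);

    printf("found key 1000 at slot %d\n", page_search(&page, 1000));
    return 0;
}

The prefetch distance is the knob: it has to reach far enough ahead to
cover the memory latency, but not so far that the lines are evicted
again before they are used.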
Ken