From: Andres Freund
Subject: Re: index prefetching
Msg-id: 4zeu5yb73byiquvf3eefsunnrydyqfxy3eup66jrliutrtd4xl@5iifjey4n5m5
In response to: Re: index prefetching (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers

Hi,

On 2025-09-03 15:33:30 -0400, Peter Geoghegan wrote:
> On Wed, Sep 3, 2025 at 2:47 PM Andres Freund <andres@anarazel.de> wrote:
> > I still don't think I fully understand why the impact of this is so large. The
> > branch misses appear to be the only thing differentiating the two cases, but
> > with resowners neutralized, the remaining difference in branch misses seems
> > too large - it's not like the sequence of block numbers is more predictable
> > without prefetching...
> >
> > The main increase in branch misses is in index_scan_stream_read_next...
>
> I've been working on fixing the same regressed query, but using a
> completely different (though likely complementary) approach: by adding
> a test to index_scan_stream_read_next that detects when prefetching
> isn't favorable. If it isn't favorable, then we stop prefetching
> entirely (we fall back on regular sync I/O).

The issue, to me, is that this kind of query actually *can* benefit
substantially from prefetching, no? Afaict the performance without prefetching
is rather atrocious as soon as a) the storage has somewhat higher latency or
b) direct I/O (DIO) is used.

Indeed: With DIO, readahead provides a ~2.6x improvement for the query at hand.


I continue to be worried that we're optimizing for queries that have no
real-world relevance. The regression, afaict, is contingent on:

1) An access pattern that is unpredictable to the CPU (due to the use of
   random() as part of ORDER BY during the data generation)

2) The index order and the heap order are somewhat correlated, but only
   fuzzily, i.e. there are backward jumps in the heap block numbers being
   fetched

3) There are only one to a small number of tuples on each heap page

4) The query scans a huge number of tuples without actually doing any
   meaningful analysis on them. As soon as one does meaningful work per
   returned tuple, the small difference in per-tuple CPU cost vanishes

5) The query visits all heap pages within a range, just not quite in
   order. Without that, kernel readahead would not kick in and the query's
   performance without prefetching would be terrible even on low-latency
   storage


This just doesn't strike me as a particularly realistic combination of
factors?



I suspect we could more than win back the lost performance by doing batched
heap_hot_search_buffer() calls...
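
To make that concrete, here is a rough, untested sketch of what "batched"
could mean. The helper name, its signature, and the assumption that the caller
has already grouped the TIDs by heap block and pinned the buffer are all made
up for illustration; this is not an actual patch.

/*
 * Untested sketch: lock the heap buffer once and probe all TIDs that fall
 * onto that page, instead of re-locking the buffer for every single TID.
 * Assumes the caller has grouped the TIDs by block number and holds a pin
 * on 'buffer'; all names here are hypothetical.
 */
#include "postgres.h"

#include "access/heapam.h"
#include "access/htup.h"
#include "storage/bufmgr.h"
#include "storage/itemptr.h"
#include "utils/snapshot.h"

static int
heap_hot_search_buffer_batch(Relation rel, Buffer buffer, Snapshot snapshot,
                             ItemPointerData *tids, int ntids,
                             HeapTupleData *tuples, bool *found)
{
    int         nfound = 0;

    LockBuffer(buffer, BUFFER_LOCK_SHARE);

    for (int i = 0; i < ntids; i++)
    {
        bool        all_dead;

        /* each tids[i] is assumed to point into the page held in 'buffer' */
        found[i] = heap_hot_search_buffer(&tids[i], rel, buffer, snapshot,
                                          &tuples[i], &all_dead, true);
        if (found[i])
            nfound++;
    }

    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);

    return nfound;
}

The real thing would of course have to integrate with the read stream and the
index AM's kill_prior_tuple handling; the point is just that the buffer
lookup, pin, and lock/unlock overhead can be amortized across all TIDs on a
page.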


Greetings,

Andres Freund


