On Wed, Feb 19, 2025 at 8:29 AM Jakub Wartak
<jakub.wartak@enterprisedb.com> wrote:
>
> On Fri, Feb 14, 2025 at 7:16 PM Andres Freund <andres@anarazel.de> wrote:
>
> > Melanie has worked on this a fair bit, fwiw.
> >
> > My current thinking is that we'd want something very roughly like TCP
> > BBR. Basically, it predicts the currently available bandwidth not just via
> > lost packets - the traditional approach - but also by building a continually
> > updated model of "bytes in flight" and latency and uses that to predict what
> > the achievable bandwidth is.[..]
>
> Sadly that doesn't sound like PG18, right? (or I missed some thread,
> I've tried to watch Melanie's presentation, though.)

Yes, I spent about a year researching this. My final algorithm didn't
make it into a presentation, but the basic idea was to track, on a
per-IO basis, how the prefetch distance was affecting throughput, and
to push the prefetch distance up while doing so increased throughput
and back down once throughput stopped increasing. Doing this in a
sawtooth pattern would ultimately allow the prefetch distance to adapt
to changing system resources.
The actual algorithm was more complicated than this, but that was the
basic premise. It worked well in simulations.
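
For anyone curious, a minimal sketch of that sawtooth idea might look
like the following. This is purely illustrative (the struct, constants,
and function names are invented for this email, not taken from any
patch): per adjustment step, compare the observed throughput against
the previous step and nudge the distance up while throughput keeps
improving, back down once it stops.

```c
#include <assert.h>

/* Illustrative only: names and constants are hypothetical, not from
 * any actual PostgreSQL patch. */
#define MIN_DISTANCE 1
#define MAX_DISTANCE 128

typedef struct PrefetchState
{
	int		distance;			/* current prefetch distance */
	double	prev_throughput;	/* throughput seen at last adjustment */
} PrefetchState;

static void
adjust_distance(PrefetchState *st, double throughput)
{
	if (throughput > st->prev_throughput)
	{
		/* Still gaining: probe upward. */
		if (st->distance < MAX_DISTANCE)
			st->distance++;
	}
	else
	{
		/* No gain (or a regression): back off, producing the
		 * sawtooth oscillation around the useful distance. */
		if (st->distance > MIN_DISTANCE)
			st->distance--;
	}
	st->prev_throughput = throughput;
}
```

The real algorithm would need smoothing and hysteresis on top of this,
but the oscillation is the point: it keeps re-probing, so the distance
tracks changing system resources rather than settling permanently.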
- Melanie