Re: index prefetching - Mailing list pgsql-hackers

From Andres Freund
Subject Re: index prefetching
Date
Msg-id 6wyxbnry2unm3kbcu2sabhzhs7baoedlg77xqm42chpofjq45g@igst42zpl7ok
In response to Re: index prefetching  ("Peter Geoghegan" <pg@bowt.ie>)
Responses Re: index prefetching
List pgsql-hackers
Hi,

On 2025-08-14 14:44:44 -0400, Peter Geoghegan wrote:
> On Thu Aug 14, 2025 at 1:57 PM EDT, Peter Geoghegan wrote:
> > The only interesting thing about the flame graph is just how little
> > difference there seems to be (at least for this particular perf event
> > type).
>
> I captured method_io_uring.c DEBUG output from running each query in the
> server log, in the hope that it would shed some light on what's really going
> on here.  I think that it just might.
>
> I count a total of 12,401 distinct sleeps for the sequential/slow backwards
> scan test case:
>
> $ grep -E "wait_one with [1-9][0-9]* sleeps" sequential.txt | head
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
>  2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG:  00000: wait_one with 1 sleeps
> $ grep -E "wait_one with [1-9][0-9]* sleeps" sequential.txt | awk '{ total += $11 } END { print total }'
> 12401
>
> But there are only 3 such sleeps seen when the random backwards scan query is
> run -- which might begin to explain the mystery of why it runs so much faster:
>
> $ grep -E "wait_one with [1-9][0-9]* sleeps" random.txt | awk '{ total += $11 } END { print total }'
> 104

I think this is just an indicator of being IO bound. That message is output
whenever we have to wait for an IO to finish. So if one workload prints it
~12k times and another only 104 times, the latter rarely had to wait for IO to
complete: by the time it needed an IO to have finished in order to continue,
the IO had already completed.
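To make the counting concrete, here is a minimal sketch of the same grep/awk
aggregation run over a few synthetic log lines (the field position of the
sleep count, $5 below vs. $11 in Peter's logs, depends on log_line_prefix):

```shell
# Build a synthetic log excerpt with the same message shape.
cat > /tmp/sequential_sample.txt <<'EOF'
DEBUG:  00000: wait_one with 1 sleeps
DEBUG:  00000: wait_one with 3 sleeps
DEBUG:  00000: wait_one with 0 sleeps
LOG:  some unrelated line
EOF

# Keep only lines with a nonzero sleep count, then sum the counts.
grep -E "wait_one with [1-9][0-9]* sleeps" /tmp/sequential_sample.txt \
  | awk '{ total += $5 } END { print total }'
```

Each counted sleep is one instance of the backend blocking on an IO that had
not yet completed, which is why the totals track how IO-bound each workload is.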


Factors potentially leading to slower IO:

- sometimes random IO *can* be faster for SSDs, because it allows different
  flash chips to work concurrently, rather than being bound by the speed of
  one flash chip

- it's possible that with your SSD the sequential IO leads to more IO
  combining. Larger IOs always have a higher latency than smaller IOs - but
  obviously fewer IOs are needed. The increased latency may be bad enough for
  your access pattern to trigger more waits.

  It's *not* necessarily enough to just lower io_combine_limit, since the OS
  can also combine adjacent requests.

  I'd see what changes if you temporarily reduce
  /sys/block/nvme6n1/queue/max_sectors_kb to a smaller size.
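A sketch of that experiment (requires root; the device name nvme6n1 is taken
from the path above and should be adjusted to the drive under test, and the
halved value is just an illustrative choice). This is a tuning fragment, not
something to leave in place permanently:

```shell
Q=/sys/block/nvme6n1/queue/max_sectors_kb

if [ -r "$Q" ]; then
  orig=$(cat "$Q")
  echo "current max_sectors_kb: $orig"
  # Halve the maximum request size the kernel will issue to the device,
  # limiting how much IO combining the OS can do.
  echo $((orig / 2)) | sudo tee "$Q"
  # ... rerun both queries, compare timings ...
  echo "$orig" | sudo tee "$Q"   # restore the original value
else
  echo "sysfs path not found: $Q"
fi
```

Optionally also lower io_combine_limit on the Postgres side (e.g. via SET) so
that both layers of combining are constrained during the comparison.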


Could you show iostat for both cases?
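Something along these lines, run in a second terminal while each query
executes (assumes the sysstat package; the device name and output filenames
are placeholders):

```shell
# Extended per-device statistics at 1-second intervals.
# rareq-sz shows the average read request size, which would reveal whether
# the sequential case is being combined into larger (higher-latency) IOs;
# r_await shows the average read completion latency.
iostat -dxm 1 nvme6n1 > iostat_sequential.txt   # repeat as iostat_random.txt
```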

Greetings,

Andres Freund


