From d3d7d4d09d804a8a8d00ecea080de5d63d21159c Mon Sep 17 00:00:00 2001
From: Thomas Munro
Date: Sun, 7 Apr 2024 12:36:44 +1200
Subject: [PATCH v13 1/2] Fix bug in read_stream.c.

When we determine that a wanted block can't be combined with the
current pending read, it's time to start that pending read to get it
out of the way.  An "if" in that code path should have been a "while",
because it might take more than one go to get that job done.
Otherwise the remaining part of a partially started read could be
clobbered and we could lose some blocks.  This was only broken for
smaller ranges, as the more common case of io_combine_limit-sized
ranges is handled earlier in the code and knows how to loop.

Discovered while testing parallel sequential scans of partially cached
tables.  They have a ramp-down phase with ever smaller ranges of
contiguous blocks, to be fair to parallel workers as the work runs
out.

Defect in commit b5a9b18c.
---
 src/backend/storage/aio/read_stream.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/storage/aio/read_stream.c b/src/backend/storage/aio/read_stream.c
index 9a70a81f7ae..f54dacdd914 100644
--- a/src/backend/storage/aio/read_stream.c
+++ b/src/backend/storage/aio/read_stream.c
@@ -363,7 +363,7 @@ read_stream_look_ahead(ReadStream *stream, bool suppress_advice)
 	}
 
 	/* We have to start the pending read before we can build another. */
-	if (stream->pending_read_nblocks > 0)
+	while (stream->pending_read_nblocks > 0)
 	{
 		read_stream_start_pending_read(stream, suppress_advice);
 		suppress_advice = false;
-- 
2.40.1