Here is a more fleshed-out version of this concept patch, now that we
have lots of streams wired up to try it with. Bitmap Heap Scan seemed
like a good candidate.
postgres=# create table t (a int unique, b int unique);
CREATE TABLE
postgres=# insert into t select generate_series(1, 100000),
generate_series(1, 100000);
INSERT 0 100000
postgres=# select ctid from t where (a between 1 and 8000 or b between
1 and 8000) and ctid::text like '%,1)';
ctid
--------
(0,1)
(1,1)
(2,1)
... pages in order ...
(33,1)
(34,1)
(35,1)
(36 rows)
postgres=# select pg_buffercache_evict(bufferid) from pg_buffercache
where relfilenode = pg_relation_filenode('t'::regclass) and
relblocknumber % 3 != 2;
... some pages being kicked out to create a mixed hit/miss scan ...
postgres=# select ctid from t where (a between 1 and 8000 or b between
1 and 8000) and ctid::text like '%,1)';
ctid
--------
(0,1)
(1,1)
(2,1)
... order during early distance ramp-up ...
(12,1)
(20,1) <-- chaos reigns
(23,1)
(17,1)
(26,1)
(13,1)
(29,1)
...
(34,1)
(36 rows)
One weird issue, not just with reordering, is that read_stream.c's
distance cools off too easily with some simple test patterns. Start
at 1, double on a miss, subtract one on a hit, repeat: with a strictly
alternating hit/miss pattern you can be trapped bouncing between 1 and
2, even though you'd certainly benefit from I/O concurrency and also
reordering. It may need a longer memory. That seemed like too
artificial a problem to worry about for v18, but it's why I used
relblocknumber % 3 != 2 and not e.g. relblocknumber % 2 != 1 above.
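The trap is easy to reproduce with a toy model of the heuristic as
described above (this is not the actual read_stream.c code; the
max_distance cap and its value are made up here for illustration):

```python
def simulate(cached, nblocks, max_distance=128):
    """Model the distance heuristic: start at 1, double on a cache
    miss (up to an assumed cap), subtract one on a hit (down to 1).
    Returns the distance after each block is consumed."""
    distance = 1
    trace = []
    for blockno in range(nblocks):
        if cached(blockno):
            distance = max(distance - 1, 1)
        else:
            distance = min(distance * 2, max_distance)
        trace.append(distance)
    return trace

# relblocknumber % 2 != 1 evicted -> strictly alternating miss/hit:
# every doubling is immediately undone, so distance never escapes 1-2.
alternating = simulate(lambda b: b % 2 == 1, 12)

# relblocknumber % 3 != 2 evicted -> two misses per hit: the doublings
# outrun the decrements and distance ramps up toward the cap.
two_of_three = simulate(lambda b: b % 3 == 2, 12)

print(alternating)
print(two_of_three)
```

With the % 2 pattern the trace stays at [2, 1, 2, 1, ...] forever,
while the % 3 pattern climbs past 100 within a dozen blocks, which is
why the transcript above uses the latter.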