Hi,
On 2026-04-05 11:42:19 -0400, Melanie Plageman wrote:
> On Fri, Apr 3, 2026 at 1:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:
> >
> > I've come across an interesting failure produced starting from 378a21618:
> > when using a build made with CFLAGS="-DRELCACHE_FORCE_RELEASE" and
> > echo "io_method = sync" >/tmp/temp.config, the test run:
> > TEMP_CONFIG=/tmp/temp.config TESTS=temp make -s check-tests
> >
> > fails as below:
> > --- .../src/test/regress/expected/temp.out 2026-02-13 06:15:55.887368624 +0200
> > +++ .../src/test/regress/results/temp.out 2026-04-03 07:51:36.735504156 +0300
> > @@ -493,11 +493,7 @@
> >
> > -- Check that read streams deal with lower number of pins available
> > SELECT count(*), max(a) max_a, min(a) min_a, max(cnt) max_cnt FROM test_temp;
> > - count | max_a | min_a | max_cnt
> > --------+-------+-------+---------
> > - 10000 | 10000 | 1 | 0
> > -(1 row)
> > -
> > +ERROR: no empty local buffer available
> > ROLLBACK;
>
> It has to do with the query needing an additional pin for the VM
> during on-access pruning and the read stream reading ahead until there
> is only one remaining buffer pin in the local pin limit (the cursor
> above is already consuming much of the backend local pin limit). We
> could perhaps fix this test by decreasing the pages in the relation or
> increasing the backend local pin limit, but I wonder if we need to do
> something more invasive to ensure that we can pin at least two
> buffers.
I think we should probably just have GetLocalPinLimit() return something
considerably smaller than num_temp_buffers, e.g. num_temp_buffers / 4 or
so.
There may always be more than one scan going on, so we can never promise that
a certain number of pins is available. The main goal of the shared pin limit
is to prevent one backend from pinning a disproportionate share of s_b. And
for that, eventually scaling down to needing just 1-2 pins per scan is
sufficient.
Greetings,
Andres Freund