Re: BUG: ReadStream look-ahead exhausts local buffers when effective_io_concurrency>=64 - Mailing list pgsql-hackers

From Eduard Stepanov
Subject Re: BUG: ReadStream look-ahead exhausts local buffers when effective_io_concurrency>=64
Date
Msg-id CADOio5KHu990BKOf3tiZOBbd8U0sHLHVS+ARajbYY0Wg_dLfKw@mail.gmail.com
In response to BUG: ReadStream look-ahead exhausts local buffers when effective_io_concurrency>=64  (Induja Sreekanthan <indujas@google.com>)
List pgsql-hackers
Hi Induja!

I've encountered the same issue in a different context, so I came up
with an alternative solution: a dynamic buffer_limit check in
read_stream_look_ahead().

In read_stream_look_ahead(), I dynamically check the actual remaining
pin budget via GetAdditionalLocalPinLimit(). This adds a third
condition to the main while loop, which is re-checked after each pin
operation. Additionally, when the budget is exhausted, the pending
read is started immediately rather than waiting for it to accumulate
io_combine_limit blocks.

The attached patch includes changes to
src/backend/storage/aio/read_stream.c and adds two additional tests to
temp.sql.

Best regards,
Eduard Stepanov
Tantor Labs LLC




On Tue, 10 Mar 2026 at 23:22, Induja Sreekanthan <indujas@google.com> wrote:
>
> Hi,
>
> I found an issue where Postgres (with effective_io_concurrency of 64 or higher) runs out of local buffers during a sequential scan on a temporary table with TOAST data.
>
> The issue occurs because the ReadStream look-ahead pins all the local buffers. This results in the TOAST index look-up and TOAST page read being unable to find any available local buffers. The ReadStream's max_pinned_buffers can be as high as num_temp_buffers, depending on the effective_io_concurrency.
>
> Here is a reproduction of the issue using the default temp_buffers setting and effective_io_concurrency=128:
>
> docker run --name my-postgres -e POSTGRES_PASSWORD=my-password -p 5432:5432 -d postgres:18 -c effective_io_concurrency=128
>
> postgres=# CREATE TEMPORARY TABLE tmp_tbl1 (
>     s_suppkey    NUMERIC NOT NULL,
>     s_nationkey  NUMERIC,
>     s_comment    VARCHAR(256),
>     s_name       CHAR(256),
>     s_address    VARCHAR(256),
>     s_phone      TEXT,
>     s_acctbal    NUMERIC,
>     CONSTRAINT supplier_pk PRIMARY KEY (s_suppkey)
> );
> CREATE TABLE
> postgres=# INSERT INTO tmp_tbl1 (s_suppkey, s_nationkey, s_comment, s_name, s_address, s_phone, s_acctbal)
> SELECT
>     ('1' || repeat('0', 2000) || i::text)::NUMERIC AS s_suppkey,
>     ('5' || repeat('0', 2000) || floor(random() * 25)::text)::NUMERIC AS s_nationkey,
>     md5(random()::text) || ' some comment' AS s_comment,
>     'Supplier#' || LPAD(i::text, 9, '0') AS s_name,
>     'Address-' || md5(i::text) AS s_address,
>     repeat('P', 4096) || '-' || i::text || repeat('P', 2048) || 'fwoiefrr' ||
>     repeat('fejwfelwkmfP', 4096) || '-' || i::text || repeat('fnwekjfmelkwf', 2048) AS s_phone,
>     ('9' || repeat('9', 2000) || '.' || floor(random()*100)::text)::NUMERIC AS s_acctbal
> FROM generate_series(1, 8000) AS i;
> INSERT 0 8000
> postgres=# SELECT * FROM tmp_tbl1;
> ERROR:  no empty local buffer available
>
> Attached is a patch that addresses this by limiting ReadStream's max_pinned_buffers for temp tables to 75% of the available local buffers. It also introduces a cap on max_ios for temp tables to DEFAULT_EFFECTIVE_IO_CONCURRENCY, to account for multiple sequential scan look-aheads happening simultaneously.
>
> Regards,
> Induja Sreekanthan
