Re: Limit memory usage by postgres_fdw batches - Mailing list pgsql-hackers

From Alexander Pyhalov
Subject Re: Limit memory usage by postgres_fdw batches
Date
Msg-id e39d964cb5ed91ede13a87109376a463@postgrespro.ru
In response to Re: Limit memory usage by postgres_fdw batches  (Alexander Pyhalov <a.pyhalov@postgrespro.ru>)
Responses Re: Limit memory usage by postgres_fdw batches
List pgsql-hackers
Alexander Pyhalov писал(а) 2026-01-13 13:44:
> For now I start thinking we need some form of FETCH, which stops 
> fetching data based on batch size...

Hi.

To limit memory consumption, we actually have to retrieve less data, and 
we can do that only on the side of the foreign server. I've rewritten the 
third patch. We introduce a new parameter, cursor_fetch_limit, which is 
set by postgres_fdw. When it is set, fetching a limited number of records 
from the cursor is additionally limited by the memory consumed by those 
records. Of course, the record size is only an estimate (for example, we 
don't know what the output function will do).
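
Just to illustrate the idea (this is a standalone sketch, not the patch 
code; names like fetch_limit_bytes and estimate_row_size are made up), 
the fetch loop stops either after the requested row count or once the 
estimated size of the rows already returned exceeds a byte budget, 
whichever comes first:

    /*
     * Standalone sketch: stop a fetch loop on either row count or an
     * estimated memory budget. Not the actual patch; names are
     * illustrative only.
     */
    #include <stdio.h>
    #include <stddef.h>

    /*
     * Rough per-row size estimate; a real estimate cannot know what the
     * output functions will produce, so it is only an approximation.
     */
    static size_t
    estimate_row_size(int row)
    {
        (void) row;
        return 1024 * 1024;         /* pretend every row is ~1 MB */
    }

    int
    main(void)
    {
        int     count = 100;                    /* rows requested by FETCH */
        size_t  fetch_limit_bytes = 16u << 20;  /* 16 MB budget */
        size_t  bytes_sent = 0;
        int     sent = 0;

        for (int i = 0; i < count; i++)
        {
            bytes_sent += estimate_row_size(i);
            sent++;
            /* Stop early once the memory budget is exceeded. */
            if (fetch_limit_bytes > 0 && bytes_sent >= fetch_limit_bytes)
                break;
        }

        printf("returned %d of %d requested rows (~%zu bytes)\n",
               sent, count, bytes_sent);
        return 0;
    }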

This works as expected: in my tests with tables of large records, 
backends executing selects were always restricted to about 2 GB of RAM 
overall (without the patch, memory consumption easily grows up to 8 GB). 
However, now that we can get fewer tuples from the executor than 
expected, we have to recheck whether these are all the tuples we can 
get. I've introduced an es_eof EState field to signal that there are no 
more tuples. I don't know if it's the best way.
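
The point is that a short batch is now ambiguous, so the producer has to 
report end-of-data explicitly (the patch does this via es_eof in EState). 
A standalone sketch of that, with made-up names (FetchResult, 
fetch_batch) rather than the patch's actual code:

    /*
     * Standalone sketch: when a fetch may stop early because of a memory
     * budget, getting fewer rows than requested no longer means
     * end-of-data, so the producer reports EOF explicitly. Names are
     * illustrative only.
     */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct FetchResult
    {
        int     nrows;      /* rows actually returned */
        bool    eof;        /* true only if the source is really exhausted */
    } FetchResult;

    /* Pretend source with 250 rows, capped at 100 rows per fetch. */
    static FetchResult
    fetch_batch(int *cursor, int requested)
    {
        const int   total_rows = 250;
        const int   budget_rows = 100;  /* stand-in for a byte budget */
        FetchResult res = {0, false};

        while (res.nrows < requested && res.nrows < budget_rows &&
               *cursor < total_rows)
        {
            (*cursor)++;
            res.nrows++;
        }
        res.eof = (*cursor >= total_rows);
        return res;
    }

    int
    main(void)
    {
        int     cursor = 0;

        for (;;)
        {
            FetchResult r = fetch_batch(&cursor, 200);

            printf("got %d rows, eof=%d\n", r.nrows, r.eof);
            /* A short batch alone is not enough: keep fetching until eof. */
            if (r.eof)
                break;
        }
        return 0;
    }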

-- 
Best regards,
Alexander Pyhalov,
Postgres Professional
