Kirill,
Cursors do not provide a way to limit the fetch size based on memory consumption.
Imagine a table like (id int8, value jsonb).
If we use "FETCH 1000", it might require 1 GiB on the client if every row contains a 1 MiB JSON value.
If the client plays it defensively and goes for "FETCH 10", it might take a long time when the JSON values are small, because of all the extra round trips.
Neither cursors nor the extended query protocol solve this problem.
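
For concreteness, a minimal sketch of the two options (the table name "docs" is made up for illustration; the schema is the one above):

    BEGIN;
    DECLARE cur CURSOR FOR SELECT id, value FROM docs;
    -- Option A: big batches. With ~1 MiB values this can need
    -- ~1 GiB of client-side memory per batch.
    FETCH 1000 FROM cur;
    -- Option B: defensive small batches. With small values this
    -- wastes time on round trips.
    FETCH 10 FROM cur;
    COMMIT;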
Hi!
Thank you for explaining this. I think you can propose your patch now; I don't see any major show-stoppers.
The only issue is that this would be a PostgreSQL extension, which would impose extra maintenance burden on core hackers.
Also, note that we do not know individual row sizes in advance, because tuple attributes may be TOASTed. So your query would return the first time it tries to allocate more than $limit bytes, not before. Or at least a straightforward implementation of this feature would.
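
If it helps the discussion, here is roughly how such semantics could look; the MAX_RESULT_SIZE clause below is purely hypothetical and not existing PostgreSQL syntax:

    -- Hypothetical syntax, for illustration only:
    FETCH 1000 FROM cur MAX_RESULT_SIZE '100MB';
    -- Since attribute sizes are unknown before detoasting, the
    -- server can only stop once the accumulated batch exceeds
    -- 100MB, so the limit may be overshot by up to one row.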
Best regards,
Kirill Reshke