On 19/10/2023 02:00, Stephen Frost wrote:
> Greetings,
>
> * Andrei Lepikhov (a.lepikhov@postgrespro.ru) wrote:
>> On 29/9/2023 09:52, Andrei Lepikhov wrote:
>>> On 22/5/2023 22:59, reid.thompson@crunchydata.com wrote:
>>>> Attach patches updated to master.
>>>> Pulled from patch 2 back to patch 1 a change that was also pertinent
>>>> to patch 1.
>>> +1 to the idea, but I have doubts about the implementation.
>>>
>>> I have a question. I see the feature triggers an ERROR when the
>>> memory limit is exceeded, and the enclosing PG_CATCH() section will
>>> handle the error. As far as I can see, many such sections allocate
>>> memory. What if some routine, like CopyErrorData(), exceeds the limit
>>> too? In that case we could keep repeating the error all the way up to
>>> the top PG_CATCH(). Is this the correct behaviour? Maybe
>>> exceeds_max_total_bkend_mem() should check for recursion and allow
>>> error handlers to slightly exceed this hard limit?
>
>> With the patch in the attachment, I try to show the sort of problem
>> I'm worried about. In some PG_CATCH() sections we call CopyErrorData()
>> (which allocates some memory) before aborting the transaction. So the
>> allocation error can throw us out of this section before the abort. We
>> expect a soft ERROR message but will face harder consequences.
>
> While it's an interesting idea to consider making exceptions to the
> limit, and perhaps we'll do that (or have some kind of 'reserve' for
> such cases), this isn't really any different than today, is it? We
> might have a malloc() failure in the main path, end up in PG_CATCH() and
> then try to do a CopyErrorData() and have another malloc() failure.
>
> If we can rearrange the code to make this less likely to happen, by
> doing a bit more work to free() resources used in the main path before
> trying to do new allocations, then, sure, let's go ahead and do that,
> but that's independent from this effort.
I agree that the rearranging work can be done independently. The code in
the letter above was just a demo of the case I'm worried about.
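
To make the scenario concrete, here is a simplified sketch of that
pattern (do_some_work() and oldcontext are placeholders, not code from
the patch; the CopyErrorData() handling follows the usual recipe):

    MemoryContext oldcontext = CurrentMemoryContext;

    PG_TRY();
    {
        do_some_work();             /* placeholder for the main path */
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* CopyErrorData() allocates, so it must not run in ErrorContext */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();    /* may itself exceed the limit! */
        FlushErrorState();

        /* never reached if the limit fires above */
        AbortCurrentTransaction();

        /* ... inspect or re-throw edata ... */
    }
    PG_END_TRY();
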
IMO, what should be implemented here is a recursion level for the memory
limit: if we hit the limit again while processing an error, we should
ignore it.
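
A minimal sketch of what I mean, where only exceeds_max_total_bkend_mem()
is from the patch and every other name is an assumption of mine:

    static int      error_handling_depth = 0;  /* assumed: ++/-- around
                                                 * error processing */
    static uint64   total_bkend_mem = 0;        /* assumed accounting */
    static uint64   max_total_bkend_mem = 0;    /* assumed GUC, 0 = off */

    static bool
    exceeds_max_total_bkend_mem(uint64 request)
    {
        if (max_total_bkend_mem == 0)
            return false;           /* limit disabled */

        /*
         * The point I'm making: while we are already handling an ERROR,
         * let the allocation through, so that handlers such as
         * CopyErrorData() cannot re-trigger the limit recursively.
         */
        if (error_handling_depth > 0)
            return false;

        return total_bkend_mem + request > max_total_bkend_mem;
    }
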
I imagine custom extensions that use PG_CATCH() and allocate some data
there. At the very least, we could raise the error level to FATAL in
that case.
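
As a sketch of that fallback (same assumed names; here the check site
escalates instead of ignoring the limit):

    if (exceeds_max_total_bkend_mem(request))
        ereport(error_handling_depth > 0 ? FATAL : ERROR,
                (errcode(ERRCODE_OUT_OF_MEMORY),
                 errmsg("backend memory limit exceeded")));
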
--
regards,
Andrey Lepikhov
Postgres Professional