Re: Add the ability to limit the amount of memory that can be allocated to backends. - Mailing list pgsql-hackers

From James Hunter
Subject Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date
Msg-id CAJVSvF7pJMLwQNFcG2dcY0ZgUb_5pVxp6+bkzy3Bid_4vRDong@mail.gmail.com
In response to Re: Add the ability to limit the amount of memory that can be allocated to backends.  (Jim Nasby <jnasby@upgrade.com>)
List pgsql-hackers
On Mon, Jan 6, 2025 at 1:07 PM Jim Nasby <jnasby@upgrade.com> wrote:
>
> I’ve been saying “workload management” for lack of a better term, but my initial suggestion upthread was to simply
> stop allowing new transactions to start if global work_mem consumption exceeded some threshold. That’s simplistic
> enough that I wouldn’t really consider it “workload management”. Maybe “deferred execution” would be a better name.
> The only other thing it’d need is a timeout on how long a new transaction could sit in limbo.

Yes, this seems like a good thing to do, but we need to handle
"work_mem", by itself, first.

The problem is that it's just too easy for a query to blow up work_mem
consumption, almost instantaneously. By the time we notice that we're
low on working memory, pausing new transactions may not be sufficient.
We could already be in the middle of a giant Hash Join, for example,
and the hash table is just going to continue to grow...

Before we can solve the problem you describe, we need to be able to
limit the work_mem consumption by an in-progress query.

James


