Re: Add the ability to limit the amount of memory that can be allocated to backends. - Mailing list pgsql-hackers

From:           Justin Pryzby
Subject:        Re: Add the ability to limit the amount of memory that can be allocated to backends.
Msg-id:         20220831173457.GX31833@telsasoft.com
In response to: Add the ability to limit the amount of memory that can be allocated to backends.  (Reid Thompson <reid.thompson@crunchydata.com>)
Responses:      Re: Add the ability to limit the amount of memory that can be allocated to backends.
List:           pgsql-hackers
On Wed, Aug 31, 2022 at 12:50:19PM -0400, Reid Thompson wrote:
> Hi Hackers,
>
> Add the ability to limit the amount of memory that can be allocated to
> backends.
>
> This builds on the work that adds backend memory allocated to
> pg_stat_activity
> https://www.postgresql.org/message-id/67bb5c15c0489cb499723b0340f16e10c22485ec.camel%40crunchydata.com
> Both patches are attached.

You should name the patches with different prefixes, like 001, 002, 003.
Otherwise, cfbot may try to apply them in the wrong order.  git format-patch
is the usual tool for that.

> +        Specifies a limit to the amount of memory (MB) that may be allocated to

MB is just the default unit, right?  The user should be allowed to write
max_total_backend_memory='2GB'.

> +        backends in total (i.e. this is not a per user or per backend limit).
> +        If unset, or set to 0 it is disabled. A backend request that would push
> +        the total over the limit will be denied with an out of memory error
> +        causing that backends current query/transaction to fail. Due to the dynamic

backend's

> +        nature of memory allocations, this limit is not exact. If within 1.5MB of
> +        the limit and two backends request 1MB each at the same time both may be
> +        allocated exceeding the limit. Further requests will not be allocated until

allocated, and exceed the limit

> +bool
> +exceeds_max_total_bkend_mem(uint64 allocation_request)
> +{
> +	bool		result = false;
> +
> +	if (MyAuxProcType != NotAnAuxProcess)
> +		return result;

The double negative is confusing, so could use a comment (a possible wording
is sketched below).

> +	/* Convert max_total_bkend_mem to bytes for comparison */
> +	if (max_total_bkend_mem &&
> +		pgstat_get_all_backend_memory_allocated() +
> +		allocation_request > (uint64)max_total_bkend_mem * 1024 * 1024)
> +	{
> +		/*
> +		 * Explicitely identify the OOM being a result of this
> +		 * configuration parameter vs a system failure to allocate OOM.
> +		 */
> +		elog(WARNING,
> +			 "request will exceed postgresql.conf defined max_total_backend_memory limit (%lu > %lu)",
> +			 pgstat_get_all_backend_memory_allocated() +
> +			 allocation_request, (uint64)max_total_bkend_mem * 1024 * 1024);

I think it should be ereport() rather than elog(), which is internal-only and
not translated (a rough ereport() sketch is below).

> +		{"max_total_backend_memory", PGC_SIGHUP, RESOURCES_MEM,
> +			gettext_noop("Restrict total backend memory allocations to this max."),
> +			gettext_noop("0 turns this feature off."),
> +			GUC_UNIT_MB
> +		},
> +		&max_total_bkend_mem,
> +		0, 0, INT_MAX,
> +		NULL, NULL, NULL

I think this needs a maximum like INT_MAX/1024/1024.

> +uint64
> +pgstat_get_all_backend_memory_allocated(void)
> +{
...
> +	for (i = 1; i <= NumBackendStatSlots; i++)
> +	{

It's looping over every backend for each allocation.  Do you know if there's
any performance impact of that?  I think it may be necessary to track the
current allocation size in shared memory (with atomic increments?); a rough
sketch of that is below, too.  Maybe decrements would need to be exactly
accounted for, or otherwise Assert() that the value is not negative.  I don't
know how expensive it'd be to have conditionals for each decrement, but maybe
the value would only be decremented at strategic times, like at transaction
commit or backend shutdown.

-- 
Justin
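A possible comment for the MyAuxProcType check mentioned above.  This is only
a sketch of one wording, reusing the lines from the patch:

	/*
	 * Auxiliary processes (checkpointer, bgwriter, walwriter, ...) are not
	 * subject to the limit.  MyAuxProcType == NotAnAuxProcess means this is
	 * a regular backend, so only in that case do we fall through and compare
	 * the running total against max_total_backend_memory.
	 */
	if (MyAuxProcType != NotAnAuxProcess)
		return result;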
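A rough sketch of the ereport() form suggested above, reusing the variable
names from the patch.  The choice of errcode and the %llu casts are only
illustrations, not a firm proposal:

		/* sketch: identify this as a refusal caused by the GUC, not a system OOM */
		ereport(WARNING,
				(errcode(ERRCODE_OUT_OF_MEMORY),
				 errmsg("request will exceed max_total_backend_memory limit (%llu > %llu)",
						(unsigned long long) (pgstat_get_all_backend_memory_allocated() +
											  allocation_request),
						(unsigned long long) max_total_bkend_mem * 1024 * 1024)));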
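Finally, a rough sketch of the shared-memory counter idea, using the
pg_atomic_* primitives.  The names total_backend_mem, exceeds_limit and
release are made up for illustration; only max_total_bkend_mem comes from the
patch, and the counter would still have to be allocated and zeroed in shared
memory at startup (e.g. with ShmemInitStruct):

	/* Sketch only: illustrative names, not from the patch. */
	#include "postgres.h"

	#include "port/atomics.h"

	extern int	max_total_bkend_mem;	/* GUC from the patch, in MB */

	/* single counter living in shared memory, initialized to 0 at startup */
	static pg_atomic_uint64 *total_backend_mem;

	/* on each allocation: reserve first, then check the new total */
	static bool
	exceeds_limit(uint64 request)
	{
		uint64		newtotal;

		if (max_total_bkend_mem == 0)
			return false;		/* feature disabled */

		newtotal = pg_atomic_add_fetch_u64(total_backend_mem, request);
		if (newtotal > (uint64) max_total_bkend_mem * 1024 * 1024)
		{
			/* undo the reservation; the caller should fail the allocation */
			pg_atomic_sub_fetch_u64(total_backend_mem, request);
			return true;
		}

		return false;
	}

	/* on free, or batched at commit/backend exit if per-chunk decrements are too hot */
	static void
	release(uint64 nbytes)
	{
		uint64		old = pg_atomic_fetch_sub_u64(total_backend_mem, nbytes);

		Assert(old >= nbytes);	/* the counter must never go "negative" */
	}

The add-then-undo ordering keeps the reservation and the check lock-free; the
cost is that two concurrent requests near the limit can both briefly push the
counter over it, which matches the "this limit is not exact" wording in the
proposed docs.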