Re: [GENERAL] Running out of memory the hard way ... - Mailing list pgsql-general

From Albe Laurenz
Subject Re: [GENERAL] Running out of memory the hard way ...
Msg-id A737B7A37273E048B164557ADEF4A58B539E65BC@ntex2010a.host.magwien.gv.at
In response to [GENERAL] Running out of memory the hard way ...  (Bill Moran <wmoran@potentialtech.com>)
Responses Re: [GENERAL] Running out of memory the hard way ...  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Bill Moran wrote:
> If you run a transaction with lots of server side functions that use a
> lot of memory, this can trigger the OOM killer in Linux, causing the
> PostgreSQL backend to receive a SIGKILL and all the associated bad
> stuff.
> 
> Tuning the OOM killer is not sufficient. No setting I've found for the
> OOM killer will guarantee that it won't SIGKILL a process that's essentially
> untenable anyway (because at some point it's going to use more memory than
> actually exists on the system). Additionally, "add more RAM" doesn't
> "solve" the problem; it only delays it until datasets scale up to even
> larger transactions that use even more memory.
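
For reference, the OOM killer tuning usually recommended for PostgreSQL
exempts the postmaster and leaves the backends killable, along these
lines (a sketch using the documented PG_OOM_ADJUST_* knobs; the script
details are illustrative):

    # In the service start script (needs root to lower the score):
    echo -1000 > /proc/self/oom_score_adj   # exempt the postmaster
    # Each forked backend resets itself to the default, killable score:
    export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
    export PG_OOM_ADJUST_VALUE=0
    pg_ctl start -D /var/lib/postgresql/data

That protects the postmaster, but as you say it does nothing for a
single runaway backend.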

I routinely set vm.overcommit_memory = 2 and configure vm.overcommit_ratio
so that the kernel does not try to commit more memory than there is in
the machine.

That should get rid of the problem, of course at the risk of leaving
some memory unused.
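
In sysctl terms (a minimal sketch; the ratio is an example and has to be
sized to the machine):

    # Disable overcommit; the kernel then enforces
    #   CommitLimit = swap + RAM * overcommit_ratio / 100
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=90   # e.g. 90% of RAM on a swapless host
    # Persist the settings in /etc/sysctl.conf, then verify:
    grep Commit /proc/meminfo          # CommitLimit vs. Committed_AS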

> This is particularly prevalent with PostGIS, because some PostGIS functions
> are very memory intensive, but I'd be willing to bet real money that I could
> trigger it with just about any stored procedure that allocates memory in
> such a way that it doesn't get reclaimed until the transaction completes.
[...]
> 
> What I feel is the best way to mitigate the situation is to have some
> setting that limits the maximum RAM any backend can consume. Attempting to
> exceed this limit would cause an error and a rollback for that particular
> backend without affecting other backends.
[...]
>
> My first question: does this setting exist somewhere and I'm simply not
> finding it for some reason?
> 
> Assuming this doesn't exist (I haven't found it), my next question is
> whether there's a philosophical or technical reason that such a feature
> doesn't exist. Should I take this discussion to -hackers?

I don't think that there is such a setting.

work_mem sets a limit per operation, but that is a soft limit that
PostgreSQL server code can choose to ignore if it pleases.
Moreover, it does not limit the *total* memory a backend can use.
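
To make that concrete: a single query can contain several sort or hash
nodes, and each of them is allowed up to work_mem. A sketch (t1, t2 and
the setting are made up for illustration):

    psql -c "SET work_mem = '256MB';
             EXPLAIN (ANALYZE)
             (SELECT * FROM t1 ORDER BY a)
             UNION ALL
             (SELECT * FROM t2 ORDER BY b);"
    # EXPLAIN shows two Sort nodes; each may use up to work_mem,
    # so the backend as a whole is not capped at 256MB.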

I'd delegate that problem to the operating system, which, after all,
should know best how much memory a process uses.
And I don't see a big advantage in a PostgreSQL-generated error message
over an "out of memory" error propagated from the operating system.

Of course, if there is no way to limit the amount of memory per process
(excluding shared memory!), you have a point.
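
The closest OS-level knob I can think of is RLIMIT_AS ("ulimit -v"), and
that is exactly where the caveat bites: the address-space limit counts
the shared memory mapping too, so it must be set well above
shared_buffers. A rough sketch with illustrative numbers:

    # In the postmaster's start script; the limit is inherited by every backend.
    # ulimit -v takes kilobytes, and RLIMIT_AS includes shared_buffers.
    ulimit -v $((6 * 1024 * 1024))   # 6 GiB of address space per process
    pg_ctl start -D /var/lib/postgresql/data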

Yours,
Laurenz Albe
