Thread: PostgreSQL 9.5 encountered OOM
On 12/20/2017 04:08 PM, mark wrote:
> I have set shared_buffers to 1/4 of memory.
> work_mem is 2% of memory.
> max_connections is 50.

That means if you have all 50 connections active, they may easily
consume 100% of memory, because 50 * 2% = 100%. It's even easier if the
connections are executing complex queries, because each query may use
multiple work_mem buffers. So 2% seems a bit too high.

> memory size is 16GB.
> the postgresql process used over 70% of memory and an OOM occurred.

So, did a single process use 70% of memory, or all postgres processes
combined?

If just a single process, it might be a poor plan choice (e.g. a hash
aggregate may easily cause that).

If all processes combined, then perhaps it's due to work_mem being too
high.

> what should I do to deal with this problem?

Hard to say, until you provide enough information.

regards

--
Tomas Vondra                      http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
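For concreteness, plugging in the numbers above (a back-of-the-envelope
sketch, assuming every connection runs just one sort or hash at a time):

    work_mem          =  2% of 16GB   ~  328MB per sort/hash operation
    50 connections    *  328MB        ~  16GB
    + shared_buffers  =  4GB          => well beyond physical RAM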
On 12/20/2017 at 16:08, mark wrote:
> the postgresql process used over 70% of memory and an OOM occurred.
> what should I do to deal with this problem?

https://www.postgresql.org/docs/current/static/kernel-resources.html
18.4.4. Linux Memory Overcommit

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
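The short version of what that section recommends, as a sketch (the
overcommit_ratio value is only an example, tune it to your RAM and swap):

    # /etc/sysctl.conf: make the kernel fail allocations cleanly instead
    # of letting the OOM killer terminate postgres backends
    vm.overcommit_memory = 2
    # with mode 2, commit limit = swap + overcommit_ratio% of RAM
    vm.overcommit_ratio = 80

    # apply without reboot:  sysctl -p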
On Wednesday, December 20, 2017 11:08:51 PM PST mark wrote:
> I have set shared_buffers to 1/4 of memory. work_mem is 2% of memory.
> max_connections is 50. Memory size is 16GB. The postgresql process used
> over 70% of memory and an OOM occurred. What should I do to deal with
> this problem?
IIRC work_mem is a PER CONNECTION setting, which means every connection can use 2% of your total memory. I'd try setting work_mem to something like 16MB or 32MB and see if the queries still execute properly. You want to set work_mem as high as needed to allow proper query execution, but as low as possible to avoid running out of memory.
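One way to test that is to lower it for just your session and watch for
sorts spilling to disk; an illustrative sketch (the query is made up,
using the test table mentioned in this thread, substitute your own):

    SET work_mem = '32MB';
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM test ORDER BY testid;
    -- "Sort Method: external merge  Disk: ..." in the output means the
    -- sort no longer fits in work_mem and spilled to disk
    RESET work_mem;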
On Wednesday, December 20, 2017 9:59:24 AM PST David G. Johnston wrote:
> On Wed, Dec 20, 2017 at 9:44 AM, Uwe <uwe@oss4u.com> wrote:
>> IIRC work_mem is a PER CONNECTION setting,
>
> The docs for this setting clearly state that a single connection/session
> can itself use multiple times this value for a single query.
>
> https://www.postgresql.org/docs/10/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY
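Indeed, a single statement can contain several sort and hash nodes, each
entitled to its own work_mem; for example (hypothetical query and table
names, just to illustrate):

    SELECT o.customer_id, count(*)
    FROM orders o
    JOIN items i ON i.order_id = o.id   -- hash join: one work_mem allowance
    GROUP BY o.customer_id              -- hash aggregate: a second
    ORDER BY count(*) DESC;             -- sort: a third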
So even worse for the OP. No wonder he runs out of memory. Thanks for the correction.
Uwe
On Wed, Dec 20, 2017 at 9:25 AM, Andreas Kretschmer
<andreas@a-kretschmer.de> wrote:
> On 12/20/2017 at 16:08, mark wrote:
>> the postgresql process used over 70% of memory and an OOM occurred.
>> what should I do to deal with this problem?
>
> https://www.postgresql.org/docs/current/static/kernel-resources.html
> 18.4.4. Linux Memory Overcommit

More specifically:
https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

To reiterate the basic message there: at no time should the OS think that
killing big processes is OK. It is NOT. Your OS should never be
terminating big processes all on its own. It's far better to starve your
work_mem and keep all the processes running than to ever run any out of
memory. But if one does run out of memory, it should only cause a problem
for that one process, not for the background writer etc.
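That same doc section also shows how to shield the postmaster, but not
its child backends, from the OOM killer; roughly (a sketch, adapt the
paths to your own init script):

    # in the postmaster's startup script, before launching postgres:
    echo -1000 > /proc/self/oom_score_adj

    # and in the postmaster's environment, so regular backends revert
    # to a normal OOM priority:
    export PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
    export PG_OOM_ADJUST_VALUE=0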
On 12/20/2017 at 17:24, Tomas Vondra wrote:
> That means if you have all 50 connections active, they may easily
> consume 100% of memory, because 50 * 2% = 100%. It's even easier if the
> connections are executing complex queries, because each query may use
> multiple work_mem buffers. So 2% seems a bit too high.

Nice hint! You are right.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
Hi Tomas, Uwe, David G.,
> So, did a single process use 70% of memory, or all postgres processes
> combined?
All postgres processes combined used over 70% of memory.
> If just a single process, it might be a poor plan choice (e.g. a hash
> aggregate may easily cause that).
It is because the DELETE statements used a lot of memory.
They are the following:
DELETE FROM test
WHERE testid IN (SELECT r_id FROM test1
                 WHERE p_id_id IN (SELECT re_id FROM ap_en
                                   WHERE link = $1));

DELETE FROM test WHERE test1_id = $1 AND id = $2;
Because the first DELETE filters through nested subselects, it may be what drives memory usage so high.
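I plan to confirm that with something like the following (a sketch; the
ROLLBACK keeps the EXPLAIN ANALYZE from actually deleting anything, and
'x' stands in for the real $1 parameter):

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
        DELETE FROM test
        WHERE testid IN (SELECT r_id FROM test1
                         WHERE p_id_id IN (SELECT re_id FROM ap_en
                                           WHERE link = 'x'));
    ROLLBACK;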
If I decrease work_mem globally, it will hurt the DELETE's execution efficiency,
so I would like to set work_mem per session,
and use the OS level (cgconfig) to cap the memory usage of all postgres processes.
How about this approach? Something like the sketch below:
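    -- per-transaction work_mem, only for the big DELETE
    -- (the 256MB value is just a placeholder):
    BEGIN;
    SET LOCAL work_mem = '256MB';
    DELETE FROM test WHERE test1_id = $1 AND id = $2;
    COMMIT;

    # /etc/cgconfig.conf: cap all postgres processes (cgroups v1);
    # group name and limit are placeholders, processes must still be
    # assigned to the group, e.g. via cgrules.conf
    group postgres {
        memory {
            # 12GB cap, in bytes
            memory.limit_in_bytes = 12884901888;
        }
    }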
Regards,
Mark