Re: repeated out of shared memory error - not related to max_locks_per_transaction - Mailing list pgsql-admin

From MichaelDBA
Subject Re: repeated out of shared memory error - not related to max_locks_per_transaction
Date
Msg-id 5B51E362.1050700@sqlexec.com
In response to Re: repeated out of shared memory error - not related to max_locks_per_transaction  (Fabio Pardi <f.pardi@portavita.eu>)
Responses Re: repeated out of shared memory error - not related to max_locks_per_transaction  (MichaelDBA <MichaelDBA@sqlexec.com>)
Re: repeated out of shared memory error - not related to max_locks_per_transaction  (Fabio Pardi <f.pardi@portavita.eu>)
List pgsql-admin
Wrong again, Fabio.  PostgreSQL is not coded to manage memory the way you think it is: work_mem is a per-operation limit, not a budget checked against available RAM.  Here is a quote from Citus about the dangers of setting work_mem too high.

When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you're running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.

When you see an out of memory error, you either want to increase the overall RAM on the machine by upgrading to a larger instance, or you want to decrease the amount of memory that work_mem uses. Yes, you read that right: when you run out of memory it is often better to decrease work_mem rather than increase it, since work_mem is the amount of memory each operation can consume, and too many operations each leveraging up to that much memory is what exhausts the machine.


https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
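To make the multiplication concrete, a rough back-of-envelope sketch (the connection count, node count, and sizes below are illustrative, not from this thread):

  -- work_mem is a per-sort/per-hash limit, not a global cap, so it multiplies:
  --   100 connections x 4 sort/hash nodes per query x 64MB work_mem ~= 25GB
  -- which the backends will happily request even on a machine with far less RAM.
  SET work_mem = '64MB';   -- per-operation limit, this session only
  SHOW work_mem;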

Regards,
Michael Vitale
Friday, July 20, 2018 9:19 AM

Nope Michael,

if 'stuff' gets spilled to disk, it does not end up in an error. Postgres will silently write a file to disk for the time being and then delete it when your operation is finished.

period.

Depending on your log settings (see log_temp_files), it might appear in the logs as a 'temporary file: ...' entry.
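For completeness, a minimal sketch of how to surface those temp files (assuming superuser access; the zero threshold, which logs every temp file regardless of size, is just for illustration):

  -- Log every temporary file written (threshold in kB; 0 = log all, -1 = off):
  ALTER SYSTEM SET log_temp_files = 0;
  SELECT pg_reload_conf();

  -- Or check cumulative spill activity per database:
  SELECT datname, temp_files, temp_bytes FROM pg_stat_database;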


regards,

fabio pardi



On 20/07/18 15:00, MichaelDBA wrote:

Friday, July 20, 2018 9:00 AM
I do not think that is true.  Stuff only gets spilled to disk when a work area would exceed the work_mem constraint.  Those allocations are not constrained by what real memory is actually available, hence the memory error!  A backend will keep trying to get memory, even if none is available, as long as the work_mem threshold has not been reached.
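A sketch of why a single query can claim several multiples of work_mem (the table and column names are hypothetical):

  -- Each Sort or Hash node in a plan may use up to work_mem independently,
  -- so this one query can hold roughly 2 x work_mem at its peak:
  EXPLAIN SELECT a.id, b.val
  FROM a JOIN b USING (id)   -- Hash node: one work_mem-sized hash table
  ORDER BY b.val;            -- Sort node: another work_mem-sized work area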

Regards,
Michael Vitale



Friday, July 20, 2018 8:47 AM

work_mem cannot be the cause of it, for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk.
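An easy way to watch the spill happen is to run the same sort under two work_mem settings (big_table and some_col are hypothetical names):

  SET work_mem = '256MB';
  EXPLAIN (ANALYZE) SELECT * FROM big_table ORDER BY some_col;
  --   Sort Method: quicksort  Memory: ...kB

  SET work_mem = '1MB';
  EXPLAIN (ANALYZE) SELECT * FROM big_table ORDER BY some_col;
  --   Sort Method: external merge  Disk: ...kB   (spilled, but no error)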


regards,

fabio pardi



On 20/07/18 14:35, MichaelDBA wrote:

Friday, July 20, 2018 8:35 AM
Perhaps your "work_mem" setting is causing the memory problems.  Try reducing it to see if that alleviates the problem.
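If it helps, the usual knobs for lowering it, from narrowest to widest scope (reporting_user is a hypothetical role name):

  SET work_mem = '8MB';                              -- this session only
  ALTER ROLE reporting_user SET work_mem = '8MB';    -- one role's future sessions
  ALTER SYSTEM SET work_mem = '8MB';                 -- server-wide default
  SELECT pg_reload_conf();                           -- apply without a restart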

Regards,
Michael Vitale


Friday, July 20, 2018 8:32 AM
I would also look up the definitions of shared buffers and effective cache. If I remember correctly, you can think of shared buffers as the memory PostgreSQL reserves for its own page cache, while effective cache is an estimate of how much memory is available for caching overall: shared buffers plus whatever the OS can use to cache files in memory. So effective cache should be equal to or larger than shared buffers. Effective cache is only used to help with SQL planning; it does not allocate anything.

Double check the documentation.
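For reference, a sketch of inspecting and setting the two (the sizes are illustrative for a 16GB box, not a recommendation):

  SHOW shared_buffers;
  SHOW effective_cache_size;

  ALTER SYSTEM SET shared_buffers = '4GB';         -- a real allocation; needs a restart
  ALTER SYSTEM SET effective_cache_size = '12GB';  -- planner estimate only; a reload suffices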

Lance

Sent from my iPad


