Thread: work_mem
PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
On Wed, Mar 31, 2021 at 02:04:07PM +0000, Campbell, Lance wrote:
> PostgreSQL 12
>
> Is there a query that will tell us at any given time what amount of PostgreSQL
> memory is being used for work_mem?

Well, you can look at the process memory usage via 'ps', but I don't know a way to see current work_mem allocation. SHOW work_mem does show the current setting, but that isn't the allocated amount.

--
Bruce Momjian <bruce@momjian.us>  https://momjian.us
EDB                               https://enterprisedb.com

If only the physical world exists, free will is an illusion.
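A minimal sketch of what Bruce describes (nothing here reports work_mem allocations directly): list the active backend PIDs from pg_stat_activity, then inspect each process with ps on the server. The ps column choice is just an example.

-- List active backends and their PIDs (pg_stat_activity is a standard view).
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE state = 'active';

-- Then, on the database host (shell, not SQL), check per-process memory, e.g.:
--   ps -o pid,rss,vsz,cmd -p <pid>
-- RSS/VSZ include shared_buffers pages the backend has touched, so this is
-- only a rough upper bound, not the work_mem actually in use.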
You can show any or all settings with the command SHOW.

show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.
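If you want the setting together with its unit and source, rather than the bare SHOW output, a small sketch using the standard pg_settings view:

-- Same information as SHOW, but with unit, context and source.
SELECT name, setting, unit, context, source
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'max_connections');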
PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
Please advise in detail; it will help all PostgreSQL users.

You can show any or all settings with the command SHOW.
show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.

On 31 March 2021 16:04:07 CEST, "Campbell, Lance" <lance@illinois.edu> wrote:

PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
Best Regards,
Sachin Kumar
Memory is allocated dynamically per internal work_mem buffer requests.

SASIKUMAR Devaraj wrote on 4/2/2021 8:37 AM:
Hi All,

Is work_mem allocated as soon as a client session is established, or only when a sort happens in that particular session? Please advise on the internal behavior; this will help me configure my database memory for a high number of connections.

Regards,
Sasi

On Wed, Mar 31, 2021 at 8:52 PM, Holger Jakobs <holger@jakobs.com> wrote:

You can show any or all settings with the command SHOW.
show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.

On 31 March 2021 16:04:07 CEST, "Campbell, Lance" <lance@illinois.edu> wrote:

PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
On Fri, Apr 2, 2021 at 6:09 PM, MichaelDBA <MichaelDBA@sqlexec.com> wrote:

Memory is allocated dynamically per internal work_mem buffer requests.
SASIKUMAR Devaraj wrote on 4/2/2021 8:37 AM:

Hi All,

Is work_mem allocated as soon as a client session is established, or only when a sort happens in that particular session? Please advise on the internal behavior; this will help me configure my database memory for a high number of connections.

Regards,
Sasi

On Wed, Mar 31, 2021 at 8:52 PM, Holger Jakobs <holger@jakobs.com> wrote:

You can show any or all settings with the command SHOW.
show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.

On 31 March 2021 16:04:07 CEST, "Campbell, Lance" <lance@illinois.edu> wrote:

PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
That is a common misconception. It is not one work_mem buffer per SQL, but one work_mem buffer per required operation within that SQL. So you can have many work_mem buffers per SQL statement!

Right from the official docs:

work_mem sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes... Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.

Regards,
Michael Vitale
SASIKUMAR Devaraj wrote on 4/2/2021 8:45 AM:
Thanks Michael.

For example, if work_mem is 4MB and I had 300 connections connected to the DB, the total memory requirement is 1.2 GB. So as per my understanding this 1.2 GB is not allocated as soon as the 300 connections are established, but it may vary from 0 to 1.2 GB depending on the operations from clients. Please confirm.

Regards,
Sasi

On Fri, Apr 2, 2021 at 6:09 PM, MichaelDBA <MichaelDBA@sqlexec.com> wrote:

Memory is allocated dynamically per internal work_mem buffer requests.

SASIKUMAR Devaraj wrote on 4/2/2021 8:37 AM:

Hi All,

Is work_mem allocated as soon as a client session is established, or only when a sort happens in that particular session? Please advise on the internal behavior; this will help me configure my database memory for a high number of connections.

Regards,
Sasi

On Wed, Mar 31, 2021 at 8:52 PM, Holger Jakobs <holger@jakobs.com> wrote:

You can show any or all settings with the command SHOW.
show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.

On 31 March 2021 16:04:07 CEST, "Campbell, Lance" <lance@illinois.edu> wrote:

PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
It feels like there needs to be work_mem and work_mem_stack_size. When work memory is needed, a process “pops” a token off of a stack. When it is done processing, it “puts” the token back on the stack. If the stack is empty, then don’t allocate memory; just write to disk instead of using work_mem.
This does two key things:
1) It allows for a real-world understanding of how much memory is really needed on a day-to-day basis. You can track how often the stack is empty. You can also look at the number of temp files to see when work exceeds the work_mem allocation (see the query sketch after this list). There is no “art” to setting these values. You can use logical analysis to make choices.
2) This also prevents out of memory issues. You are protecting yourself from extreme loads.
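Even without the proposed stack/pool, a rough sketch of how spill-to-disk can already be quantified today via the cumulative statistics views (standard pg_stat_database columns, not the proposed feature):

-- Cumulative temp-file activity per database since the last stats reset.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;

-- Per-query spills can also be logged by setting log_temp_files = 0,
-- which logs every temp file created, together with its size.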
Lance
From: MichaelDBA <MichaelDBA@sqlexec.com>
Date: Friday, April 2, 2021 at 7:50 AM
To: SASIKUMAR Devaraj <sashikumard@yahoo.com>
Cc: holger@jakobs.com <holger@jakobs.com>, pgsql-admin@lists.postgresql.org <pgsql-admin@lists.postgresql.org>
Subject: Re: work_mem
That is a common misconception. It is not one work_mem buffer per SQL, but one work_mem buffer per required operation within that SQL. So you can have many work_mem buffers per SQL statement!
Right from the official docs:
work_mem sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes... Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
Regards,
Michael Vitale
SASIKUMAR Devaraj wrote on 4/2/2021 8:45 AM:
Thanks Michael
For example, if work_mem is 4MB and I had 300 connections connected to the DB, the total memory requirement is 1.2 GB.
So as per my understanding this 1.2 GB is not allocated as soon as the 300 connections are established, but it may vary from 0 to 1.2 GB depending on the operations from clients. Please confirm.
Regards
Sasi
On Fri, Apr 2, 2021 at 6:09 PM, MichaelDBA
<MichaelDBA@sqlexec.com> wrote:
Memory is allocated dynamically per internal work_mem buffer requests.
SASIKUMAR Devaraj wrote on 4/2/2021 8:37 AM:

Hi All,
Is work_mem allocated as soon as a client session is established, or only when a sort happens in that particular session? Please advise on the internal behavior; this will help me configure my database memory for a high number of connections.
Regards
Sasi
On Wed, Mar 31, 2021 at 8:52 PM, Holger Jakobs
<holger@jakobs.com> wrote:
You can show any or all settings with the command SHOW.
show work_mem;
show all;
How many times work_mem is currently allocated cannot be shown this way. Any sort operation that is currently running allocates it, so the count is not identical to the number of current queries, because a single query can allocate work_mem several times.

On 31 March 2021 16:04:07 CEST, "Campbell, Lance" <lance@illinois.edu> wrote:
PostgreSQL 12
Is there a query that will tell us at any given time what amount of PostgreSQL memory is being used for work_mem?
Thanks,
Under the Illinois Freedom of Information Act any written communication to or from university employees regarding university business is a public record and may be subject to public disclosure.
--
Holger Jakobs, Bergisch Gladbach
+49 178 9759012
- sent from mobile, therefore short -
On Fri, 2021-04-02 at 13:31 +0000, Campbell, Lance wrote:
> It feels like there needs to be work_mem and work_mem_stack_size. When work memory is
> needed a process “pops” a token off of a stack. When it is done processing it “puts”
> the token back on the stack. If the stack is empty then don’t allocate memory just
> write to disk for work_mem.
>
> This does two key things:
>
> 1) It allows for a real world understanding of how much memory is really needed on a
> day to day basis. You can track how often a stack is empty. You can also look at the
> number of temp files to see when work exceeds the work_mem allocation. There is no
> “art” to setting these values. You can use logical analysis to make choices.
>
> 2) This also prevents out of memory issues. You are protecting yourself from extreme loads.

If I get you right, you want another memory limit per session.

I see the point, but then we wouldn't need "work_mem" any more, right? What is the point of limiting the memory per plan node if we have an overall limit?

In practice, I have never had trouble with "work_mem". I usually follow my rule of thumb: max_connections * work_mem + shared_buffers < RAM

While some backend may need more, many will need less. Only bitmaps, hashes and sorts are memory hungry.

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
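A small sketch of checking that rule of thumb from inside the database, using standard settings functions; the result is the minimum RAM the rule asks for, ignoring maintenance_work_mem, parallel workers, and the OS cache:

-- max_connections * work_mem + shared_buffers, rendered human-readably.
SELECT pg_size_pretty(
         current_setting('max_connections')::bigint
         * pg_size_bytes(current_setting('work_mem'))
         + pg_size_bytes(current_setting('shared_buffers'))
       ) AS rule_of_thumb_minimum_ram;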
On Fri, Apr 2, 2021 at 04:59:16PM +0200, Laurenz Albe wrote:
> On Fri, 2021-04-02 at 13:31 +0000, Campbell, Lance wrote:
> > It feels like there needs to be work_mem and work_mem_stack_size. When work memory is
> > needed a process “pops” a token off of a stack. When it is done processing it “puts”
> > the token back on the stack. If the stack is empty then don’t allocate memory just
> > write to disk for work_mem.
> >
> > This does two key things:
> >
> > 1) It allows for a real world understanding of how much memory is really needed on a
> > day to day basis. You can track how often a stack is empty. You can also look at the
> > number of temp files to see when work exceeds the work_mem allocation. There is no
> > “art” to setting these values. You can use logical analysis to make choices.
> >
> > 2) This also prevents out of memory issues. You are protecting yourself from extreme loads.
>
> If I get you right, you want another memory limit per session.
>
> I see the point, but then we wouldn't need "work_mem" any more, right?
> What is the point of limiting the memory per plan node if we have an
> overall limit?
>
> In practice, I have never had trouble with "work_mem". I usually follow
> my rule of thumb: max_connections * work_mem + shared_buffers < RAM
>
> While some backend may need more, many will need less. Only bitmaps, hashes
> and sorts are memory hungry.

This blog entry discusses how work_mem might be improved:

https://momjian.us/main/blogs/pgblog/2018.html#December_10_2018

--
Bruce Momjian <bruce@momjian.us>  https://momjian.us
EDB                               https://enterprisedb.com

If only the physical world exists, free will is an illusion.
Thanks for sharing this thread. My suggestion of having a work_mem_stack_size is the same concept mentioned in this thread regarding having a work_mem_pool. I prefer this latter term rather than the one I was using. When the work mem pool is exhausted, PostgreSQL just uses temp files instead of work_mem. With the current statistics for temp files, and with new statistics on work mem pool usage, a user could fine-tune memory much more precisely. It would leave the “art of memory tuning” behind. The other added benefit is that people would have a better understanding of how work_mem is used, by naturally having to explain what a work_mem_pool is and when it is drawn on. There are probably a lot of PostgreSQL instances that would run faster just by having the confidence to increase the size of work_mem. I am sure many instances have this value set too low.
Lance
From: Bruce Momjian <bruce@momjian.us>
Date: Friday, April 2, 2021 at 10:07 AM
To: Laurenz Albe <laurenz.albe@cybertec.at>
Cc: Campbell, Lance <lance@illinois.edu>, MichaelDBA <MichaelDBA@sqlexec.com>, SASIKUMAR Devaraj <sashikumard@yahoo.com>, holger@jakobs.com <holger@jakobs.com>, pgsql-admin@lists.postgresql.org <pgsql-admin@lists.postgresql.org>
Subject: Re: work_mem
On Fri, Apr 2, 2021 at 04:59:16PM +0200, Laurenz Albe wrote:
> On Fri, 2021-04-02 at 13:31 +0000, Campbell, Lance wrote:
> > It feels like there needs to be work_mem and work_mem_stack_size. When work memory is
> > needed a process “pops” a token off of a stack. When it is done processing it “puts”
> > the token back on the stack. If the stack is empty then don’t allocate memory just
> > write to disk for work_mem.
> >
> > This does two key things:
> >
> > 1) It allows for a real world understanding of how much memory is really needed on a
> > day to day basis. You can track how often a stack is empty. You can also look at the
> > number of temp files to see when work exceeds the work_mem allocation. There is no
> > “art” to setting these values. You can use logical analysis to make choices.
> >
> > 2) This also prevents out of memory issues. You are protecting yourself from extreme loads.
>
> If I get you right, you want another memory limit per session.
>
> I see the point, but then we wouldn't need "work_mem" any more, right?
> What is the point of limiting the memory per plan node if we have an
> overall limit?
>
> In practice, I have never had trouble with "work_mem". I usually follow
> my rule of thumb: max_connections * work_mem + shared_buffers < RAM
>
> While some backend may need more, many will need less. Only bitmaps, hashes
> and sorts are memory hungry.
This blog entry discusses how work_mem might be improved:
https://momjian.us/main/blogs/pgblog/2018.html#December_10_2018
--
Bruce Momjian <bruce@momjian.us>  https://momjian.us
EDB                               https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Fri, Apr 2, 2021 at 03:25:04PM +0000, Campbell, Lance wrote:
> Thanks for sharing this thread. My suggestion of having a work_mem_stack_size
> is the same concept mentioned in this thread regarding having a work_mem_pool.
> I prefer this later term rather than the one I was using. When the work mem
> pool is exhausted PostgreSQL just uses temp files for work_mem. With current
> statics for temp files and with a new stats on a work mem pool usage a user
> could fine tune memory much more precisely. It would leave the “art of memory
> tuning” behind. The other added benefit is that people would have a better
> understanding of how work_mem is used by naturally having to explain what a
> work_mem_pool is and when it is drawn on. There are probably a lot of
> PostgreSQL instance that would run faster just by having the confidence to
> increase the size of work_mem. I am sure many instances have this value set to
> low.

Uh, did you read the blog before it, referenced in that blog entry:

https://momjian.us/main/blogs/pgblog/2018.html#December_7_2018

Even if we have a pool, it is still complex to configure memory, but it might help.

--
Bruce Momjian <bruce@momjian.us>  https://momjian.us
EDB                               https://enterprisedb.com

If only the physical world exists, free will is an illusion.
That is a common misconception. It is not one work_mem buffer per SQL, but one work_mem buffer per required operation within that SQL. So you can have many work_mem buffers per SQL statement!
Right from the official docs:
work_mem sets the maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes... Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
Regards,
Michael Vitale
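A hedged illustration of both points (the table and column names below are hypothetical): EXPLAIN ANALYZE shows each sort or hash node separately, and reports whether it stayed within work_mem or spilled to temp files.

SET work_mem = '4MB';

-- A query with two memory-hungry nodes: a hash join and a sort.
EXPLAIN (ANALYZE)
SELECT o.customer_id, sum(o.amount)
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY o.customer_id
ORDER BY sum(o.amount) DESC;

-- In the plan output, look for lines such as
--   Sort Method: quicksort  Memory: ...kB       (fit within work_mem)
--   Sort Method: external merge  Disk: ...kB    (spilled to temp files)
-- Each such node may use up to work_mem on its own.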
If multiple users are querying the same regions of the same tables, does each process have to read the same blocks, or can they share ("oh look, some other process has already read into memory some of the data blocks I need, so I'll just share those buffers")?
Angular momentum makes the world go 'round.
> Thanks for sharing this thread. My suggestion of having a work_mem_stack_size
> is the same concept mentioned in this thread regarding having a work_mem_pool.
> I prefer this later term rather than the one I was using. When the work mem
> pool is exhausted PostgreSQL just uses temp files for work_mem. With current
> statics for temp files and with a new stats on a work mem pool usage a user
> could fine tune memory much more precisely. It would leave the “art of memory
> tuning” behind. The other added benefit is that people would have a better
> understanding of how work_mem is used by naturally having to explain what a
> work_mem_pool is and when it is drawn on. There are probably a lot of
> PostgreSQL instance that would run faster just by having the confidence to
> increase the size of work_mem. I am sure many instances have this value set to
> low.
Uh, did you read the blog before it, referenced in that blog entry:
https://momjian.us/main/blogs/pgblog/2018.html#December_7_2018
Even if we have a pool, it is still complex to configure memory, but it
might help.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EDB https://enterprisedb.com
If only the physical world exists, free will is an illusion.
On Friday, April 2, 2021, SASIKUMAR Devaraj <sashikumard@yahoo.com> wrote:
Case 1: work_mem = 4MB, total sessions connected = 100, total = 400MB.

Or Case 2: work_mem = 4MB, total sort operations at the DB now = 50, total work_mem = 200MB.

Please clarify at a high level whether Case 1 is true or Case 2 is true?