Thread: work_mem and shared_buffers

work_mem and shared_buffers

From: "Campbell, Lance"

Does the amount of memory allocated to work_mem get subtracted from
shared_buffers?

Example:

If work_mem is 1M and there are 10 connections and shared_buffers is
100M, would that leave only 90M for shared_buffers?

Or does the amount of memory allocated for work_mem have nothing to do
with shared_buffers?

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu

Re: work_mem and shared_buffers

From: Heikki Linnakangas
Campbell, Lance wrote:
> Does the amount of memory allocated to work_mem get subtracted from
> shared_buffers?
>
> Example:
>
> If work_mem is 1M and there are 10 connections and shared_buffers is
> 100M, would that leave only 90M for shared_buffers?
>
> Or does the amount of memory allocated for work_mem have nothing to do
> with shared_buffers?

No, they're completely separate.

Note that a connection can use more than work_mem of memory. For
example, if you run a query with multiple sort or hash nodes, each such
node can allocate up to work_mem of memory.
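
To see that in a plan, here's a sketch (orders and customers are
hypothetical tables): a single query that can produce several
work_mem-eligible nodes at once.

explain analyze
select o.customer_id, count(*)
from orders o
join customers c on c.id = o.customer_id  -- hash join: may use work_mem
group by o.customer_id                    -- hash aggregate: may use another
order by count(*) desc;                   -- sort: may use a third

So in the original example, shared_buffers stays at 100M no matter what;
what grows with load is the per-backend memory on top of it.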

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com

Re: work_mem and shared_buffers

From: "Campbell, Lance"
How do you know when you should up the value of work_mem?  Do you just
play with the number?  Is there a query I could run that would tell me
if PostgreSQL is performing SQL that could use more memory for sorting?

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu



Re: work_mem and shared_buffers

From: Bill Moran
On Fri, 9 Nov 2007 12:08:57 -0600
"Campbell, Lance" <lance@uiuc.edu> wrote:

> How do you know when you should up the value of work_mem?  Do you just
> play with the number?  Is there a query I could run that would tell me
> if PostgreSQL is performing SQL that could use more memory for sorting?

With 8.2 and older it can be difficult to know, and I don't have a
specific recommendation.

8.3 includes a parameter to log Postgres's usage of temporary files.
When a sort can't fit in the available memory, it uses a temp file, so
you could use this new feature to track when sorts don't fit in
work_mem.
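
The parameter in question is log_temp_files; a sketch of how you might
use it:

  -- postgresql.conf (8.3):  log_temp_files = 0
  --   (the value is a threshold in kB; 0 logs every temp file, -1 disables)
  set log_temp_files = 0;   -- or set it per session, as a superuser
  -- a sort that spills past work_mem then logs something like:
  --   LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 2465792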

--
Bill Moran
Potential Technologies
http://www.potentialtech.com

Re: work_mem and shared_buffers

From: "Campbell, Lance"
Wow.  That is a nice logging feature in 8.3!

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu


Re: work_mem and shared_buffers

From: "Scott Marlowe"
On Nov 9, 2007 12:08 PM, Campbell, Lance <lance@uiuc.edu> wrote:
> How do you know when you should up the value of work_mem?  Do you just
> play with the number?  Is there a query I could run that would tell me
> if PostgreSQL is performing SQL that could use more memory for sorting?

Trial and error.  Note that you can set work_mem for a given session.
While it may seem that making work_mem bigger will always help, that's
not necessarily the case.
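
For example (a sketch; '16MB' is just one value to try):

  set work_mem = '16MB';  -- affects only the current session
  -- run the query under test, e.g. under explain analyze, and note the time
  reset work_mem;         -- back to the server-wide default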

Using this query:

select count(*)
from (select * from myreporttable
      where lasttime > now() - interval '1 week'
      order by random()) as l;

I did the following: (I ran the query by itself once to fill the
buffers / cache of the machine with the data)

work_mem   Time
1000kB     29215.563 ms
4000kB     20612.489 ms
8000kB     18408.087 ms
16000kB    16893.964 ms
32000kB    17681.221 ms
64000kB    22439.988 ms
125MB      23398.891 ms
250MB      25461.797 ms

Note that my best time was at around 16 Meg of work_mem.  This data set
is MUCH bigger than 16 Meg; it's around 300-400 Meg.  But work_mem
optimized out at 16 Meg.  Btw, I tried going as high as 768 Meg, and it
was still slower than 16 Meg.

This machine has 2 Gigs of RAM and is optimized for IO rather than CPU
performance.

Re: work_mem and shared_buffers

From: "Campbell, Lance"
It is amazing how, after working with databases very actively for over
8 years, I am still learning things.

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu


Re: work_mem and shared_buffers

From: "Scott Marlowe"
On Nov 9, 2007 1:19 PM, Campbell, Lance <lance@uiuc.edu> wrote:
> It is amazing, how after working with databases very actively for over 8
> years, I am still learning things.

The fun thing about postgresql is that just when you've got it figured
out, somebody will come along and improve it in such a way as to make
your previously gathered knowledge obsolete.  In a good way.

I imagine in a few years, hardly anyone using postgresql will remember
the ancient art of having eight apostrophes in a row inside your
plpgsql functions...

Re: work_mem and shared_buffers

From: Erik Jones
On Nov 9, 2007, at 1:24 PM, Scott Marlowe wrote:

> On Nov 9, 2007 1:19 PM, Campbell, Lance <lance@uiuc.edu> wrote:
>> It is amazing, how after working with databases very actively for
>> over 8
>> years, I am still learning things.
>
> The fun thing about postgresql is that just when you've got it figured
> out, somebody will come along and improve it in such a way as to make
> your previously gathered knowledge obsolete.  In a good way.
>
> I imagine in a few years, hardly anyone using postgresql will remember
> the ancient art of having eight apostrophes in a row inside your
> plpgsql functions...

Speaking of that devil, I started working with Postgres mere months
after that particular evil went away, but we still have a good bit of
plpgsql with it in production.  I've been meaning to convert it and
clean it up for a while now.  Would you, or anybody, happen to know of
any scripts out there that I could grab to make a quick,
no-brains-required job of it?

Erik Jones

Software Developer | Emma®
erik@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



Re: work_mem and shared_buffers

From: "Scott Marlowe"
On Nov 9, 2007 2:38 PM, Erik Jones <erik@myemma.com> wrote:
>
> >
> > I imagine in a few years, hardly anyone using postgresql will remember
> > the ancient art of having eight apostrophes in a row inside your
> > plpgsql functions...
>
> Speaking of that devil, I started working with Postgres mere months
> after that particular evil went away but we still have a good bit of
> plpgsql with it in production.  I've been meaning to convert it and
> clean it up for a while now.  Would you, or anybody, happen to know
> of any scripts out there that I could grab to make a quick job, no
> brains required of it?

Man, I can't think of any.  I'd assume you'd need to look for the
longest occurrence of ' marks and replace it with one delimiter, say
$1$ or something, then the next smaller set with $2$, and so on.  I
imagine one could write a script to do it.  Luckily, we only had one or
two levels of ' marks in any of our stored procs, so it was only a few
minutes each time I edited one to switch it over.
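
For anyone who hasn't seen the two styles side by side, a sketch (greet
is a made-up function):

-- pre-8.0 quoting style: the body is an ordinary quoted string, so
-- every apostrophe inside it must be doubled (and doubled again at
-- each nesting level)
create function greet(name text) returns text as '
begin
    return ''Hello, '' || name || ''!'';
end;
' language plpgsql;

-- 8.0+ dollar quoting: nothing between $$ ... $$ needs escaping
-- (pick a distinct tag such as $body$ if the body itself contains $$)
create function greet(name text) returns text as $$
begin
    return 'Hello, ' || name || '!';
end;
$$ language plpgsql;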

Re: work_mem and shared_buffers

From: Cédric Villemain
Bill Moran wrote:
> On Fri, 9 Nov 2007 12:08:57 -0600
> "Campbell, Lance" <lance@uiuc.edu> wrote:
>
>
>> How do you know when you should up the value of work_mem?  Do you just
>> play with the number?  Is there a query I could run that would tell me
>> if PostgreSQL is performing SQL that could use more memory for sorting?
>>
>
> 8.2 and older, it can be difficult to know, and I don't have a specific
> recommendation.
>
>
I haven't used it in that context before, but perhaps inotify can be
used to catch postgresql's usage of temp files ( http://inotify.aiken.cz/ ,
http://inotify.aiken.cz/?section=incron&page=about&lang=en ).
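
As a sketch (the data directory path is an assumption; adjust it to your
installation), an incrontab entry watching the temp-file directory might
look like:

  /var/lib/pgsql/data/base/pgsql_tmp IN_CREATE logger pg temp file: $#

where $# is incron's placeholder for the name of the file that triggered
the event.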

> 8.3 includes a parameter to log the usage of temporary files by Postgres.
> When a sort can't fit in the available memory, it uses a temp file, thus
> you could use this new feature to track when sorts don't fit in
> work_mem.
>
>


Re: work_mem and shared_buffers

From: Simon Riggs
On Fri, 2007-11-09 at 13:12 -0600, Scott Marlowe wrote:

> Note that my best time was at around 16 Meg work_mem.  This data set
> is MUCH bigger than 16 Meg, it's around 300-400 Meg.  But work_mem
> optimized out at 16 Meg.  Btw, I tried it going as high as 768 Meg,
> and it was still slower than 16M.

Remember that what you have shown is that for *this* dataset 16MB is the
optimum value. It is not a recommended value for all cases.

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: work_mem and shared_buffers

From: "Scott Marlowe"
On Nov 18, 2007 8:29 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Fri, 2007-11-09 at 13:12 -0600, Scott Marlowe wrote:
>
> > Note that my best time was at around 16 Meg work_mem.  This data set
> > is MUCH bigger than 16 Meg, it's around 300-400 Meg.  But work_mem
> > optimized out at 16 Meg.  Btw, I tried it going as high as 768 Meg,
> > and it was still slower than 16M.
>
> Remember that what you have shown is that for *this* dataset 16MB is the
> optimum value. It is not a recommended value for all cases.

Actually, on this particular machine, it's held true for all the
datasets that are on it.

But I agree that it's only true for those particular datasets, and
more importantly, this machine.

But my real point was that if you haven't tested various settings on
your machine, you don't know whether your changes to work_mem are
helping or hurting.