PostgreSQL and memory usage

From
"Dann Corbit"
Date:
I have a machine with 4 CPUs and 2 gigabytes of physical RAM.
 
I would like to get PostgreSQL to use as much memory as possible.  I can't seem to get PostgreSQL to use more than 100 megabytes or so.
 
How can I optimize the use of PostgreSQL to get the maximum throughput in a configuration like that?
 
Are there any memory usage/tuning documents I can read?
 

Re: PostgreSQL and memory usage

From
Tom Lane
Date:
"Dann Corbit" <DCorbit@connx.com> writes:
> I have a machine with 4 CPU's and 2 gigabytes of physical ram.
> I would like to get PostgreSQL to use as much memory as possible.  I
> can't seem to get PostgreSQL to use more than 100 megabytes or so.

You should not assume that more is necessarily better.

In many practical situations, it's better to leave the majority of RAM
free for kernel disk caching.

            regards, tom lane

Re: PostgreSQL and memory usage

From
"Dann Corbit"
Date:
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Monday, January 06, 2003 7:30 PM
> To: Dann Corbit
> Cc: pgsql-hackers@postgresql.org; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] PostgreSQL and memory usage
>
>
> "Dann Corbit" <DCorbit@connx.com> writes:
> > I have a machine with 4 CPU's and 2 gigabytes of physical
> ram. I would
> > like to get PostgreSQL to use as much memory as possible.  I can't
> > seem to get PostgreSQL to use more than 100 megabytes or so.
>
> You should not assume that more is necessarily better.
>
> In many practical situations, it's better to leave the
> majority of RAM free for kernel disk caching.

In any case, I would like to know what knobs and dials are available to
turn and what each of them means.
In at least one instance, the whole database should fit into memory.  I
would think that would be faster than any sort of kernel disk caching.

Re: PostgreSQL and memory usage

From
"scott.marlowe"
Date:
Hi Dann, I took hackers out of the list as this isn't really a hacking
issue, but I added in performance as this definitely applies there.

There are generally two areas of a database server you have to reconfigure
to use that extra memory.  The first is the kernel's shared memory
settings.

On a Linux box that has sysctl installed this is quite easy.  If it isn't
installed, install it, as it's much easier to manipulate your kernel's
settings using sysctl than it is by editing rc.local.

First, get root.  Then, use 'sysctl -a|grep shm' to get a list of all the
shared memory settings handled by sysctl.

On a default Red Hat install, you'll get something like this:

kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 33554432

On my bigger box, it's been setup to have this:

kernel.shmmni = 4096
kernel.shmall = 32000000
kernel.shmmax = 256000000

To make changes that stick around, edit the /etc/sysctl.conf file to have
lines that look kinda like those above.  To make the changes to the
/etc/sysctl.conf file take effect, use 'sysctl -p'.
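A quick sketch of the arithmetic connecting these settings: shmmax must be at
least as large as the shared memory segment PostgreSQL will request.  The
buffer count and the headroom percentage below are illustrative assumptions,
not tuned values.

```shell
# Sketch: minimum kernel.shmmax for a given number of 8 kB shared
# buffers, plus ~10% headroom for PostgreSQL's other shared structures.
BUFFERS=32768
SEGMENT=$(( BUFFERS * 8192 ))        # 268435456 bytes (256 MB)
SHMMAX=$(( SEGMENT + SEGMENT / 10 ))
echo "kernel.shmmax = $SHMMAX"
# Persist by putting that line in /etc/sysctl.conf, then run: sysctl -p
```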

Next, as the postgres user, edit $PGDATA/postgresql.conf and increase the
number of shared buffers.  On most PostgreSQL installations this number is
multiplied by 8k to get the amount of RAM being allocated, since
PostgreSQL allocates shared buffers in blocks the same size as the pages
it uses on disk.  To allocate 256 megs of buffers (that's what I use; it's
a nice large chunk, but doesn't starve my other processes or the system
file cache) set it to 32768.

Be careful how big you make your sort memory.  I haven't seen a great
increase in speed on anything over 8 or 16 megs, while memory usage can
skyrocket under heavy parallel load with lots of sorts, since the sort
memory setting is a PER-SORT maximum.
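To see why a per-sort maximum matters, a rough worst-case sketch; the client
and sorts-per-query counts here are illustrative assumptions:

```shell
# Worst-case sort memory under heavy parallel load: the setting is a
# per-sort cap, so total usage scales with the number of concurrent sorts.
SORT_MEM_MB=16
CLIENTS=50                           # illustrative assumption
SORTS_PER_QUERY=2                    # illustrative assumption
WORST_CASE_MB=$(( SORT_MEM_MB * CLIENTS * SORTS_PER_QUERY ))
echo "worst case: ${WORST_CASE_MB} MB"   # 1600 MB on top of shared buffers
```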

Then do the old pg_ctl reload and you should be cooking with gas.


Re: PostgreSQL and memory usage

From
Tom Lane
Date:
"scott.marlowe" <scott.marlowe@ihs.com> writes:
> Then do the old pg_ctl reload and you should be cooking with gas.

One correction: altering the number of shared buffers requires an actual
postmaster restart.

            regards, tom lane

Re: [PERFORM] PostgreSQL and memory usage

From
"Fred Moyer"
Date:
To put this usage of shared buffers in perspective, would you kindly let
us know your total amount of system RAM?  Without it we can't tell what
percentage of memory is being used as shared buffers (assuming PostgreSQL
is the primary application running here).

I have always taken the 'more is better' approach with shared buffers but
would like to know what percentages other people are using.  I have been
using 50% of system RAM (2 out of 4 gigs) for shared buffers (and
corresponding shmmax values) and it has been working great.  I haven't
tweaked the kernel yet to get more than 2 gigs of shmmax, so I can't
speak for a setup using over 50%.  I've been using between 256 and 512
megs of sort memory, which sounds like a bit much from what I'm hearing
here.
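Worked through with the numbers above (the concurrent-sort count is an
illustrative assumption), the worst case shows why 512 megs of sort memory is
risky when half the RAM is already committed to shared buffers:

```shell
# Hypothetical worst case on a 4 GB box with 2 GB of shared buffers:
# sort memory is allocated per sort, so a handful of concurrent sorts
# at 512 MB each can consume everything that's left.
RAM_MB=4096
SHARED_MB=2048
SORT_MB=512
CONCURRENT_SORTS=4                   # illustrative assumption
TOTAL_MB=$(( SHARED_MB + SORT_MB * CONCURRENT_SORTS ))
echo "committed: ${TOTAL_MB} MB of ${RAM_MB} MB"   # the whole machine
```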

Thanks

Fred

>
> Hi Dann, I took hackers out of the list as this isn't really a hacking
> issue, but I added in performance as this definitely applies there.
>
> There are generally two areas of a database server you have to
> reconfigure  to use that extra memory.  The first is the kernel's shared
> memory  settings.
>
> On a Linux box that has sysctl installed this is quite easy.  If it
> isn't  installed, install it, as it's much easier to manipulate your
> kernel's  settings using sysctl than it is by editing rc.local.
>
> First, get root.  Then, use 'sysctl -a|grep shm' to get a list of all
> the  shared memory settings handled by sysctl.
>
> On a default redhat install, we'll get something like this:
>
> kernel.shmmni = 4096
> kernel.shmall = 2097152
> kernel.shmmax = 33554432
>
> On my bigger box, it's been setup to have this:
>
> kernel.shmmni = 4096
> kernel.shmall = 32000000
> kernel.shmmax = 256000000
>
> To make changes that stick around, edit the /etc/sysctl.conf file to
> have  lines that look kinda like those above.  To make the changes to
> the  /etc/sysctl.conf file take effect, use 'sysctl -p'.
>
> Next, as the postgres user, edit $PGDATA/postgresql.conf and increase
> the  number of shared buffers.  On most postgresql installations this
> number is  multiplied by 8k to get the amount of ram being allocated,
> since
> postgresql allocates shared buffers in blocks the same size as what it
> uses  on the dataset.  To allocate 256 Megs of buffers (that's what I
> use, seems  like a nice large chunk, but doesn't starve my other
> processes or system  file cache) set it to 32768.
>
> Be careful how big you make your sort size.  I haven't seen a great
> increase in speed on anything over 8 or 16 megs, while memory usage can
> skyrocket under heavy parallel load with lots of sorts, since sort
> memory  is PER SORT maximum.
>
> Then do the old pg_ctl reload and you should be cooking with gas.
Re: [PERFORM] PostgreSQL and memory usage

From
"scott.marlowe"
Date:
Oh yeah, sorry.  My box has 1.5 gig ram, but it is an application server
that runs things other than just postgresql. It also runs:

Apache
Real Server
OpenLDAP
Squid
Samba

with all those services fired up and running, as well as PostgreSQL with
256 megs of shared buffers, I have about 900 megs of cache and 100 megs
of free RAM.  Since a lot of data is flying off the hard drives at any
given time, favoring one service (the database) over the others makes
little sense for me, and I've found little or no performance gain from
256 megs over, say, 128 or 64 megs.

We run about 50 databases averaging about 25megs each or so (backed up,
it's about 50 to 75 Megs on the machine's hard drives) so there's no
way for ALL the data to fit into memory.

On Tue, 7 Jan 2003, Fred Moyer wrote:

> To put this usage of shared buffers in perspective, would you kindly
> let us know your total amount of system RAM?  Without it we can't tell
> what percentage of memory is being used as shared buffers (assuming
> PostgreSQL is the primary application running here).
>
> I have always taken the 'more is better' approach with shared buffers but
> would like to know what in terms of percentages other people are using.  I
> have been using 50% of system ram (2 out of 4 gigs) for shared buffers
> (and corresponding shmmax values) and it has been working great.  I
> haven't tweaked the kernel yet to get more than 2 gigs shmmax so I can't
> speak for a setup using over 50%.  I've been using between 256 and 512
> megs sort memory which sounds like a little much from what I'm hearing
> here.