Thread: Controlling memory of session

From: "James Im"

Hi,

I'm using PostgreSQL 8.1 on Windows 2000 and I have a hard time
understanding how to limit the memory of each session to 1MB.

What I have right now is that each connection (opened with JDBC) takes
about 3MB (some take a little more, some a little less). I think that
this is a waste of memory (am I wrong?).


I've read the docs
(http://www.postgresql.org/docs/8.1/static/runtime-config-resource.html)
for work_mem and temp_buffers, and they have these values:

work_mem = 1024
temp_buffers = 100
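
(If I read the docs right, work_mem is in kilobytes, so 1024 kB = 1MB
per sort operation, and temp_buffers counts 8kB buffers, so 100 x 8kB
= 800kB per session.)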

What am I missing to limit the memory taken by a session to 1MB?

In addition, I'd like to understand temp_buffers better. I never create
temporary tables, but I wonder if PostgreSQL creates them behind the
scenes when I do some big selects.



Re: Controlling memory of session

From: Richard Huxton

James Im wrote:
> Hi,
>
> I'm using PostgreSQL 8.1 on Windows 2000 and I have a hard time
> understanding how to limit the memory of each session to 1MB.

Well, you don't explicitly, but see below.

> What I have right now is that each connection (opened with JDBC) takes
> about 3MB (some take a little more, some a little less). I think that
> this is a waste of memory (am I wrong?).

Depends what you mean by "waste". If you never use those connections,
then yes. Otherwise, each connection does need a little space to work
in, you know.

> I've read the docs
> (http://www.postgresql.org/docs/8.1/static/runtime-config-resource.html)
> for work_mem and temp_buffers, and they have these values:
>
> work_mem = 1024
> temp_buffers = 100
>
> What am I missing to limit the memory taken by a session to 1MB?

You can't. In particular, work_mem is memory *per sort*, so total usage
can be several times that. If you're trying to get PG to run in 64MB or
something like that, I think you're going to be disappointed. Large
result sets, for instance, will tend to need more than 1MB at either
the server or client end of the connection.
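
As a rough illustration (numbers only as an example): with work_mem =
1024 (1MB), a single query that performs, say, two sorts and a hash
join can use about 3MB of workspace by itself, before any per-backend
overhead is counted.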

Is there a particular problem you're trying to solve?

> In addition, I'd like to understand temp_buffers better. I never create
> temporary tables, but I wonder if PostgreSQL creates them behind the
> scenes when I do some big selects.

Various operations will spill to disk in a similar manner to a
temporary table, but they're not tables per se.

--
   Richard Huxton
   Archonet Ltd

Re: Controlling memory of session

From: Tom Lane

Richard Huxton <dev@archonet.com> writes:
> James Im wrote:
>> What am I missing to limit the memory taken by a session to 1MB?

> You can't. In particular, work_mem is memory *per sort*, so total usage
> can be several times that. If you're trying to get PG to run in 64MB or
> something like that, I think you're going to be disappointed.

Yeah.  I think the working RAM per backend is approaching a megabyte
these days just for behind-the-scenes overhead (catalog caches and
so forth), before you expend even one byte on per-query structures
that work_mem would affect.

Something else to consider: I dunno what tool you were using on Windows
to look at memory usage or how it counts shared memory, but on Unix a
lot of process-monitoring tools tend to count shared memory against each
process touching that shared memory.  Which leads to artificially
bloated numbers.  The default PG shared memory block size these days is
order-of-10-megabytes I think; if a backend has touched any significant
fraction of that since it started, that could dwarf the backend's true
private workspace size.

If you're concerned about total memory footprint for a pile of backends,
usually the right answer is to put some connection-pooling software in
front of them, not try to hobble each backend to work in a tiny amount
of space.
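
To illustrate the idea only (this is a sketch, not production code; in
practice you'd reach for an existing pooler such as pgpool, and the
URL and credentials below are placeholders): a pool opens a fixed
number of backends up front and shares them among many callers, so
backend memory is bounded by the pool size, not the client count.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A fixed set of backends shared by many callers: total backend
    // memory is bounded by the pool size, not the number of clients.
    public class TinyPool {
        private final BlockingQueue<Connection> idle;

        public TinyPool(String url, String user, String pass, int size)
                throws Exception {
            idle = new ArrayBlockingQueue<Connection>(size);
            for (int i = 0; i < size; i++)
                idle.add(DriverManager.getConnection(url, user, pass));
        }

        // Blocks until a connection is free.
        public Connection borrow() throws InterruptedException {
            return idle.take();
        }

        // Callers must hand the connection back when done.
        public void giveBack(Connection c) {
            idle.offer(c);
        }
    }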

            regards, tom lane

Re: Controlling memory of session

From: "James Im"

Richard Huxton wrote:
 > Is there a particular problem you're trying to solve?

Yes, I'm trying to find out how many connections I can open to the
database without running out of memory. Ideally I would like to
optimize things so that I can open the maximum number of
connections/sessions.

In total I can give 250MB of RAM to PostgreSQL. It should not eat more
memory than that. This is what I'm trying to do.



Re: Controlling memory of session

From: Richard Huxton

James Im wrote:
> Richard Huxton wrote:
>  > Is there a particular problem you're trying to solve?
>
> Yes, I'm trying to find out how many connections I can open to the
> database without running out of memory. Ideally I would like to
> optimize things so that I can open the maximum number of
> connections/sessions.

What - just as many idle connections as you can? You're not interested
in running queries with them?

> In total I can give 250MB of RAM to PostgreSQL. It should not eat more
> memory than that. This is what I'm trying to do.

It hasn't got that sort of hard-limit facility. The best you can do is
to limit the maximum number of connections and then restrict the
various per-backend and shared memory settings. That should let you
keep it in the range you want.
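
For example (illustrative values only, not tuned for your workload),
something along these lines in postgresql.conf on 8.1 keeps the rough
worst case near your target:

    max_connections = 50
    shared_buffers = 8192     # 8192 x 8kB buffers = 64MB shared
    work_mem = 1024           # 1MB per sort
    temp_buffers = 100        # ~800kB per session, if temp tables are used

That's about 64MB shared plus 50 backends at a couple of MB each -
under 250MB unless queries run several sorts at once.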

--
   Richard Huxton
   Archonet Ltd

Re: Controlling memory of session

From: Jeff Davis

On Wed, 2007-01-17 at 11:02 +0000, James Im wrote:
> Hi,
>
> I'm using PostgreSQL 8.1 on Windows 2000 and I have a hard time
> understanding how to limit the memory of each session to 1MB.
>
> What I have right now is that each connection (opened with JDBC) takes
> about 3MB (some take a little more, some a little less). I think that
> this is a waste of memory (am I wrong?).

How are you measuring the memory usage? What is your shared_buffers
setting?

The shared_buffers are only allocated once for all processes, but
because there's no way to tell which process is using the memory
(because it's shared), "top" reports each process as using the entire
shared buffers, plus all its private memory.

So, when a new connection is established, it might not actually be using
3MB more than before.
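
You can check the setting from any session:

    SHOW shared_buffers;

On 8.1 the value is a count of 8kB buffers, so e.g. 1000 means about
8MB that tools like "top" can charge against every backend.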

Regards,
    Jeff Davis


Re: Controlling memory of session

From: "James Im"

OK, so the solution is to limit the number of connections. But it seems
there is no good way to choose the ideal number of connections, as I
don't know how much RAM a connection will use.

If a connection takes 3MB just after creation (on Windows I see the
process in the Process Monitor; on Linux the RSS is more like 5MB), can
I limit its growth so that I know it will not reach 6, 10 or 20MB under
some circumstances?

Not knowing how much RAM the server can take is annoying. You have to
be extra careful and scale the server down, as you don't know what will
happen.

Richard Huxton wrote:
>James Im wrote:
>>Richard Huxton wrote:
>>  > Is there a particular problem you're trying to solve?
>>
>>Yes, I'm trying to find out how many connections I can open to the
>>database without running out of memory. Ideally I would like to
>>optimize things so that I can open the maximum number of
>>connections/sessions.
>
>What - just as many idle connections as you can? You're not interested
>in running queries with them?
>
>>In total I can give 250MB of RAM to PostgreSQL. It should not eat more
>>memory than that. This is what I'm trying to do.
>
>It hasn't got that sort of hard-limit facility. The best you can do is
>to limit the maximum number of connections and then restrict the
>various per-backend and shared memory settings. That should let you
>keep it in the range you want.
>



Re: Controlling memory of session

From: Magnus Hagander

On Thu, Jan 18, 2007 at 07:41:27AM +0000, James Im wrote:
> OK, so the solution is to limit the number of connections. But it seems
> there is no good way to choose the ideal number of connections, as I
> don't know how much RAM a connection will use.
>
> If a connection takes 3MB just after creation (on Windows I see the
> process in the Process Monitor; on Linux the RSS is more like 5MB), can
> I limit its growth so that I know it will not reach 6, 10 or 20MB under
> some circumstances?

What figure exactly are you looking at? If you're just looking at the
values in Task Manager, that includes the shared pages from the
executable as well as the shared buffer cache. You'll want to check the
counter for "Private Bytes" and nothing else.
(As an example, my *notepad* shows a 60MB virtual size and an 8MB
working set, of which 6.2MB is shared, but only 1.5MB private bytes.
So the various counters give completely different figures.)

Either way, the overhead is likely small enough not to really matter
once you factor in shared buffers and work_mem. Your memory usage will
be something along the lines of:
shared_buffers + num_connections * (overhead+work_mem*number_of_sorts)
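
For example (numbers picked purely for illustration): 64MB of shared
buffers, 50 connections, roughly 1MB of per-backend overhead, and
work_mem = 1MB with up to two sorts per query gives

    64 + 50 * (1 + 1*2) = 214MB

as a rough worst case.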


> Not knowing how much RAM the server can take is annoying. You have to
> be extra careful and scale the server down, as you don't know what will
> happen.

The fact that work_mem is "per sort" and not "per backend" is going to
cause you more problems than the overhead when trying to figure out how
far you can go. The fact that there is no "work_mem per backend" is
certainly very annoying, but it's obviously hard to fix or it would've
been fixed already :-)

//Magnus

Re: Controlling memory of session

From: Martijn van Oosterhout

On Thu, Jan 18, 2007 at 11:52:12AM +0100, Magnus Hagander wrote:
> > Not knowing how much RAM the server can take is annoying. You have to
> > be extra careful and scale the server down, as you don't know what will
> > happen.
>
> The fact that work_mem is "per sort" and not "per backend" is going to
> cause you more problems than the overhead when trying to figure out how
> far you can go. The fact that there is no "work_mem per backend" is
> certainly very annoying, but it's obviously hard to fix or it would've
> been fixed already :-)

If you knew beforehand how much memory you were going to need, you
wouldn't need to run any queries because you'd know the answers already
:)

What exactly would you expect to happen? At some point you run a query
and it says "out of memory", even though your machine has masses of
memory left over? A bit like VM overcommit?

If you know your queries don't generally use more than one sort, it's
fairly easy to estimate memory usage. The best way to find out how many
backends you can handle is simulation. Anything else is guesswork.
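
Something as simple as the sketch below will tell you more than any
formula: open a batch of JDBC connections, run a query representative
of your workload on each, and watch the server's memory while the
sessions stay open. (The DSN, credentials and query are placeholders -
substitute your own.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    // Open N sessions, run one representative query on each, then hold
    // the sessions open so server memory can be observed under load.
    public class LoadSim {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");  // pre-JDBC4 registration
            String url = "jdbc:postgresql://localhost/test";  // placeholder
            List<Connection> conns = new ArrayList<Connection>();
            for (int i = 0; i < 50; i++) {
                Connection c = DriverManager.getConnection(url, "user", "pass");
                Statement st = c.createStatement();
                // substitute a query representative of your real workload
                ResultSet rs = st.executeQuery("SELECT count(*) FROM pg_class");
                rs.next();
                rs.close();
                st.close();
                conns.add(c);
            }
            System.out.println(conns.size() + " sessions open; check memory now.");
            Thread.sleep(60 * 1000L);  // hold the sessions open for a minute
            for (Connection c : conns) c.close();
        }
    }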

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
