Thread: Calculating memory allocation per process

Calculating memory allocation per process

From: David Kerr
Howdy,

Is there a doc somewhere that has a formula for how much memory a PG backend process will use?

I'm looking to get something like total_mem = max_connections * ( work_mem + temp_buffers )
// I know it's more complicated than that, which is why I'm asking =)
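
To make that concrete, here's the sort of first cut I have in mind (the
settings below are made up, not my real config):

    # Naive estimate with hypothetical settings -- not my real config.
    max_connections = 100
    work_mem_mb     = 4    # work_mem = 4MB
    temp_buffers_mb = 8    # temp_buffers = 8MB

    total_mem_mb = max_connections * (work_mem_mb + temp_buffers_mb)
    print(total_mem_mb)  # 1200, i.e. ~1.2 GB if every connection maxed out both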

Something similar to Table 17-2 here: http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html
would be awesome.

Dave

Re: Calculating memory allocation per process

From: Jerry Sievers
Date: Thu, 14 Apr 2011 15:00:07 -0400
David Kerr <dmk@mr-paradox.net> writes:

> Howdy,
>
> Is there a doc somewhere that has a formula for how much memory a PG
> backend process will use?
>
> I'm looking to get something like total_mem = max_connections * (
> work_mem + temp_buffers ) // I know it's more complicated than that,
> which is why I'm asking =)

Depends on your query complexity, load distribution across concurrent
sessions, and session lifetime.

In queries whose plans contain multiple sort nodes, work_mem has to be
counted multiple times on behalf of a single backend.
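
As a rough illustration (all numbers hypothetical), a single backend
running one such query can already exceed the naive per-connection figure:

    # Rough per-backend figure with hypothetical settings.
    work_mem_mb     = 4    # work_mem = 4MB
    temp_buffers_mb = 8    # temp_buffers = 8MB
    sort_nodes      = 3    # e.g. a plan with three concurrent sort nodes

    per_backend_mb = sort_nodes * work_mem_mb + temp_buffers_mb
    print(per_backend_mb)  # 20 -- above the naive work_mem + temp_buffers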

Some observation of the running system can be your best bet.
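
For example (Linux only, and only a loose upper bound, since resident set
sizes double-count shared buffers that every backend has touched):

    # Sum the resident set size of all postgres processes.
    import subprocess

    out = subprocess.check_output(
        ["ps", "-C", "postgres", "-o", "rss="], text=True)
    total_kb = sum(int(line) for line in out.split())
    print("approx. resident memory: %.1f MB" % (total_kb / 1024.0))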

HTH

> Something similar to Table 17-2 here:
> http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html
> would be awesome.
>
> Dave

--
Jerry Sievers
Postgres DBA/Development Consulting
e: gsievers19@comcast.net
p: 305.321.1144

Re: Calculating memory allocation per process

From: David Kerr
On Thu, Apr 14, 2011 at 03:00:07PM -0400, Jerry Sievers wrote:
> David Kerr <dmk@mr-paradox.net> writes:
>
> > Howdy,
> >
> > Is there a doc somewhere that has a formula for how much memory a PG
> > backend process will use?
> >
> > I'm looking to get something like total_mem = max_connections * (
> > work_mem + temp_buffers ) // I know it's more complicated than that,
> > which is why I'm asking =)
>
> Depends on your query complexity, load distribution across concurrent
> sessions, and session lifetime.
>
> In queries whose plans contain multiple sort nodes, work_mem has to be
> counted multiple times on behalf of a single backend.
>
> Some observation of the running system can be your best bet.
>
> HTH

Yeah, that's the complication I knew about (but am still not able to
fully 'get', let alone vocalize).

Are there no rules of thumb or upper bounds to help estimate total memory usage?
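
For instance, if I could cap the number of sort nodes per query, would
something like this (numbers made up) be a sane ceiling?

    # Hypothetical ceiling, assuming no plan runs more than `max_nodes`
    # sort nodes at once -- all settings below are made up.
    shared_buffers_mb = 2048
    max_connections   = 100
    max_nodes         = 5
    work_mem_mb       = 4
    temp_buffers_mb   = 8

    ceiling_mb = shared_buffers_mb + max_connections * (
        max_nodes * work_mem_mb + temp_buffers_mb)
    print(ceiling_mb)  # 4848, i.e. ~4.7 GB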

Thanks

Dave