Re: work_mem RAM Accounting in PostgreSQL - Mailing list pgsql-general

From Laurenz Albe
Subject Re: work_mem RAM Accounting in PostgreSQL
Date
Msg-id cec733623111083dbd4d35ddc3950be9c02968cf.camel@cybertec.at
In response to work_mem RAM Accounting in PostgreSQL  (Alexandru Lazarev <alexandru.lazarev@gmail.com>)
Responses Re: work_mem RAM Accounting in PostgreSQL
List pgsql-general
On Wed, 2024-11-13 at 21:09 +0200, Alexandru Lazarev wrote:
> I have some questions regarding how the "work_mem" parameter affects the overall RAM
> usage of PostgreSQL processes within a physical host or container.
>
> Each backend process during SQL execution may allocate N * "work_mem" simultaneously.
> For example, if "work_mem" is set to 32MB and N=5 (i.e., 5 simultaneous and/or sequential
> sort (ORDER BY) and hash operations), and the initial RAM usage (RSS - Resident Set Size) of the
> backend is 10MB, I would expect the backend process to use 160MB (32MB * 5) + 10MB,
> resulting in a total RAM usage of 170MB.

The memory limit for a hash operation is hash_mem_multiplier * work_mem, and
hash_mem_multiplier defaults to 2.0 since PostgreSQL 15.

> My questions are as follows:
>
> 1. What happens to the allocated "work_mem" after the execution of query nodes? Are
>    these memory allocations freed?

Yes.

> 2. If they are freed, do they remain in the RSS of the PostgreSQL backend?

They may remain, because the C library's allocator can choose not to return all
freed memory to the operating system, retaining some of it to serve future
malloc() requests more efficiently.

> 3. From various sources, I understand that these allocations are freed after
>    each node execution due to memory contexts, but they might remain in some sort
>    of backend memory pool for future reuse. Is this correct?

I am not sure exactly what you mean, but that is probably what I described
above: the memory is freed back to the allocator when a memory context is
deleted or reset, and the allocator may retain it for reuse.

> 4. If so, will this memory be accounted for as used RAM on my Linux/Container
>    system after the backend returns to an idle state (e.g., connection pooling)?

Certainly.

> Additionally: If the above is true, and my PostgreSQL host or container is limited
> to 16GB of RAM, what would happen if I have 100 pooled connections, each gradually
> allocating those 160MB? Will this memory be reclaimed (if I understood it correctly
> as a kind of inactive anon mem), or will the OOM Killer be triggered at some point
> (because it is real allocated memory)?

The backends won't keep that much memory allocated, so you need not worry.

100 connections are a lot.  With a well-configured connection pool, you could
make do with fewer connections and use your resources more efficiently.

Yours,
Laurenz Albe


