Thread: PostgreSQL memory usage

PostgreSQL memory usage

From
Alexander Pyhalov
Date:
Hello.

After watching my DBMS server for some time, I've realized that I don't understand what is going on...

The server has 48 GB RAM. shared_buffers is set to 12GB, work_mem to 32MB, and pgbouncer in transaction mode
(pool size 80) is used as a connection pool in front of a PostgreSQL 10.5 server.

I see that at some point several postgresql backends start consuming about 16 GB RAM. If we account for
shared_buffers, it means 4 GB RAM for private backend memory. How can we achieve such numbers? I don't see any
long-running (or complex) queries (however, there could be long-running transactions and queries to large
partitioned tables). But how could they consume 512 * work_mem memory?
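The arithmetic behind those numbers, as a back-of-envelope sketch (the 16 GB figure is the observed resident size of one backend; everything else is plain division):

```python
GB = 1024  # MB per GB

shared_buffers_mb = 12 * GB   # 12 GB, shared across all backends
work_mem_mb = 32              # 32 MB per sort/hash operation
backend_rss_mb = 16 * GB      # observed resident size of one backend

# Resident size minus shared buffers is an upper bound on private memory,
# since shared_buffers pages are counted in each backend's RSS.
private_mb = backend_rss_mb - shared_buffers_mb
print(private_mb // GB, "GB private")                            # 4 GB private

# Expressed as multiples of work_mem:
print(backend_rss_mb // work_mem_mb, "x work_mem (total RSS)")   # 512
print(private_mb // work_mem_mb, "x work_mem (private only)")    # 128
```

So "512 * work_mem" corresponds to the whole 16 GB resident size; the private portion alone is 128 * work_mem.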

Best regards,
Alexander Pyhalov,
programmer, Telecommunications Infrastructure Department,
Office of Information and Communication Infrastructure, Southern Federal University (SFedU)




Re: PostgreSQL memory usage

From
Luca Ferrari
Date:
On Wed, Oct 16, 2019 at 6:30 PM Alexander Pyhalov <alp@sfedu.ru> wrote:
> I see that at some point several postgresql backends start consuming about 16 GB RAM. If we account for
> shared_buffers, it means 4 GB RAM for private backend memory. How can we achieve such numbers? I don't see any
> long-running (or complex) queries (however, there could be long-running transactions and queries to large partitioned
> tables). But how could they consume 512 * work_mem memory?

I'm not sure they are consuming 512 times work_mem. There is a whole
lot of stuff a process can allocate, and it requires digging into the
process memory map (something I'm not good at!) to understand it.
For sure, a single process (backend) can consume work_mem once per
"complex node" in a query plan, that is, it can consume multiple
times the work_mem value if that memory is available.
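A sketch of that point, tying it to the numbers from the original post (the node count per plan is hypothetical; pool size and work_mem are from the thread):

```python
work_mem_mb = 32        # the server's work_mem setting
pool_size = 80          # pgbouncer pool size from the original post
nodes_per_query = 4     # HYPOTHETICAL: Sort/Hash/HashAggregate nodes in one plan

# Each memory-hungry plan node may allocate up to work_mem independently.
per_backend_mb = nodes_per_query * work_mem_mb
print(per_backend_mb, "MB per backend")                            # 128 MB

# Worst case if every pooled connection runs such a query at once:
print(pool_size * per_backend_mb // 1024, "GB across the pool")    # 10 GB
```

Even that pool-wide worst case falls far short of 4 GB of private memory in a single backend, which is why work_mem alone doesn't explain the observation.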

Luca



Re: PostgreSQL memory usage

From
Laurenz Albe
Date:
Alexander Pyhalov wrote:
> After looking at my DBMS server for some time I've understood that I don't understand what was going on...
> 
> A server has 48 GB RAM. shared_buffers is set to 12GB, work_mem - to 32MB, pgbouncer
> in transaction mode is used to connect pool (pool size 80) to PostgreSQL 10.5 server. 
> 
> I see that at some point several postgresql backends start consuming about 16  GB RAM.
> If we account for shared_buffers, it means 4 GB RAM for private backend memory.
> How can we achieve such numbers? I don't see any long-running (or complex) queries
> (however, there could be long-running transactions and queries to large partitioned tables).
> But how could they consume 512* work_mem memory?

A wild guess: you are using large data items, or you have PostGIS installed,
which is known to use lots of memory for certain operations; that memory is
not limited by "work_mem".

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




RE: PostgreSQL memory usage

From
Alexander Pyhalov
Date:
No, there are no long text or bytea fields, just int, bigint, ctime, and limited varchar types.
However, one table has 2250 partitions, and I've seen several selects to this table (based on the primary key of
individual partitions) with a huge IN () list (about 500 keys). I don't expect this to be efficient, but I'm unsure
that these queries caused such memory consumption.

Best regards,
Alexander Pyhalov,
programmer, Telecommunications Infrastructure Department,
Office of Information and Communication Infrastructure, Southern Federal University (SFedU)






Re: PostgreSQL memory usage

From
Tom Lane
Date:
Alexander Pyhalov <alp@sfedu.ru> writes:
> However, one table has 2250 partitions and I've seen several selects to this table (based on the primary key of
> individual partitions) with a huge IN () list (about 500 keys). Don't expect this to be efficient, but unsure that
> these queries caused such memory consumption.

Yeah, it could be that it's just the number of partitions involved in
the query that's causing the memory bloat.  v10 is not bright about
pruning uninteresting partitions from a query (v12 is better), so
there will be 2250 tables involved so far as the planner and executor
are concerned.  And then if you've got a large/complex WHERE clause,
that's probably copied and modified for each of those tables :-(.

We don't generally recommend having more than a hundred or so partitions,
because of issues like this.  v12 is starting to make it feasible to have
more, but only in limited use-cases.

            regards, tom lane