Re: DB running out of memory issues after upgrade - Mailing list pgsql-general
| From | Nagaraj Raj |
|---|---|
| Subject | Re: DB running out of memory issues after upgrade |
| Date | |
| Msg-id | 1368247121.4687599.1582049408995@mail.yahoo.com |
| In response to | Re: DB running out of memory issues after upgrade (Tomas Vondra <tomas.vondra@2ndquadrant.com>) |
| Responses | Re: DB running out of memory issues after upgrade |
| List | pgsql-general |
Below are the configuration settings in the .conf file; they are the same before and after the upgrade:
show max_connections = "1743"
show shared_buffers = "4057840kB"
show effective_cache_size = "8115688kB"
show maintenance_work_mem = "259MB"
show checkpoint_completion_target = "0.9"
show wal_buffers = "16MB"
show default_statistics_target = "100"
show random_page_cost = "1.1"
show effective_io_concurrency = "200"
show work_mem = "4MB"
show min_wal_size = "256MB"
show max_wal_size = "2GB"
show max_worker_processes = "8"
show max_parallel_workers_per_gather = "2"
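For completeness, the non-default settings could also be pulled in a single query rather than with individual SHOW commands; a minimal sketch, assuming a psql session on the affected instance:

-- list every setting whose value does not come from the built-in default
SELECT name, setting, unit, source
  FROM pg_settings
 WHERE source NOT IN ('default', 'override')
 ORDER BY name;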
Here are some system logs:
2020-02-16 21:01:17 UTC [-]The database process was killed by the OS due to excessive memory consumption.
2020-02-16 13:41:16 UTC [-]The database process was killed by the OS due to excessive memory consumption.
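As a rough illustration only (not a diagnosis), the settings above can already account for around two thirds of the 16 GB of RAM if a large share of the 1743 allowed connections become active, even assuming just one work_mem allocation per backend (a single query can in fact use several):

-- back-of-the-envelope total: shared_buffers plus one work_mem per allowed connection
SELECT pg_size_pretty(
         4057840::bigint * 1024              -- shared_buffers = 4057840kB
       + 1743::bigint * 4 * 1024 * 1024      -- max_connections * work_mem (4MB each)
       ) AS rough_total;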
I identified one simple SELECT that is consuming a lot of memory; here is the query plan:
"Result (cost=0.00..94891854.11 rows=3160784900 width=288)"
" -> Append (cost=0.00..47480080.61 rows=3160784900 width=288)"
" -> Seq Scan on msghist (cost=0.00..15682777.12 rows=3129490000 width=288)"
" Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"
" -> Seq Scan on msghist msghist_1 (cost=0.00..189454.50 rows=31294900 width=288)"
" Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)"
Thanks,
On Tuesday, February 18, 2020, 09:59:37 AM PST, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:
>After upgrading Postgres from v9.6.9 to v9.6.11, the DB has been running into out-of-memory issues; the workload has not changed before and after the upgrade.
>
>spec: RAM 16GB, 4 vCore
>Any bug reported like this, or suggestions on how to fix this issue? I appreciate the response!
>
This bug report (in fact, we don't know if it's a bug, but OK) is
woefully incomplete :-(
The server log is mostly useless, unfortunately - it just says a bunch
of processes were killed (by OOM killer, most likely) so the server has
to restart. It tells us nothing about why the backends consumed so much
memory etc.
What would help us is knowing how much memory the backend (killed by
OOM) was consuming, which should be in dmesg.
And then MemoryContextStats output - you need to connect to a backend
consuming a lot of memory using gdb (before it gets killed) and do
(gdb) p MemoryContextStats(TopMemoryContext)
(gdb) q
and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.
It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services