Thread: need help on memory allocation
Hi Team,
We are seeing idle sessions consuming memory in our database. Could you please help me understand how much memory an idle session can use at most, and how we can find out how much work_mem a single process is consuming?

We are getting an out-of-memory error; that is why I am asking the questions above.
Regards,
Rambabu Vakada.
Rambabu V wrote:
> We are seeing idle sessions consuming memory in our database. Could you
> please help me understand how much memory an idle session can use at most,
> and how we can find out how much work_mem a single process is consuming?
>
> We are getting an out-of-memory error; that is why I am asking the
> questions above.

Are you sure that you see the private memory of the process and not the shared memory common to all processes? An "idle" connection should not have a lot of private memory.

If you get OOM on the server, the log entry with the memory context dump might be useful information.

Yours,
Laurenz Albe
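A rough way to check this on Linux is to sum the Private_* lines in /proc/<pid>/smaps for one backend; a sketch (the pid here is only an example, and the large SHR value that top/htop reports for every backend is mostly the shared_buffers segment):

    grep '^Private' /proc/109063/smaps | awk '{sum += $2} END {print sum " kB private"}'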
Hi Laurenz,
Any update? This is continuously hitting our production database.
Regards,
Rambabu Vakada,
PostgreSQL DBA.
On Tue, Jan 23, 2018 at 6:12 PM, Rambabu V <ram.wissen@gmail.com> wrote:
Hi Laurenz,

The OOM error is not recorded at the server level, it is only recorded in our database logs. Below is the error message:

cat PostgreSQL-2018-01-23_060000.csv|grep FATAL
2018-01-23 06:08:01.684 UTC,"postgres","rpx",68034,"[local]",5a66d141.109c2,2,"authentication",2018-01-23 06:08:01 UTC,174/89066,0,FATAL,28000,"Peer authentication failed for user ""postgres""","Connection matched pg_hba.conf line 5: ""local all all peer map=supers""",,,,,,,,""
2018-01-23 06:25:52.286 UTC,"postgres","rpx",22342,"[local]",5a66d570.5746,2,"authentication",2018-01-23 06:25:52 UTC,173/107122,0,FATAL,28000,"Peer authentication failed for user ""postgres""","Connection matched pg_hba.conf line 5: ""local all all peer map=supers""",,,,,,,,""
2018-01-23 06:37:10.916 UTC,"portal_etl_app","rpx",31226,"10.50.13.151:41052",5a66d816.79fa,1,"authentication",2018-01-23 06:37:10 UTC,,0,FATAL,53200,"out of memory","Failed on request of size 78336.",,,,,,,,""

Below is the log from /var/log/messages:

root@prp:~# cat /var/log/syslog*|grep 'out of memory'
root@prp:~# cat /var/log/syslog*|grep error
root@prp:~# cat /var/log/syslog*|grep warning
root@prp:~#

$ free -mh
             total       used       free     shared    buffers     cached
Mem:           58G        58G       358M        16G       3.6M        41G
-/+ buffers/cache:        16G        42G
Swap:         9.5G       687M       8.9G

postgresql.conf parameters:
=====================
work_mem = 256MB                # min 64kB
maintenance_work_mem = 256MB    # min 1MB
shared_buffers = 16GB           # min 128kB
temp_buffers = 16MB             # min 800kB
wal_buffers = 64MB
effective_cache_size = 64GB
max_connections = 600

cat /etc/sysctl.conf|grep kernel
#kernel.domainname = example.com
#kernel.printk = 3 4 1 3
kernel.shmmax = 38654705664
kernel.shmall = 8388608

ps -ef|grep postgres|grep idle|wc -l
171
ps -ef|grep postgres|wc -l
206
ps -ef|wc -l
589

Database size: 1.5 TB

Below is the htop output:
-----------------------------------
Mem[|||||||||||||||||||| 17045/60382MB]   Tasks: 250, 7 thr; 8 running
Swp[|||||| 686/9765MB]                    Load average: 8.63 9.34 8.62
                                          Uptime: 52 days, 07:07:07

   PID USER     PRI NI  VIRT   RES   SHR S CPU% MEM%  TIME+    Command
109063 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 39:55.61  postgres: test sss 10.20.2.228(55174) idle
 24910 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 27:45.35  postgres: testl sss 10.20.2.228(55236) idle
115539 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 28:22.89  postgres: test sss 10.20.2.228(55184) idle
  9816 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 40:19.57  postgres: test sss 10.20.2.228(55216) idle

Please help us with this: how can we overcome this OOM issue?

Regards,
Rambabu Vakada,
PostgreSQL DBA,
+91 9849137684.

On Fri, Jan 19, 2018 at 3:37 PM, Laurenz Albe <laurenz.albe@cybertec.at> wrote:
Rambabu V wrote:
> We are seeing idle sessions consuming memory in our database. Could you
> please help me understand how much memory an idle session can use at most,
> and how we can find out how much work_mem a single process is consuming?
>
> We are getting an out-of-memory error; that is why I am asking the
> questions above.
Are you sure that you see the private memory of the process and not the
shared memory common to all processes?
An "idle" connection should not hav a lot of private memory.
If you get OOM on the server, the log entry with the memory context dump
might be useful information.
Yours,
Laurenz Albe
On 23.01.2018 at 14:59, Rambabu V wrote:
>              total       used       free     shared    buffers     cached
> Mem:           58G        58G       358M        16G       3.6M        41G
> -/+ buffers/cache:        16G        42G
> Swap:         9.5G       687M       8.9G
>
> postgresql.conf parameters:
> =====================
> work_mem = 256MB                # min 64kB
> maintenance_work_mem = 256MB    # min 1MB
> shared_buffers = 16GB           # min 128kB
> temp_buffers = 16MB             # min 800kB
> wal_buffers = 64MB
> effective_cache_size = 64GB
> max_connections = 600

How many active concurrent connections do you have? With work_mem = 256MB, 600 active connections, and only one allocation of work_mem per connection, you will need more than 150GB of RAM. In other words: you should lower work_mem and/or max_connections.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company. www.2ndQuadrant.com
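That arithmetic is even optimistic, since work_mem is a per-sort/per-hash limit and a single complex query can allocate it several times over. To see how many of the 600 connections are actually active at any moment, a quick sketch against pg_stat_activity (PostgreSQL 9.2 or later):

    select state, count(*) as connections
    from pg_stat_activity
    group by state
    order by connections desc;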
On Tue, 2018-01-23 at 19:29 +0530, Rambabu V wrote:
> Any update? This is continuously hitting our production database.
>
> The OOM error is not recorded at the server level, it is only recorded in our database logs.
>
> Below is the error message:
>
> cat PostgreSQL-2018-01-23_060000.csv|grep FATAL
> 2018-01-23 06:08:01.684 UTC,"postgres","rpx",68034,"[local]",5a66d141.109c2,2,"authentication",2018-01-23 06:08:01 UTC,174/89066,0,FATAL,28000,"Peer authentication failed for user ""postgres""","Connection matched pg_hba.conf line 5: ""local all all peer map=supers""",,,,,,,,""
> 2018-01-23 06:25:52.286 UTC,"postgres","rpx",22342,"[local]",5a66d570.5746,2,"authentication",2018-01-23 06:25:52 UTC,173/107122,0,FATAL,28000,"Peer authentication failed for user ""postgres""","Connection matched pg_hba.conf line 5: ""local all all peer map=supers""",,,,,,,,""
> 2018-01-23 06:37:10.916 UTC,"portal_etl_app","rpx",31226,"10.50.13.151:41052",5a66d816.79fa,1,"authentication",2018-01-23 06:37:10 UTC,,0,FATAL,53200,"out of memory","Failed on request of size 78336.",,,,,,,,""
>
> $ free -mh
>              total       used       free     shared    buffers     cached
> Mem:           58G        58G       358M        16G       3.6M        41G
> -/+ buffers/cache:        16G        42G
> Swap:         9.5G       687M       8.9G
>
> postgresql.conf parameters:
> =====================
> work_mem = 256MB                # min 64kB
> maintenance_work_mem = 256MB    # min 1MB
> shared_buffers = 16GB           # min 128kB
> temp_buffers = 16MB             # min 800kB
> wal_buffers = 64MB
> effective_cache_size = 64GB
> max_connections = 600

It would be interesting to know the output from

sysctl vm.overcommit_memory
sysctl vm.overcommit_ratio

Also interesting: sar -r 1 1

I think that max_connections = 600 is way too high.

Are you running large, complicated queries on that machine? That could be a problem with such a high connection limit.

Is the machine dedicated to PostgreSQL?

Yours,
Laurenz Albe
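On a machine dedicated to PostgreSQL, the usual advice (see the "Linux Memory Overcommit" section of the PostgreSQL documentation) is to disable overcommit, so that a backend gets a clean out-of-memory error it can handle instead of being killed; a sketch, where the ratio is only an example that depends on RAM and swap sizing:

    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=90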
On Tue, Jan 23, 2018 at 5:59 AM, Rambabu V <ram.wissen@gmail.com> wrote:
> > cat PostgreSQL-2018-01-23_060000.csv|grep FATAL
$ free -mh
             total       used       free     shared    buffers     cached
Mem:           58G        58G       358M        16G       3.6M        41G
-/+ buffers/cache:        16G        42G
Swap:         9.5G       687M       8.9G
This does not seem like it should be a problem. Is this data collected near the time of the failure?
work_mem = 256MB                # min 64kB
max_connections = 600
These look pretty high, especially in combination. Why do you need that number of connections? Could you use a connection pooler instead? Or do you just have an application bug (leaked connection handles) that needs to be fixed? Why do you need that amount of work_mem?
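If a pooler is an option, a minimal PgBouncer setup in transaction pooling mode might look roughly like this (the database name is taken from the logs above, but the port, auth file, and pool size are made-up examples):

    [databases]
    rpx = host=127.0.0.1 port=5432 dbname=rpx

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 600
    default_pool_size = 20

The application keeps its 600 client connections, while the server only ever sees a small pool of backends, which also caps the worst-case work_mem footprint.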
ps -ef|grep postgres|grep idle|wc -l
171
ps -ef|grep postgres|wc -l
206
How close to the time of the problem was this recorded? How many of the idle are 'idle in transaction'?
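To split the two, something along these lines works with the same ps-based counting (only a sketch; the 'idle in transaction' sessions are the worrying kind):

    ps -ef|grep postgres|grep 'idle in transaction'|wc -l
    ps -ef|grep postgres|grep 'idle$'|wc -l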
   PID USER     PRI NI  VIRT   RES   SHR S CPU% MEM%  TIME+    Command
109063 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 39:55.61  postgres: test sss 10.20.2.228(55174) idle
 24910 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 27:45.35  postgres: testl sss 10.20.2.228(55236) idle
115539 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 28:22.89  postgres: test sss 10.20.2.228(55184) idle
  9816 postgres  20  0 16.7G 16.4G 16.3G S  0.0 27.8 40:19.57  postgres: test sss 10.20.2.228(55216) idle
How close to the time of the problem was this recorded? Nothing here seems to be a problem, because almost all the memory they have resident is shared memory.
It looks like all your clients decide to run a memory hungry query simultaneously, consume a lot of work_mem, and cause a problem. Then by the time you notice the problem and start collecting information, they are done and things are back to normal.
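One way to catch that in the act is to sample pg_stat_activity every few seconds and keep the history; a crude sketch (the interval, log file name, and connection options are examples to adapt):

    while true; do
        psql -XAt -c "select now(), state, count(*) from pg_stat_activity group by 1, 2" >> pg_activity_samples.log
        sleep 5
    done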
Cheers,
Jeff
Hi Rambabu,

If you are finding some <IDLE> sessions then of course your database is perfectly alright, as <IDLE> sessions won't consume any memory. Kindly specify the issue briefly.

Regards,
Pavan
On Tue, Jan 23, 2018 at 10:54:01PM -0700, pavan95 wrote:
> If you are finding some <IDLE> sessions then of course your database is
> perfectly alright, as <IDLE> sessions won't consume any memory.

Those have a cost as well when building transaction snapshots. Too many of them is no good either, let's not forget that.

--
Michael
Then we should check whether there are any idle sessions with uncommitted transactions. Those might be the culprits.

Regards,
Pavan
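Something like the following lists them, oldest open transaction first (a sketch against pg_stat_activity, PostgreSQL 9.2 or later):

    select pid, usename, client_addr, now() - xact_start as xact_age
    from pg_stat_activity
    where state = 'idle in transaction'
    order by xact_age desc;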
Hi,

The following talk describes an issue with how Linux may handle memory allocation for Postgres. The issue may cause many hundreds of megabytes to go unreleased in some cases.

PostgreSQL and RAM usage [Feb 27, 2017]
https://www.youtube.com/watch?v=EgQCxERi35A
(see between minutes 33 and 39 of the talk)

Regards,
Vitaliy

On 18/01/2018 17:25, Rambabu V wrote:
> Hi Team,
>
> We are seeing idle sessions consuming memory in our database. Could you
> please help me understand how much memory an idle session can use at most,
> and how we can find out how much work_mem a single process is consuming?
>
> We are getting an out-of-memory error; that is why I am asking the
> questions above.
>
> Regards,
> Rambabu Vakada.