Re: Running out of memory on vacuum - Mailing list pgsql-general

From Ioana Danes
Subject Re: Running out of memory on vacuum
Date
Msg-id 1368550858.97672.YahooMailNeo@web164603.mail.gq1.yahoo.com
In response to Re: Running out of memory on vacuum  (Scott Marlowe <scott.marlowe@gmail.com>)
Responses Re: Running out of memory on vacuum
List pgsql-general

Hi Scott,


I have been running with the same configuration for the last two years, and this only started happening 2-3 months ago.

I agree that in some conditions that configuration might not be enough for over 100 connections, but in that case I
would expect the system to complain during those queries, and it does not.

During the vacuum analyze there is nothing else going on on that server. Top shows only one process running, and that
is postgres with the vacuum analyze (I also checked pg_stat_activity). The few connections from the application are
<IDLE> and the processes are sleeping, so they should not be using much memory.
I think it is a memory leak when "too many" connections are open; otherwise I can't explain it.
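
(For reference, something like the sketch below against pg_stat_activity shows the breakdown of connections; the state column only exists on 9.2+, older releases expose the same information in current_query, where idle sessions show up as the literal '<IDLE>'.)

-- Rough sketch: count backends by state (9.2+ column names).
-- On older releases, group by current_query instead; idle sessions
-- show up there as the literal string '<IDLE>'.
SELECT state, count(*)
  FROM pg_stat_activity
 GROUP BY state
 ORDER BY count(*) DESC;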

I will try to lower the parameters as you suggested, but I have a feeling that this is only going to delay the
problem...

Thanks a lot for your response,
ioana



----- Original Message -----
From: Scott Marlowe <scott.marlowe@gmail.com>
To: Ioana Danes <ioanasoftware@yahoo.ca>
Cc: Igor Neyman <ineyman@perceptron.com>; PostgreSQL General <pgsql-general@postgresql.org>
Sent: Tuesday, May 14, 2013 12:14:18 PM
Subject: Re: [GENERAL] Running out of memory on vacuum

On Tue, May 14, 2013 at 8:30 AM, Ioana Danes <ioanasoftware@yahoo.ca> wrote:
> Hi Igor,
>
> 1. I could remove the nightly vacuum but I think that is
> not the cause. The vacuum is only catching the problem. If I ignore the
> vacuum message for a few days the system is gonna run out of memory on
> queries...

You should be able to run vacuum any time really, so yeah, no great
need to kill it off right away.

> 2. There is no autovacuum running at the same time. I
> tried to run vacuum verbose manually and checked what else was going on
> on the server.
> I also reduced the maintenance work mem to 1 GB but I get the same error.

Set it to something MUCH lower, like 256MB or so.
Also set your shared_buffers lower if you can. 2G on an 8G machine (I
think that's what you have) is pretty high, and if you're running out
of memory, it's definitely not helping. By the time you run out of
memory the OS is likely swapping out your shared_buffers (it doesn't
know any better; it's just one more thing to swap out, and if some
part isn't getting accessed a lot it gets swapped out to make room).
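
A rough sketch of how that could look, assuming the 2GB / 1GB values mentioned in this thread (check the current values from psql first, then edit postgresql.conf):

-- Sketch: check the current memory-related settings.
SELECT name, setting, unit
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'maintenance_work_mem',
                'work_mem', 'max_connections');
-- Then in postgresql.conf (values are the suggestions from this thread,
-- not hard rules):
--   shared_buffers       = 512MB   -- down from 2GB; needs a restart
--   maintenance_work_mem = 256MB   -- down from 1GB; can also be SET per session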

How big is your swap? How much is getting used? How much memory can
you put in this machine? My laptop has 12G, my netbook has 8G, my
production servers have 512GB to 1TB of memory. 8GB was a typical
memory size for a medium sized DB server about 8 or so years ago.
Memory is cheap; downtime and troubleshooting are not.

> 3. I do use connection pooling. I have 1500 terminals selling and at busy
> times I might need more than 100 active connections, but just
> occasionally...

You can pool and they should just queue, depending on what method of
pooling you're using. That said, I've seen perfectly reasonable
behaviour on larger machines with ~500 or more connections (when
you've got HUNDREDS of servers needing a dozen persistent connections
each, you just have to deal sometimes I guess).
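
(A hedged way to see how many of those connections are actually doing work at a given moment, which helps size the pool; again, on pre-9.2 servers use current_query and '<IDLE>' instead of state:)

-- Sketch: busy vs. total backends right now (9.2+ column names).
SELECT sum(CASE WHEN state <> 'idle' THEN 1 ELSE 0 END) AS active,
       count(*) AS total
  FROM pg_stat_activity;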

So, tl;dr: Get more memory, or lower your shared_buffers /
maintenance_work_mem to something like 512MB each and see if that
helps. Also see what ELSE is using all your memory when this happens.
A stack trace is OK, but something like top with M for sorting is
probably more useful.




