On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer <JPalmer@linz.govt.nz> wrote:
> I’ve been getting database out of memory failures with some queries which
> deal with a reasonable amount of data.
>
> I was wondering what I should be looking at to stop this from happening.
>
> The typical messages I’ve been getting are like this:
> http://pastebin.com/Jxfu3nYm
>
> The OS is:
>
> Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
> 2011 x86_64 GNU/Linux.
>
> It’s running on VMware and has 2 CPUs and 8GB of RAM. This VM is
> dedicated to PostgreSQL. The main PostgreSQL parameters I have tuned are:
>
> work_mem = 200MB

That's a really big work_mem. I have mainline db servers with 128GB of
RAM that have work_mem set to 16MB, and that is still considered a
little high in my book. If you drop work_mem down to 1MB, does the out
of memory error go away? work_mem is how much memory EACH sort (or
hash) operation can use on its own, and a single query can run several
of those at once. If you have a plpgsql procedure that keeps running
query after query, it can use a LOT of memory really fast.
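
A quick way to test that without touching postgresql.conf is to lower
it for just one session and re-run the failing query (an ordinary
session-level override, nothing exotic):

  -- lower work_mem for this session only
  SET work_mem = '1MB';
  -- ... re-run the query that was running out of memory ...

If the error goes away, set a saner default in postgresql.conf and
raise work_mem per session (or per role, via ALTER ROLE ... SET
work_mem) only for the jobs that genuinely need big sorts.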