Re: problem with lost connection while running long PL/R query - Mailing list pgsql-general

From: Tom Lane
Subject: Re: problem with lost connection while running long PL/R query
Date:
Msg-id: 15339.1368718822@sss.pgh.pa.us
In response to: Re: problem with lost connection while running long PL/R query ("David M. Kaplan" <david.kaplan@ird.fr>)
Responses: Re: problem with lost connection while running long PL/R query
List: pgsql-general
"David M. Kaplan" <david.kaplan@ird.fr> writes:
> Thanks for the help.  You have definitely identified the problem, but I
> am still looking for a solution that works for me.  I tried setting
> vm.overcommit_memory=2, but this just made the query crash quicker than
> before, though without killing the entire connection to the database.  I
> imagine that this means that I really am trying to use more memory than
> the system can handle?

> I am wondering if there is a way to tell postgresql to flush a set of
> table lines out to disk so that the memory they are using can be
> liberated.

Assuming you don't have work_mem set to something unreasonably large,
it seems likely that the excessive memory consumption is inside your
PL/R function, and not the fault of Postgres per se.  You might try
asking in some R-related forums about how to reduce the code's memory
usage.
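
For what it's worth, one common way to shrink a PL/R function's footprint (and, indirectly, to address the question upthread about "flushing" rows) is to stream the data through an SPI cursor and aggregate it chunk by chunk, instead of materializing the whole result set as a single R data frame. A rough sketch, assuming I have PL/R's SPI cursor calls right; the table big_table, its value column, the function name, and the 10000-row chunk size are all placeholders:

    -- Hypothetical example: aggregate a large table in fixed-size chunks so
    -- only one chunk's worth of rows is held in R memory at a time.
    CREATE OR REPLACE FUNCTION chunked_mean() RETURNS float8 AS $$
        plan  <- pg.spi.prepare("SELECT value FROM big_table")
        curs  <- pg.spi.cursor_open("big_curs", plan)
        total <- 0
        n     <- 0
        repeat {
            chunk <- pg.spi.cursor_fetch(curs, TRUE, as.integer(10000))
            if (is.null(chunk) || nrow(chunk) == 0) break
            total <- total + sum(chunk$value)  # accumulate this chunk's contribution
            n     <- n + nrow(chunk)
            rm(chunk); gc()                    # release the chunk before fetching the next
        }
        pg.spi.cursor_close(curs)
        total / n                              # running aggregate, e.g. a mean
    $$ LANGUAGE plr;

Whether that helps depends on whether the computation can be expressed as a chunk-wise aggregation at all; if the algorithm genuinely needs the full data set in memory at once, the R forums are still the right place to ask.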

Also, if by "crash" this time you meant you got an "out of memory" error
from Postgres, there should be a memory map in the postmaster log
showing all the memory consumption Postgres itself is aware of.  If that
doesn't add up to a lot, it would be pretty solid proof that the problem
is inside R.  If there are any memory contexts that seem to have bloated
unreasonably, knowing which one(s) would be helpful information.
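
To find that memory map, it helps to know where the server is logging. A few standard checks (ordinary PostgreSQL settings, nothing specific to this thread):

    -- Confirm work_mem is not set unreasonably high, and locate the server log
    -- where an out-of-memory error would dump the memory-context statistics.
    SHOW work_mem;
    SHOW logging_collector;
    SHOW log_destination;
    SHOW log_directory;   -- relative paths are under the data directory (SHOW data_directory)
    SHOW log_filename;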

            regards, tom lane

