Re: My Experiment of PG crash when dealing with huge amount of data - Mailing list pgsql-general

From Michael Paquier
Subject Re: My Experiment of PG crash when dealing with huge amount of data
Date
Msg-id CAB7nPqSuLujfaF6rNHDSqiA8b_rr0reSi_mS+PenS6ueS1emvw@mail.gmail.com
In response to My Experiment of PG crash when dealing with huge amount of data  (高健 <luckyjackgao@gmail.com>)
List pgsql-general
On Fri, Aug 30, 2013 at 6:10 PM, 高健 <luckyjackgao@gmail.com> wrote:
> In log, I can see the following:
> LOG:  background writer process (PID 3221) was terminated by signal 9:
> Killed
Assuming that no user on your server killed this process manually, and
that none of your maintenance tasks did so, this looks like the Linux
OOM killer kicking in because of memory overcommit. Have a look here
for more details:
http://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
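For reference, the strictest strategy described there amounts to
disabling kernel overcommit; here is a minimal sketch, with an
illustrative ratio that you would need to adapt to your RAM and swap:

    # Refuse allocations the kernel cannot back, instead of letting
    # the OOM killer pick a victim later.
    sysctl -w vm.overcommit_memory=2
    # Illustrative value only; tune it to your RAM/swap layout.
    sysctl -w vm.overcommit_ratio=80
    # Put the same settings in /etc/sysctl.conf to survive a reboot.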
So check dmesg to confirm that, then use one of the strategies
described in the docs. Also, as you have been doing a bulk INSERT, you
should temporarily increase checkpoint_segments to reduce the pressure
on the background writer by lowering the number of checkpoints taken.
This will also make your data load faster.
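As a rough sketch (the exact OOM message wording depends on your
kernel, and the checkpoint_segments value is only an example, to be
reverted once the load is done):

    # Confirm that the OOM killer fired:
    dmesg | grep -i -E 'out of memory|killed process'

    # In postgresql.conf, temporarily raise checkpoint_segments
    # (the default is 3), for example:
    #   checkpoint_segments = 64
    # then reload so the new value is picked up, with -D pointing
    # to your data directory:
    pg_ctl reload -D /path/to/data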
--
Michael

