Re: High checkpoint_segments - Mailing list pgsql-general

From: Scott Marlowe
Subject: Re: High checkpoint_segments
Msg-id: CAOR=d=0ZMEwnukXBo_Xu2yERcVd=i4v-HkQpcSrsyD4gUU2T=g@mail.gmail.com
In response to: Re: High checkpoint_segments (Venkat Balaji <venkat.balaji@verse.in>)
Responses: Re: High checkpoint_segments (Venkat Balaji <venkat.balaji@verse.in>)
List: pgsql-general
On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji <venkat.balaji@verse.in> wrote:
>
> On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt <jay.levitt@gmail.com> wrote:
>>
>> We need to do a few bulk updates as Rails migrations.  We're a typical
>> read-mostly web site, so at the moment, our checkpoint settings and WAL are
>> all default (3 segments, 5 min, 16MB), and updating a million rows takes 10
>> minutes due to all the checkpointing.
>>
>> We have no replication or hot standbys.  We're a consumer-web startup with
>> no SLA and not a huge database, and if we ever do have to recover from
>> downtime it's OK if it takes longer. Is there a reason NOT to always run
>> with something like checkpoint_segments = 1000, as long as I leave the
>> timeout at 5m?
>
>
> Checkpoints will still occur every 5 minutes because of checkpoint_timeout.
> In any case, checkpoint_segments = 1000 is huge: it means up to
> 16MB * 1000 = 16000MB of pg_xlog data, which is not advisable from an I/O
> perspective or a data-loss perspective. Even in the unlikely case that all
> 1000 files fill up in less than 5 minutes, the system could slow down due
> to heavy I/O and CPU load.

As far as I know there is no data loss issue with a lot of checkpoint segments.
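
For context, a minimal sketch of the postgresql.conf settings under discussion;
the values below are illustrative assumptions for a bulk-update window, not
numbers taken from this thread:

    # postgresql.conf (PostgreSQL 9.1-era parameters) -- illustrative values only
    checkpoint_segments = 64            # default is 3; each WAL segment is 16MB
    checkpoint_timeout = 5min           # default; a checkpoint still fires on this timer
    checkpoint_completion_target = 0.9  # spread checkpoint writes over more of the interval

Raising checkpoint_segments only changes how much WAL can accumulate before a
checkpoint is forced; the 5-minute timeout above still triggers checkpoints on
its own.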
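
To see whether checkpoints are being driven by the timer or by running out of
segments, the bgwriter statistics view can be checked (a generic query, not
one posted in this thread):

    SELECT checkpoints_timed,  -- checkpoints started by checkpoint_timeout
           checkpoints_req     -- checkpoints requested early, e.g. segments exhausted
    FROM pg_stat_bgwriter;

A checkpoints_req count that grows much faster than checkpoints_timed during
the bulk update suggests checkpoint_segments is too low for the write load.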
