Re: explanation of some configs - Mailing list pgsql-performance

From: Robert Haas
Subject: Re: explanation of some configs
Msg-id: 603c8f070902090938j12ae9d59p57c675774a4485c8@mail.gmail.com
In response to: Re: explanation of some configs (justin <justin@emproshunts.com>)
List: pgsql-performance
On Mon, Feb 9, 2009 at 10:44 AM, justin <justin@emproshunts.com> wrote:
> Matthew Wakeling wrote:
>>
>> On Sat, 7 Feb 2009, justin wrote:
>>>
>>> In a big database a checkpoint could get very large before the time had
>>> elapsed, and if the server crashed all that work would be rolled back.
>>
>> No. Once you commit a transaction, it is safe (unless you play with fsync
>> or asynchronous commit). The size of the checkpoint is irrelevant.
>>
>> You see, Postgres writes the data twice. First it writes the data to the
>> end of the WAL. WAL_buffers are used to buffer this. Then Postgres calls
>> fsync on the WAL when you commit the transaction. This makes the transaction
>> safe, and is usually fast because it will be sequential writes on a disc.
>> Once fsync returns, Postgres starts the (lower priority) task of copying the
>> data from the WAL into the data tables. All the un-copied data in the WAL
>> needs to be held in memory, and that is what checkpoint_segments is for.
>> When that gets full, then Postgres needs to stop writes until the copying
>> has freed up the checkpoint segments again.
>>
>> Matthew
>>
> Well then we have conflicting instructions in places on wiki.postgresql.org
> which links to this
> http://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html

Yes, I think the explanation of checkpoint_segments on that page is
simply wrong (though it could be true to a limited extent if you have
synchronous_commit turned off).

...Robert
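
For reference, the settings discussed above all live in postgresql.conf. The
excerpt below is only an illustrative sketch; the values shown are arbitrary
examples, not tuning advice, and defaults vary by release:

    # WAL and checkpoint settings touched on in this thread
    wal_buffers = 1MB              # memory used to buffer WAL records before
                                   # they are written out to the WAL files
    checkpoint_segments = 16       # how many 16MB WAL segment files may fill
                                   # before a checkpoint is forced
    checkpoint_timeout = 5min      # a checkpoint also starts after this much time
    fsync = on                     # flush WAL to disk at commit; turning this off
                                   # risks losing committed work after a crash
    synchronous_commit = on        # if off, COMMIT may return before the WAL
                                   # record is flushed to disk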
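
Asynchronous commit, the case Robert qualifies his answer with, can also be
chosen per session or per transaction rather than globally, for example:

    SET synchronous_commit = off;  -- commits in this session may return before
                                   -- their WAL is flushed; a crash can lose the
                                   -- last few transactions but not corrupt data

That is the one situation in which work that appeared committed can be lost,
which is why the page's description could be "true to a limited extent" there.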
