Hi Simon/Tom,
Thanks for the reply.
It appears to me that we have to leave fsync ON, as any chance of a badly
corrupted database in production would lead to serious problems.
However, trying the different 'wal_sync_method' settings leads to quite
different operation times (open_datasync is best for speed).
But altering commit_delay from 1 to 100000, I observed no time difference
for the operation. Why is that? Our test consists of 10000 small
transactions, which complete in 66 seconds, that is, about 160 transactions
per second. With commit_delay set to 100000 (i.e., 0.1 second), that should
in theory group around 16 transactions into one commit, but the result is
the same across repeated tests. Am I mistaken about something here?
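For reference, here is a sketch of the postgresql.conf settings under
discussion (values taken from the tests described above; the
commit_siblings line shows PostgreSQL's default, included because
commit_delay interacts with it):

```ini
# postgresql.conf excerpt -- sketch of the settings being tested
fsync = on                       # kept on to avoid corruption on crash
wal_sync_method = open_datasync  # fastest in our tests
commit_delay = 100000            # in microseconds; 100000 = 0.1 second
#commit_siblings = 5             # default; the delay is only applied when at
                                 # least this many other transactions are
                                 # active concurrently at commit time
```

Note that commit_siblings means commit_delay has no effect unless multiple
transactions are committing concurrently, which may matter if the test runs
its transactions over a single connection.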
Cheers and Regards,
Guoping
-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: 28 April 2006 0:58
To: Simon Riggs
Cc: guoping.zhang@nec.com.au; pgsql-performance@postgresql.org; Guoping
Zhang (E-mail)
Subject: Re: [PERFORM] how unsafe (or worst scenarios) when setting
fsync
Simon Riggs <simon@2ndquadrant.com> writes:
> On Thu, 2006-04-27 at 16:31 +1000, Guoping Zhang wrote:
>> Can we set fsync OFF for the performance benefit, have the risk of only 5
>> minutes data loss or much worse?
> Thats up to you.
> fsync can be turned on and off, so you can make critical changes with
> fsync on, then continue with fsync off.
I think it would be a mistake to assume that the behavior would be
nice clean "we only lost recent changes". Things could get arbitrarily
badly corrupted if some writes make it to disk and some don't.
regards, tom lane