Re: Hitting the nfile limit - Mailing list pgsql-hackers

From Michael Brusser
Subject Re: Hitting the nfile limit
Msg-id DEEIJKLFNJGBEMBLBAHCEEKJDFAA.michael@synchronicity.com
In response to Re: Hitting the nfile limit  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
> > I wonder how Postgres handles this situation.
> > (Or power outage, or any hard system fault, at this point)
> 
> Theoretically we should be able to recover from this without loss of
> committed data (assuming you were running with fsync on).  Is your QA
> person certain that the record in question had been written by a
> successfully-committed transaction?
> 
He's saying that his test script did not write any new records, it only
updated existing ones.
My uneducated guess at how an UPDATE may work:
- create a clone of the record to be updated and set the given field(s) to the new values.
- write the new record to the database and delete the original.

If this is the case, could it be that somewhere along these lines
postgres ran into a problem and lost the record completely?
But all of this should happen inside a transaction, so... I don't know...
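
For reference, here's roughly what I mean, using a throwaway table
(the table and values are made up, just to watch what an UPDATE does
to row versions):

    -- hypothetical table, only for observing row versions
    CREATE TABLE t (id int PRIMARY KEY, val text);
    INSERT INTO t VALUES (1, 'before');

    -- ctid is the row's physical location, xmin the inserting transaction id
    SELECT ctid, xmin, * FROM t WHERE id = 1;

    UPDATE t SET val = 'after' WHERE id = 1;

    -- the row now shows a different ctid/xmin: a new version was written,
    -- and the old version remains on disk as a dead tuple until VACUUM
    SELECT ctid, xmin, * FROM t WHERE id = 1;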


As for fsync, we currently go with whatever the default value is,
and the same for wal_sync_method.
Does anyone have an estimate of the performance penalty of
turning fsync on?
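
For the record, this is how we check what the server is currently
running with, plus the matching postgresql.conf lines (the settings
shown are the standard ones; the exact default for wal_sync_method
depends on the platform):

    -- from psql
    SHOW fsync;
    SHOW wal_sync_method;

    # in postgresql.conf
    fsync = true
    wal_sync_method = fsync   # or fdatasync / open_sync / open_datasync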

Michael.


