Re: Possible causes for database corruption and solutions - Mailing list pgsql-general
From: Greg Smith
Subject: Re: Possible causes for database corruption and solutions
Date:
Msg-id: 4B285977.5080309@2ndquadrant.com
In response to: Possible causes for database corruption and solutions (Michael Clark <codingninja@gmail.com>)
Responses: Re: Possible causes for database corruption and solutions
           Re: Possible causes for database corruption and solutions
List: pgsql-general
Michael Clark wrote:
> The solution to the problem seemed to be to change the value for the
> wal_sync_method setting to fsync_writethrough from the default of fsync.

I was surprised recently to discover the default wasn't fsync_writethrough on that platform, because it probably should be. There is no other safe mode to run PostgreSQL in on OS X. If you don't invoke the write-through cache flushing code, you can expect databases to get regularly corrupted when people do things like lose power in the middle of writing something, exactly as you're seeing.

> Secondly, I ask about an alternative solution to the corruption
> problem because with preliminary testing we have seen a significant
> degradation in performance. So far the two operations we have noted
> are database creation and database restores.

For the restore case, you might get a good-sized boost in performance without introducing a risk of corruption by turning off the synchronous_commit parameter. That will put you in a position where a committed transaction may not actually be on disk if there's a crash or sudden power outage, but you won't get actual corruption in that case. So fsync_writethrough plus synchronous_commit=off should be no less safe than what you've got now, but probably not as fast as what you're used to.

As already pointed out, there is a trade-off here you can't bargain with: you can either have your data completely safe, or you can execute quickly, but you can't do both. Robust data integrity slows things down, and there's little you can do about it without buying hardware targeted to improve on that.

The database creation issue came up on one of the lists here just the other day as being particularly slow in the situation you're in, and that parameter change doesn't help there. There have been some design change suggestions to improve the situation, but you're not likely to see those in the server code for a year or more.

> I should note here that we have not tuned PG at all.

You could probably see a good-sized performance increase just from increasing checkpoint_segments a bit from its default (3). Since it sounds like you're trying to keep your product's disk space footprint under control, increasing that to around 10 is probably as high as you want to go. You can't really increase shared_buffers a lot on your platform either, lest your users get stuck with weird problems where the server won't start; from what I hear, OS X is fairly hostile to the kernel adjustments you need to make in order to support that.

There's a general intro to things you might tune in the postgresql.conf at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
None of those are going to help you out with slow database creation, but you might be able to pull down the restore times by tweaking some of the parameters there upwards. A large number of the tunables recommended there mainly impact query execution time.

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com   www.2ndQuadrant.com
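P.S. Pulling the specific settings mentioned above together, a minimal postgresql.conf sketch might look like the following; the values are illustrative starting points rather than a tested recommendation for your workload:

    # postgresql.conf sketch -- adjust to taste
    wal_sync_method = fsync_writethrough   # force the drive's write cache to flush on OS X
    synchronous_commit = off               # a crash can lose recent commits, but won't corrupt data
    checkpoint_segments = 10               # up from the default of 3; each WAL segment is 16MB

For a one-off restore you don't necessarily have to change the server-wide setting; synchronous_commit can also be set per session, for example by running pg_restore with PGOPTIONS="-c synchronous_commit=off" in the environment.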