[oops, didn't hit "reply to list" first time, resending...]
On 6/15/10 9:02 AM, Steve Wampler wrote:
> Chris Browne wrote:
>> "jgardner@jonathangardner.net" <jgardner@jonathangardner.net> writes:
>>> My question is how can I configure the database to run as quickly as
>>> possible if I don't care about data consistency or durability? That
>>> is, the data is updated so often and it can be reproduced fairly
>>> rapidly so that if there is a server crash or random particles from
>>> space mess up memory we'd just restart the machine and move on.
>>
>> For such a scenario, I'd suggest you:
>>
>> - Set up a filesystem that is memory-backed. On Linux, RamFS or TmpFS
>> are reasonable options for this.
>>
>> - The complication would be that your "restart the machine and move
>> on" needs to consist of quite a few steps (see the sketch after the
>> list below):
>>
>> - recreating the filesystem
>> - fixing permissions as needed
>> - running initdb to set up new PG instance
>> - automating any needful fiddling with postgresql.conf, pg_hba.conf
>> - starting up that PG instance
>> - creating users, databases, schemas, ...
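
For what it's worth, that whole sequence can be scripted. A rough,
untested sketch -- the mount point, tmpfs size, role name, database
name, and schema file below are all placeholders:

    #!/bin/sh
    # Untested sketch -- adjust paths, sizes, and names to taste.
    set -e
    PGDATA=/mnt/pgram/data       # placeholder mount point

    # recreating the filesystem (memory-backed)
    mount -t tmpfs -o size=2G tmpfs /mnt/pgram

    # fixing permissions as needed
    mkdir -p "$PGDATA"
    chown postgres:postgres "$PGDATA"
    chmod 700 "$PGDATA"

    # running initdb to set up a new PG instance
    su - postgres -c "initdb -D $PGDATA"

    # fiddling with postgresql.conf: durability is explicitly
    # not wanted here, so switch it off
    cat >> "$PGDATA/postgresql.conf" <<'EOF'
    fsync = off
    synchronous_commit = off
    full_page_writes = off
    EOF

    # starting up that PG instance
    su - postgres -c "pg_ctl -D $PGDATA -l /tmp/pgstartup.log start"

    # creating users, databases, schemas, ...
    su - postgres -c "psql -c 'CREATE USER appuser'"
    su - postgres -c "createdb -O appuser appdb"
    su - postgres -c "psql -d appdb -f /path/to/schema.sql"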
How about this: Set up a database entirely on a RAM disk, then install
a WAL-logging warm standby. If the production computer goes down, you
bring the warm standby online, shut it down, and use tar(1) to recreate
the database on the production server when you bring it back online.
You have speed and you have near-100% backup.
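
In outline, the failover-and-copy-back step might look like this
(again untested; hostnames, directories, and the trigger file are
placeholders, and the promotion step assumes a pg_standby-style
restore_command watching a trigger file):

    # on the standby: touch the trigger file so recovery finishes
    # and the standby comes online
    touch /tmp/pgsql.trigger

    # once it is up and consistent, shut it down cleanly
    pg_ctl -D /var/lib/pgsql/data stop -m fast

    # tar the now-consistent cluster back onto the rebuilt RAM
    # disk on the production box
    tar -C /var/lib/pgsql -cf - data | \
        ssh production "tar -C /mnt/pgram -xf -"

    # on production: start PG against the RAM disk copy
    ssh production "su - postgres -c 'pg_ctl -D /mnt/pgram/data start'"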
Craig