Re: 1 or 2 servers for large DB scenario. - Mailing list pgsql-performance

From: Heikki Linnakangas
Subject: Re: 1 or 2 servers for large DB scenario.
Msg-id: 479C6142.9060502@enterprisedb.com
In response to: Re: 1 or 2 servers for large DB scenario. (Matthew <matthew@flymine.org>)
List: pgsql-performance
Matthew wrote:
> On Fri, 25 Jan 2008, Greg Smith wrote:
>> If you're seeing <100TPS you should consider if it's because you're
>> limited by how fast WAL commits can make it to disk.  If you really
>> want good insert performance, there is no substitute for getting a
>> disk controller with a good battery-backed cache to work around that.
>> You could just put the WAL xlog directory on a RAID-1 pair of disks to
>> accelerate that, you don't have to move the whole database to a new
>> controller.
>
> Hey, you *just* beat me to it.
>
> Yes, that's quite right. My suggestion was to move the whole thing, but
> Greg is correct - you only need to put the WAL on a cached disc system.
> That'd be quite a bit cheaper, I'd imagine.
>
> Another case of that small SSD drive being useful, I think.
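
For reference, a minimal sketch of the "move just the WAL" approach Greg
describes: on 8.x the usual way is to relocate the pg_xlog directory onto
the faster array and leave a symlink behind. The paths below are only
placeholders, assuming a data directory of /var/lib/pgsql/data and the
RAID-1/battery-backed volume mounted at /mnt/fastwal:

    # stop the server before touching pg_xlog
    pg_ctl -D /var/lib/pgsql/data stop

    # move the WAL onto the battery-backed volume and symlink it back
    mv /var/lib/pgsql/data/pg_xlog /mnt/fastwal/pg_xlog
    ln -s /mnt/fastwal/pg_xlog /var/lib/pgsql/data/pg_xlog

    pg_ctl -D /var/lib/pgsql/data start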

PostgreSQL 8.3 will have an "asynchronous commit" feature, which should
eliminate that bottleneck without new hardware, if you can accept losing
the last few transaction commits in the event of a sudden power loss:

http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
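
The setting involved is synchronous_commit, and it can be turned off per
session or even per transaction rather than cluster-wide, so only the
insert-heavy workload takes the (bounded) risk. A minimal sketch (the
measurements table is just a placeholder):

    -- disable synchronous commit for this session only;
    -- a crash can lose the last few commits, but cannot corrupt the database
    SET synchronous_commit TO off;

    BEGIN;
    INSERT INTO measurements (ts, value) VALUES (now(), 42);
    COMMIT;   -- returns before the WAL record reaches disk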

--
   Heikki Linnakangas
   EnterpriseDB   http://www.enterprisedb.com
